For most clustering problems, our true goal is to classify the points correctly, and commonly studied objectives such as k-median, k-means, and min-sum are really only a proxy. That is, there is some unknown correct clustering (grouping proteins by their function or grouping images by who is in them), and the implicit hope is that approximately optimizing these objectives will in fact produce a clustering that is close pointwise to the correct answer. In this paper, we show that if we make this implicit assumption explicit---that is, if we assume that any c-approximation to the given clustering objective F is epsilon-close to the target---then we can produce clusterings that are O(epsilon)-close to the target, even for values of c for which obtaining a c-approximation is NP-hard. In particular, for the k-median and k-means objectives, we show that we can achieve this guarantee for any constant c > 1, and for the min-sum objective we can do so for any constant c > 2. Our results also highlight a difference between assuming that the optimal solution to, say, the k-median objective is epsilon-close to the target and assuming that any approximately optimal solution is epsilon-close to the target. In the former case, the problem of finding a solution that is O(epsilon)-close to the target remains computationally hard, and yet for the latter we have an efficient algorithm.
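To make the notion of "epsilon-close" concrete, here is a minimal sketch (our own illustration, not the paper's definition verbatim): two k-clusterings of the same point set are epsilon-close if, after the best matching of cluster labels, at most an epsilon fraction of points are assigned differently.

```python
# Sketch of pointwise closeness between two clusterings, assuming
# clusterings are given as lists of labels in {0, ..., k-1}.
from itertools import permutations

def clustering_distance(a, b, k):
    """Fraction of points that must be reassigned to turn clustering
    `a` into clustering `b`, minimized over label permutations.
    (Brute-force over permutations; fine for small k.)"""
    n = len(a)
    best = n
    for perm in permutations(range(k)):
        mismatches = sum(1 for x, y in zip(a, b) if perm[x] != y)
        best = min(best, mismatches)
    return best / n

# Toy check: two 2-clusterings of 8 points whose labels are swapped
# and which genuinely disagree on a single point.
target = [0, 0, 0, 0, 1, 1, 1, 1]
found  = [1, 1, 1, 1, 0, 0, 0, 1]
print(clustering_distance(target, found, 2))  # → 0.125
```

The approximation-stability assumption then says: every clustering whose objective value is within a factor c of optimal has distance at most epsilon from the target under this measure.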
Many natural games have both good and bad Nash equilibria. In such cases, one could hope to improve poor behavior by a ``public service advertising campaign'' encouraging players to follow a good equilibrium, and if every player follows the advice then we are done. However, it is a bit much to assume that everyone will follow along. In this paper we consider the question of to what extent such an advertising campaign can cause behavior to switch from a bad equilibrium to a good one even if only a fraction of people actually follow the given advice, and do so only temporarily. Unlike in the ``value of altruism'' model, we assume everyone will ultimately act in their own interest. We analyze this question for several important and widely studied classes of games, including network design with fair cost sharing, scheduling on unrelated machines, and party affiliation games (which include consensus and cut games). We show that for some of these games (such as fair cost sharing), a random alpha fraction of the population following the given advice is sufficient to get a guarantee within an O(1/alpha) factor of the price of stability for any alpha > 0. However, for other games (such as party affiliation games), there is a strict threshold (in this case, alpha < 1/2 yields almost no benefit, yet alpha > 1/2 is enough to reach near-optimal behavior), and for still others, such as scheduling, no value alpha < 1 is sufficient.
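The fair cost-sharing phenomenon can be sketched with a toy instance (this exact game is our illustration, not the paper's construction): n players each choose either a private link of cost 1 or a shared link of cost C whose cost is split equally among its users. All-private is a bad equilibrium (joining the shared link alone costs C > 1), while all-shared is the good one (each pays C/n). Seeding an alpha fraction onto the shared link can make joining a best response for everyone else.

```python
# Toy fair cost-sharing game: best-response dynamics after an
# "advertised" fraction of players is seeded onto the shared link.
def best_response_dynamics(n, C, seeded):
    """`seeded` players start on the shared link (the advised
    fraction); everyone else starts on a private link of cost 1.
    Iterate best responses until no player wants to switch."""
    on_shared = set(seeded)
    changed = True
    while changed:
        changed = False
        for p in range(n):
            k = len(on_shared)
            if p in on_shared:
                # current cost is C/k; the private link costs 1
                if C / k > 1:
                    on_shared.remove(p); changed = True
            else:
                # joining makes k+1 users, so the shared cost is C/(k+1)
                if C / (k + 1) < 1:
                    on_shared.add(p); changed = True
    return on_shared

n, C = 10, 6
print(len(best_response_dynamics(n, C, seeded=[])))        # → 0 (stuck at the bad equilibrium)
print(len(best_response_dynamics(n, C, seeded=range(7))))  # → 10 (everyone migrates)
```

With no advice, no single player will pay the full cost C alone, so the bad equilibrium persists; once enough players are seeded, the per-user share drops below 1 and the rest follow by self-interest alone.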