Kemeny Rank Aggregation

In this blog post, we present a mini-survey of some results on Kemeny rank aggregation.

In social choice theory, preference or rank aggregation is the problem of combining rankings from multiple voters or users into one “central” ranking. Arrow’s impossibility theorem [1] shows that no voting rule can simultaneously satisfy a small set of natural fairness properties, so any rule must trade some of them off. One property worth preserving in such a trade-off is the Condorcet winner criterion. A Condorcet winner is a candidate who beats every other candidate in head-to-head majority comparisons. A Condorcet winner does not always exist, but when it does, it is unique!

The Kemeny optimal ranking [2] satisfies this criterion: whenever a Condorcet winner exists, it is ranked first. A Kemeny ranking is a permutation of the candidates that minimizes the number of pairwise disagreements with the rankings from all voters or users. More precisely, the Kemeny ranking minimizes the Kemeny score given as follows:

KS(\pi) = \sum_{i=1}^n d(\pi, \pi_i),

where we are given n permutations \{\pi_1, \ldots, \pi_n\} on m candidates, and d(\pi, \pi_i) is the Kendall-tau distance: the number of pairs of candidates that are ranked in different orders by \pi and \pi_i. For example, for \sigma_1 = (1, 2, 3, 4) and \sigma_2 = (2, 3, 4, 1), we have d(\sigma_1, \sigma_2) = 3, since the pairs (1, 2), (1, 3), and (1, 4) appear in different orders in \sigma_1 and \sigma_2.
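
To make the definitions concrete, here is a minimal Python sketch (the function names are my own) that computes the Kendall-tau distance and the Kemeny score; it reproduces the d(\sigma_1, \sigma_2) = 3 example above.

```python
from itertools import combinations

def kendall_tau(sigma, tau):
    """Count candidate pairs that sigma and tau order differently.

    Each ranking is a sequence of candidates, most-preferred first.
    """
    pos_s = {c: i for i, c in enumerate(sigma)}
    pos_t = {c: i for i, c in enumerate(tau)}
    return sum(
        1
        for a, b in combinations(sigma, 2)
        if (pos_s[a] < pos_s[b]) != (pos_t[a] < pos_t[b])
    )

def kemeny_score(pi, voter_rankings):
    """KS(pi): sum of Kendall-tau distances from pi to every voter ranking."""
    return sum(kendall_tau(pi, pi_i) for pi_i in voter_rankings)

# The example from the text: the pairs (1, 2), (1, 3), (1, 4) flip.
assert kendall_tau((1, 2, 3, 4), (2, 3, 4, 1)) == 3
```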

The decision version of computing the Kemeny score (i.e., for a given k \geq 0, is there a ranking with Kemeny score at most k?) is NP-complete. Bartholdi, Tovey, and Trick [3] showed this, and Dwork, Kumar, Naor, and Sivakumar [4] strengthened the result to hold even with only 4 voters. We can still resort to computationally expensive algorithms that solve the problem exactly, for example via integer linear programming [5].
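
For a small number of candidates, the exact route is easy to sketch. Below is a hedged formulation of the standard pairwise ILP (in the spirit of [5]), written with the open-source PuLP modeling library as an assumption of mine rather than anything the papers prescribe: the binary variable x[a, b] = 1 means candidate a precedes candidate b, antisymmetry constraints force a total order, and triangle constraints rule out cycles.

```python
from itertools import combinations, permutations

import pulp  # assumed dependency: the open-source PuLP modeling library

def kemeny_ilp(candidates, voter_rankings):
    """Exact Kemeny ranking via the standard pairwise ILP.

    Worst-case exponential solver time, so only practical for small m.
    """
    pos = [{c: i for i, c in enumerate(r)} for r in voter_rankings]
    # w[a, b] = number of voters ranking a before b.
    w = {(a, b): sum(p[a] < p[b] for p in pos)
         for a, b in permutations(candidates, 2)}

    prob = pulp.LpProblem("kemeny", pulp.LpMinimize)
    x = {(a, b): pulp.LpVariable(f"x_{a}_{b}", cat="Binary")
         for a, b in permutations(candidates, 2)}

    # Objective: pay w[a, b] whenever the output puts b before a.
    prob += pulp.lpSum(w[a, b] * x[b, a]
                       for a, b in permutations(candidates, 2))
    for a, b in combinations(candidates, 2):
        prob += x[a, b] + x[b, a] == 1            # a total order
    for a, b, c in permutations(candidates, 3):
        prob += x[a, b] + x[b, c] + x[c, a] >= 1  # no 3-cycles

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    # In the resulting transitive tournament, "wins" determine positions.
    wins = {a: sum(x[a, b].value() for b in candidates if b != a)
            for a in candidates}
    return sorted(candidates, key=lambda a: -wins[a])
```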

There are, however, computationally efficient approximation algorithms for the problem. Here are some examples:

  1. KwikSort (3-approximation): KwikSort is modeled after the QuickSort algorithm for sorting items in a list. Pivotal (ah!) to the algorithm is the selection of a pivot in each recursive call: every other item is placed before or after the pivot according to the majority preference of the input rankings, and the two sides are recursed on (a minimal sketch appears after this list). See the paper by Ailon, Charikar, and Newman [6] for more information.
  2. Borda (5-approximation): Uses Borda’s method, where each item receives points based on its position in each input ranking and items are sorted by total score [7, 8].
  3. LP-Rounding (4/3-approximation): Formulates rank aggregation as an ILP (Integer Linear Program) and “relaxes” the program into an LP (Linear Program). Rounding the LP solution yields a ranking with an improved approximation factor (e.g., see [6]).
  4. (1 + \epsilon)-approximation: Kenyon-Mathieu and Schudy [9] present a PTAS (Polynomial Time Approximation Scheme) for rank aggregation.
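
To give a flavor of KwikSort, here is a minimal sketch, assuming pivot comparisons come from majority preference over the input rankings; the helper names are mine, and ties are broken arbitrarily.

```python
import random

def majority_prefers(voter_rankings):
    """Build a comparator: does a strict majority rank a before b?"""
    pos = [{c: i for i, c in enumerate(r)} for r in voter_rankings]
    return lambda a, b: 2 * sum(p[a] < p[b] for p in pos) > len(pos)

def kwiksort(items, prefers):
    """QuickSort-style aggregation around randomly chosen pivots [6]."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    left = [c for c in items if c != pivot and prefers(c, pivot)]
    right = [c for c in items if c != pivot and not prefers(c, pivot)]
    return kwiksort(left, prefers) + [pivot] + kwiksort(right, prefers)

# Example: aggregate three voters' rankings of candidates 1..4.
rankings = [(1, 2, 3, 4), (2, 1, 3, 4), (1, 3, 2, 4)]
print(kwiksort([1, 2, 3, 4], majority_prefers(rankings)))
```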

Take a look at the references below for more background on rank aggregation and related problems (e.g., clustering).

References

[1] K. J. Arrow. Social Choice and Individual Values. Cowles Foundation
Monographs 12. Wiley, 1963.

[2] J. G. Kemeny and J. L. Snell. Mathematical Models in the Social Sciences.
Blaisdell, 1962.

[3] J. Bartholdi, C. A. Tovey, and M. A. Trick. Voting schemes for which it can be difficult
to tell who won the election. Social Choice and Welfare, 6(2):157–165, 1989.

[4] Cynthia Dwork, Ravi Kumar, Moni Naor, and D. Sivakumar. Rank aggregation methods
for the web. In Proceedings of the Tenth International World Wide Web Conference,
WWW 10, Hong Kong, China, May 1-5, 2001, pages 613–622, 2001.

[5] Vincent Conitzer, Andrew J. Davenport, and Jayant Kalagnanam. Improved bounds
for computing Kemeny rankings. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), pages 620–626, 2006.

[6] Nir Ailon, Moses Charikar, and Alantha Newman. Aggregating inconsistent information: Ranking and clustering. J. ACM, 55(5):23:1–23:27, 2008.

[7] Jean-Charles de Borda. Mémoire sur les élections au scrutin. Histoire de l’Académie
Royale des Sciences, 1781.

[8] Don Coppersmith, Lisa Fleischer, and Atri Rudra. Ordering by weighted number of wins gives a good ranking for weighted tournaments. ACM Trans. Algorithms, 6(3):55:1–55:13, 2010.

[9] Claire Kenyon-Mathieu and Warren Schudy. How to rank with few errors. In Proceedings
of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California,
USA, June 11-13, 2007, pages 95–103, 2007.

Introduction to Sentiment Analysis: A Tale of Distinct Classes

Given some document (in plain text, preferably), how can we determine the emotional state or sentiment of the writer towards a particular entity at the time of writing? Provided we have a sizable corpus of documents, static or streaming, the answer to this question could be useful for several reasons. Ideally, a Sentiment Analysis tool should be able to estimate answers to time-sensitive yet important questions such as: What is the present favorability rating for Obama? How likable was the president just before the election? Knowing the answers to these questions could significantly sway the political strategies of different organizations. Similarly, knowing the answers to other comparably phrased yet diverse questions could be very useful to many organizations, brands, and people.

Let’s back up a little bit. We’ve been talking about determining the sentiment of a document in a very broad sense. What answer do we expect to get out of Sentiment Analysis: an extended description of the writer’s sentiments at the time of writing, or a Yes/No/Neutral answer? It turns out that the latter is easier to produce, and in some cases even preferable: the Yes/No/Neutral answer is much less opaque and much more amenable to analysis. There are other possible answer models, but from here on I’ll concentrate on the Yes/No/Neutral answer.

You can consider the Yes/No/Neutral answer a three-option, one-choice answer. Loosely speaking, this model has three classes: Yes, No, and Neutral. But for reasons to be uncovered shortly, we’ll use only two classes: Yes and No. A document determined to be positive enough, with positivity above a certain threshold, is placed in the Yes class. Likewise, a document that is negative enough, with negativity above a certain threshold, is placed in the No class. Documents in neither the Yes nor the No class can safely be placed in the Neutral class. How do we determine the thresholds? By training our model on pre-labeled data. This framing matters because binary classification is much easier to interpret and more amenable to several statistical models. At this point, we have formulated the problem as a binary classification problem, and statistical classification is a well-known, well-researched problem.
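
As a tiny illustration of the thresholding scheme above, here is a hedged sketch; the threshold values are placeholders that would, in practice, be tuned on the pre-labeled training data.

```python
def classify(p_positive, t_yes=0.7, t_no=0.3):
    """Map a classifier's positive-class probability to Yes/No/Neutral.

    t_yes and t_no are hypothetical thresholds learned from labeled data:
    positivity above t_yes -> Yes, positivity below t_no (i.e., negativity
    above 1 - t_no) -> No, and everything in between -> Neutral.
    """
    if p_positive >= t_yes:
        return "Yes"
    if p_positive <= t_no:
        return "No"
    return "Neutral"

assert classify(0.9) == "Yes"
assert classify(0.1) == "No"
assert classify(0.5) == "Neutral"
```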

Out of a cornucopia of classification methods, the best (judged by popularity, effectiveness, and ease of use), in my humble opinion, are:

  1. Naive Bayes
  2. Maximum Entropy (Logistic Regression)
  3. Support Vector Machines

Using any (or a combination) of these methods, we can classify documents into the Yes/No(/Neutral) classes using a trained model. The method that should be used in production is the one with the highest accuracy on a held-out test set.
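
Here is a hedged scikit-learn sketch of that model-selection step; the toy corpus and any resulting numbers are placeholders, not data from our project.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus; a real one would have thousands of labeled documents.
docs = ["great movie, loved it", "terrible and boring", "best show ever",
        "worst experience", "absolutely wonderful", "not good at all"]
labels = ["Yes", "No", "Yes", "No", "Yes", "No"]

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.33, random_state=0, stratify=labels)

models = {
    "Naive Bayes": make_pipeline(CountVectorizer(), MultinomialNB()),
    "Maximum Entropy": make_pipeline(CountVectorizer(), LogisticRegression()),
    "Support Vector Machine": make_pipeline(CountVectorizer(), LinearSVC()),
}

# Fit each model and keep whichever scores best on the held-out test set.
accuracy = {name: m.fit(X_train, y_train).score(X_test, y_test)
            for name, m in models.items()}
best = max(accuracy, key=accuracy.get)
print(best, accuracy[best])
```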

Nick Jones and I, for our final project in our NLP class, built a Twitter Sentiment Analysis tool. We tried two classifiers: Naive Bayes and Maximum Entropy. Naive Bayes turned out to be more accurate (83.01% accuracy on the test set). Since our training set is a bit old (based on tweets made in 2006), we made a web application that classifies more recent tweets in real time via the Twitter API. For more information, check out our GitHub repo. There are several improvements we hope to make to this tool, but this is a start. The goal of this project was to experiment with some of the methods used in Sentiment Analysis. We sure did.