# An Application of Linear Programming in Game Theory

I took the Combinatorial Optimization class at AIT Budapest (Aquincum Institute of Technology) with David Szeszler, a Professor at the Budapest University of Technology and Economics. We touched on some Graph Theory, Linear Programming, Integer Programming, the Assignment Problem, and the Hungarian method. My favorite class in the course was focused on applying Linear Programming in Game Theory. I’ll summarize the most important aspects of that class in this blog post. I hope this piques your interest in Game Theory (and in attending AIT).

### Basics of Linear Programming

First, I want to touch on some topics in Linear Programming for those who don’t know much about setting up a linear program (which is basically a system of linear inequalities with a maximization function or a minimization function). You can skip this section if you are confident about the subject.

Linear Programming is a field of mathematics concerned with finding the optimum value of a linear objective over a feasible region. One of two questions is asked: find the minimum or find the maximum. The feasible region of a linear program is determined by a set of linear inequalities; for a feasible region to exist at all, that system of inequalities must be solvable.

A typical linear program is given in the form $max\{cx: Ax \leq b\}$, where $c$ is a row vector of dimension $n$, $A$ is an $m \times n$ matrix of constraint coefficients, and $x$ is a column vector of dimension $n$. This is called the primal program, and it is used to solve maximization problems. The dual of this primal program has the form $min\{yb: yA = c, y \geq 0\}$, where $b, A, c$ are as previously defined and $y$ is a row vector of dimension $m$. This is called the dual program; it is derived from the primal and is used to solve minimization problems.

Having introduced primal and dual programs, the next important result is the Duality Theorem. It states that $max\{cx: Ax \leq b\} = min\{yb: yA=c, y \geq 0\}$; in other words, the maximum of the primal program equals the minimum of the dual program (provided that the primal program is solvable and bounded from above). Using this “tool”, every minimization problem can be converted to a maximization problem and vice versa (as long as the initial problem can be set up as a linear program with a finite number of linear constraints and one objective function).

There are linear program solvers out there (both open-source and commercial); most are based on the simplex method. I acknowledge that the summary of Linear Programming given here omits many details — it's a large field that cannot be wholly summarized in a few sentences. For more information, check out the Wikipedia page on linear programming.
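To make the Duality Theorem concrete, here's a small sanity check in Python (my own example, not from the course): we take a tiny LP whose optimal primal and dual solutions are known, verify feasibility of both, and confirm that the two objective values coincide.

```python
from fractions import Fraction as F

# Primal: max {cx : Ax <= b} with
# c = (3, 5), A = [[1,0],[0,2],[3,2]], b = (4, 12, 18).
c = [F(3), F(5)]
A = [[F(1), F(0)], [F(0), F(2)], [F(3), F(2)]]
b = [F(4), F(12), F(18)]

# A primal-feasible point that happens to be optimal: x = (2, 6).
x = [F(2), F(6)]
assert all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] for i in range(3))
primal_value = sum(c[j] * x[j] for j in range(2))

# A dual-feasible point: y = (0, 3/2, 1) satisfies yA = c and y >= 0.
y = [F(0), F(3, 2), F(1)]
assert all(yi >= 0 for yi in y)
assert all(sum(y[i] * A[i][j] for i in range(3)) == c[j] for j in range(2))
dual_value = sum(y[i] * b[i] for i in range(3))

# Duality Theorem: the two objective values coincide.
print(primal_value, dual_value)  # 36 36
```

Weak duality (primal value ≤ dual value for any feasible pair) is easy to see from the check above; the theorem's content is that equality is actually attained.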

### Sample Game Theory Problem

Suppose that my roommate Nick and I are playing a game called Yo!. The game rules are as follows: if we both say Yo!, I get \$2. If I say Yo! but Nick says YoYo!, I lose \$3. On the other hand, if we both say YoYo!, I get \$4. If I say YoYo! but Nick says Yo!, I lose \$3. The rules are summarized in the table below:

| Nick \ Daniel | Yo!   | YoYo! |
| ------------- | ----- | ----- |
| Yo!           | \$2   | -\$3  |
| YoYo!         | -\$3  | \$4   |

The values make up the payoff matrix (from Daniel's point of view). When Daniel gains, Nick loses, and vice versa. A negative value (e.g. -\$3) indicates that Daniel loses and Nick gains. Now the question surfaces: is there a smart way of playing this game so that I always win? Of course, if I could predict Nick's next move all the time, then I'd certainly play to win. But I can't. I must come up with a strategy that reduces my risk of losing to a minimum and increases my chance of winning. In other words, I want to maximize my minimum expected gain. So I wish to know how often I should say Yo! and how often I should say YoYo!. This is equivalent to finding a probability column vector of dimension 2 (one entry for each of the two possible responses, Yo! and YoYo!). Such a probability vector is called a mixed strategy. For example, a mixed strategy for Daniel could be the column vector $(1/4 \ 3/4)^T$: say Yo! a quarter of the time and YoYo! three-quarters of the time. If Nick always says Yo!, my expected gain is $1/4 \cdot 2 + 3/4 \cdot (-3) = -7/4$, so my worst-case expected gain is at most $-7/4$. This mixed strategy doesn't seem optimal! In fact, it's not, as we'll see later!
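A quick way to evaluate any mixed strategy is to compute Daniel's expected gain against each of Nick's pure responses and take the minimum. Here's a small Python sketch of that (the payoff matrix is the one above; the helper name `worst_case_gain` is my own):

```python
from fractions import Fraction as F

# Payoff matrix from Daniel's point of view:
# rows = Nick's choice (Yo!, YoYo!), columns = Daniel's choice (Yo!, YoYo!).
A = [[F(2), F(-3)],
     [F(-3), F(4)]]

def worst_case_gain(x):
    """Daniel's expected gain against Nick's worst response, i.e. min(Ax)."""
    return min(sum(row[j] * x[j] for j in range(2)) for row in A)

# The mixed strategy (1/4, 3/4) from the text:
print(worst_case_gain([F(1, 4), F(3, 4)]))  # -7/4 (Nick should just say Yo!)
```

Maximizing this quantity over all probability vectors `x` is exactly the linear program set up later in the post.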

This kind of Game Theory problem, where we wish to obtain an optimal mixed strategy for the Column player (in this case, Daniel) and an optimal mixed strategy for the Row player (in this case, Nick), is called a two-player, zero-sum game. A mixed strategy for the Column player is an $n$-dimensional probability vector $x$; that is, a column vector with nonnegative entries that add up to 1. The $i^{th}$ entry of the mixed strategy gives the probability that the Column player will choose the $i^{th}$ column. In any two-player, zero-sum game, the problem is to maximize the worst-case gain of the Column player, which is equivalent to finding

$max\{min(Ax) : x$ is a probability vector $\}$ where $A$ represents the payoff matrix

Analogously, the problem of minimizing the worst-case loss of the Row player is equivalent to finding

$min\{max(yA) : y$ is a probability vector $\}$ where $A$ is the payoff matrix

There’s a theorem that states that $max\{min(Ax) : x$ is a probability vector $\}$ = $min\{max(yA) : y$ is a probability vector $\}$. We call this common value $\mu$ the value of the game. This theorem is called the Minimax Theorem.

### Minimax Theorem

The Minimax Theorem was proved by John von Neumann (one of the greatest polymaths of all time, I think). It states that “for every two-player, zero-sum game, the maximum of the minimum expected gain of the Column player is equal to the minimum of the maximum expected loss of the Row player”. In other words, there exist optimal mixed strategies $x$ and $y$ for the Column player and the Row player respectively, and a common value $\mu$, such that

1. No matter how the Row player plays, $x$ guarantees an expected gain of at least $\mu$ to the Column player, and
2. No matter how the Column player plays, $y$ guarantees an expected loss of at most $\mu$ to the Row player.

### Solving the Two-Player, Zero Sum Game

Now let’s try to solve the Yo! game. First, we aim to obtain the mixed strategy for the Column player. Let $x$ be the mixed strategy, where $x = (x_1, x_2)^T$ with $x_1, x_2 \geq 0$ and $x_1 + x_2 = 1$. We wish to find the maximum of $min(Ax)$, where $A$ is the payoff matrix. To turn this into a linear program, we introduce a variable $\mu = min(Ax)$; so $\mu$ is Daniel's worst-case gain, and we wish to maximize $\mu$. Since $\mu$ is the minimum possible value of $Ax$, we obtain the following linear constraints:

• $2x_1-3x_2-\mu \geq 0$
• $-3x_1+4x_2-\mu \geq 0$
• $x_1 + x_2 = 1$
• $x_1, x_2 \geq 0$

Solving the linear program gives us $x_1=7/12$, $x_2=5/12$ and $\mu = -1/12$. So the optimal mixed strategy for the Column player is $x = (7/12 \ 5/12)^T$. That is, if Daniel says Yo! $7/12$ of the time and YoYo! $5/12$ of the time, his worst-case expected gain is $-1/12$: Daniel loses at most \$1/12 per round on average, no matter how Nick plays. According to the Minimax Theorem, this is optimal.
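For a 2x2 zero-sum game whose optimum is fully mixed, you can even bypass an LP solver: setting the two payoff constraints equal to each other (together with $x_1 + x_2 = 1$) yields a closed form. Here's a Python sketch of that shortcut (my own, using exact fractions), which recovers the solution above; it only applies when the game has no saddle point in pure strategies.

```python
from fractions import Fraction as F

# Payoff matrix A = [[a, b], [c, d]] for the Yo! game.
a, b, c, d = F(2), F(-3), F(-3), F(4)

# Setting 2*x1 - 3*x2 = mu and -3*x1 + 4*x2 = mu equal (with x1 + x2 = 1)
# and solving by hand gives the closed form below.
denom = a - b - c + d          # 12
x1 = (d - c) / denom           # 7/12
x2 = 1 - x1                    # 5/12
mu = (a * d - b * c) / denom   # -1/12

print(x1, x2, mu)  # 7/12 5/12 -1/12
```

For larger payoff matrices there is no such closed form in general, and the linear program is the way to go.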

Note that this doesn’t mean that Daniel will always lose the game, only that his average loss is at most \$1/12 per round. If Nick doesn’t play optimally (i.e., doesn’t use his own optimal mixed strategy), Daniel will most likely win!

Nick could obtain his optimal strategy by solving the dual of the primal program to obtain the vector $y$ which will be his optimal mixed strategy.

The Minimax Theorem is an interesting and very useful application of Linear Programming in Game Theory. Two-player, zero-sum games can also be solved using the Nash equilibrium, which is very closely related to the Minimax Theorem but applies to games with two or more players. The Nash equilibrium was first proposed by John Nash. There are many two-player games, including poker, card games, betting games, and so on. As a result, Linear Programming is used in the casino!

# The Triumph of (Almost) Certain Uncertainty

You’ve probably heard of Nate Silver. Maybe? If you fall into the category of people who don’t know who Nate Silver is, there are 5 possible reasons you haven’t heard of him (not necessarily sorted in order of ridiculousness): you don’t use Twitter; if you do, you probably didn’t check your Twitter trends in the days leading up to the 2012 presidential election (or even during the election); you don’t read the news that much; all the people you follow on Twitter either don’t care about the election that much or don’t read the news that much; you aren’t a Republican pundit.

I’d call myself a news addict. I love reading the news whenever I need to take a break; it’s a good way to get out of the flow of work but still learn stuff. But I didn’t know who Nate Silver was until a few weeks before the election — specifically during Hurricane Sandy. During Sandy, I didn’t expect to see anything unrelated to the Superstorm trending on Twitter. But there was Nate Silver’s name. Maybe he’s the director of FEMA, I contemplated. So I researched him a little and found lots of information about him, mostly criticisms (at that moment).

I learned about Nate Silver’s past successes in predicting the performances of Major League Baseball players, and about his near-perfect forecast of the outcome of the 2008 Presidential election. Nate, apparently, uses complex statistical models to predict elections. His models, I think, mostly hinge on calculating appropriately weighted averages of polls. This, surprisingly, is a non-trivial task because of the volatility of election polls and the disparity between different polls. You can’t give the same weight to all polls: the method of polling, polling demographics, party affiliation, the accuracy of past polls, and other factors must be considered. For example, during the election, some polls like Gallup and Rasmussen stood out from all the others: their results predicted that Romney would win the Presidential election by a comfortable margin. So determining how much weight to give incoming polls can be hard. But Nate Silver is on top of his game. He correctly predicted the winner in 49 of the 50 states in the 2008 Presidential election, which won him much fame and acclaim; consequently, he was named one of the World’s 100 Most Influential People by Time magazine. Wikipedia has more information on Nate Silver and his accomplishments. Also, Nate’s book, The Signal and the Noise, was published in September 2012.

But all the fuss about Nate Silver during Hurricane Sandy wasn’t because everyone loves and agrees with him. It was because he was being heavily criticized for the apparent bias in his prediction models. Some notable political pundits (mostly Republicans), including Karl Rove, criticized Nate Silver for trying to help Obama win the election (via the “Mehrabian Polling Snowball Effect”, I guess). On the eve of the election, Nate Silver said there was a >90% chance of Obama winning the election. Some Republicans called it “gobbledygook.” Dean Chambers dismissed Nate Silver as a “thin and effeminate man with a soft-sounding voice” who skews polls in favor of Democrats. Nate, of course, disagreed (in fact, he laughed it off). He knew he was going to have the last laugh.

And he’s the one laughing now. He correctly predicted the winner of all 50 states and the District of Columbia (for the Presidential election). What a boss! His profile as a renowned statistician has never been higher, and his profile as a celebrity is equally high. Twitter is now commandeered by a ‘Drunk Nate Silver’ (@nateDRUNKsilver). And, apparently, Nate Silver’s a witch (or wizard, rather).

Yes, he predicted the results of both the 2008 and 2012 Presidential elections almost perfectly. Yes, he seems to have a skill for separating the “Signal from the Noise” (that’s Nate Silver’s book — at the time of writing, it’s the #2 best-selling book on Amazon). But does this mean that the accuracy of his predictions is what made him stand out? I certainly don’t think so. Last February, The Signal predicted that Obama would win reelection; they predicted the outcomes of the Presidential election accurately in 50 out of 51 states. Why could they do that? Because the question of who’s going to win a Presidential election in a particular state is a 0-1 question. Yet no one’s talking about these folks, even though their predictions were almost as accurate. Why? The more the hype/controversy about your prediction accuracy/magic, the more the hype about your prediction models, and consequently the more popular you are. That’s why Nate’s popular right now: there’s so much hype about the accuracy of his predictions.

# Solution to Problem 3: Gotta Max Profits

Another way of stating the problem is: determine the maximum-value subarray of the array of “crude” stock price changes. This is reducible to the maximum-value subarray problem (oh yeah!).

There are three known ways (specific to my knowledge thus far) of solving this problem:

• brute-force: list all $n(n+1)/2$ (“$(n+1)$ choose 2”) possible subarrays and determine which subarray has the maximum value. $O(n^2)$ running time (using running sums).
• divide-and-conquer: there’s a divide-and-conquer algorithm that runs in $O(n \log n)$ time. It’s kind of cool but not as fast as we’d want it to be.
• dynamic programming: this solution runs in $O(n)$ time. What would we do without dynamic programming?

Below is the solution (using dynamic programming) to the maximum subarray problem in Python:

https://gist.github.com/3734606
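In case the embedded gist doesn't render, here's a minimal sketch of the $O(n)$ dynamic programming solution, also known as Kadane's algorithm (the function name `max_subarray` is mine and may differ from the gist's):

```python
def max_subarray(values):
    """Return the maximum sum over all nonempty contiguous subarrays.

    Uses the recurrence M(j) = max(M(j-1) + A[j], A[j]) described in the
    next section; runs in O(n) time and O(1) extra space.
    """
    best = current = values[0]
    for v in values[1:]:
        # Either extend the best window ending at the previous index,
        # or start a fresh window at this index.
        current = max(current + v, v)
        best = max(best, current)
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6 (subarray [4, -1, 2, 1])
```

Tracking the start/end indices of the best window is a small extension: reset the tentative start whenever a fresh window begins, and record the indices whenever `best` improves.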

### Recurrence relation for the maximum subarray problem

Given a sequence of $n$ real numbers $A(1) \ldots A(n)$, determine a contiguous subsequence $A(i) \ldots A(j)$ for which the sum of elements in the subsequence is maximized.

For any $j$ where $1 \leq j \leq n$, let
$M(j)$ = the maximum sum over all windows (contiguous subsequences) ending at $j$.

$M(j)$ = $max \{ M(j-1) + A[j], A[j] \}$. So either we extend the optimal window ending at $j-1$ or start a fresh window with $A[j]$.
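To see the recurrence in action, this short Python trace (sample array of my own choosing) tabulates $M(j)$ and reads off the answer as the maximum over all $j$:

```python
def window_sums(A):
    """Tabulate M(j) = max sum over all windows ending at j, per the recurrence."""
    M = [A[0]]
    for j in range(1, len(A)):
        # Extend the optimal window ending at j-1, or start fresh at j.
        M.append(max(M[j - 1] + A[j], A[j]))
    return M

A = [3, -5, 4, 2, -1]
M = window_sums(A)
print(M)       # [3, -2, 4, 6, 5]
print(max(M))  # 6 -> the best window is A[2..3] = [4, 2]
```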

Ciao!