Hi Voting Theory people! I want to discuss the concept of the “Veil of Ignorance,” and ways that it might be used to mitigate the manipulation of voting systems via strategic voting.
Discussion on Manipulation in General
Manipulation of voting systems can occur in many ways that have been discussed at length on this forum. Manipulation is often described in terms of casting a “dishonest” ballot in order to gain a strategic advantage regarding the probable outcome of an election. This is a good way to acquire an intuition about manipulation, but it is an informal description that fails to capture the full scope. It has been correctly noted on this forum that the notion of an “honest” versus a “dishonest” ballot is poorly defined, and is at best a social construct that is both difficult to measure and up for debate. Furthermore, it is important to categorize manipulation according to scale: as a modeling choice, the predominant strategies are segregated by the boundaries of chosen coalitions of appropriate sizes. Coalitions can range in size from a single individual to a majority of the electorate, and in a game-theoretical analysis it is important to select coalitions that effectively and efficiently model the functional interests, incentives, and influences of the groups within a population. Otherwise it will be almost impossible to reliably model electoral behavior.
In a society, it seems there will always be many diverse coalitions, so which coalitions we choose to analyze will depend on the kind of information we want to glean about the society. An “ideal” voting system, I think, would be one where an analysis based on segregated individual interests could effectively predict outcomes. That is, one in which each voter is expected to cast a ballot that does not vary with respect to the expected ballots cast by other coalitions of voters. We might say that we want ballots cast by individual voters to be invariant under changes in information about the behavior of external coalitions, i.e. the strategy of any single voter is not contingent on the strategies of other voters. Unfortunately, satisfying this goal completely is known to be impossible by Gibbard’s Theorem, but it may be possible to approximate the situation in a way that is stable and persists over time.
Here is a descriptive hypothesis: small coalitions tend to form spontaneously, and then as a reaction, other small coalitions form to counterbalance the advantages gained by the spontaneously formed coalitions. Eventually, those small coalitions will spontaneously coalesce into larger ones, etc., until one is left with a small number of large political parties. The analogy I have in mind is the freezing of a liquid—first, small, random impurities act as nucleation sites for solidification, and then those small solid particles act as more potent nucleation sites for further solidification, until the whole substance is frozen. A “solid” electorate is what we want to avoid. The ideal liquid electorate is impossible by Gibbard’s theorem—but perhaps a slushy or goopy electorate might be attainable. What we need to accomplish to achieve this is to somehow reduce the incentives that exist for individuals to form strategic coalitions, and also reduce the incentives that exist for smaller coalitions to coalesce strategically into larger ones.
As a somewhat obvious example, plurality voting does the exact opposite of what we want—it is essentially a catalyst for the solidification of the electorate, which is what “Duverger’s Law” recognizes. But regardless of the precise mechanisms that cause this unfavorable catalysis (which, in the case of plurality voting, I think can mostly be pinned on “vote splitting”), the catalysis itself is, in my opinion, the crux of plurality’s failure.
The Veil of Ignorance
To try to counteract coalition formation and manipulation, I believe we must be able to somehow cause voters to empathize with each other on a large scale. Smaller coalitions are less likely to form among a healthy group of friends, for example, since each friend tends to naturally empathize with each other friend, and also tends to consider each other friend’s well-being as an influential interest regarding their own behavior. Small coalitions form as small groups empathize amongst themselves, and as their members fail to empathize in a relevant way beyond the confines of their coalition.
The Veil of Ignorance can be used, on one hand, to require voters to put themselves in the position of a “typical” voter. Here is an example to illustrate how this might be accomplished, where manipulations are for the moment ignored:
Suppose a group of friends are all trying to decide on where to go to dinner. There are several different restaurants that are suggested by the group, and trying to be fair they put the matter to a vote. Because they don’t want any bias and want to make sure everybody is reasonably pleased with the result, they do the following: First, each of them (in good faith) scores each restaurant with an integer from 0 to 5, with 0 meaning the restaurant is tied for their least favorite, and differences between scores corresponding to degrees of preference. Next, the friends look only at the distributions of the scores, without reference to the name of the restaurant it belongs to, and vote together on the distribution. The restaurant with the winning distribution is where they all agree to go.
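The anonymization step can be sketched in a few lines of Python. The friends’ names, restaurant names, and scores below are all invented for illustration; the only real content is the procedure itself—collect each restaurant’s multiset of scores, then strip the name before anyone votes:

```python
import random

# Hypothetical ballots: each friend scores each restaurant 0-5.
ballots = {
    "Alice": {"Thai Palace": 5, "Burger Barn": 2, "Sushi Stop": 3},
    "Bob":   {"Thai Palace": 1, "Burger Barn": 4, "Sushi Stop": 3},
    "Carol": {"Thai Palace": 4, "Burger Barn": 0, "Sushi Stop": 5},
}
restaurants = ["Thai Palace", "Burger Barn", "Sushi Stop"]

# For each restaurant, collect the multiset of scores it received.
distributions = []
for r in restaurants:
    scores = tuple(sorted(b[r] for b in ballots.values()))
    distributions.append((r, scores))

# Shuffle so the presentation order gives no hint about which name goes
# with which distribution, then strip the names entirely.
random.shuffle(distributions)
anonymized = [dist for _, dist in distributions]

print(sorted(anonymized))  # [(0, 2, 4), (1, 4, 5), (3, 3, 5)]
```

The friends would then hold a second vote over the entries of `anonymized`, never seeing the restaurant names.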
Notice that by ignoring the name of the restaurant when voting for distributions, the friends have been able to eliminate some degree of self-interest—or rather, they have been able to re-direct it. A reasonable friend will choose a distribution that balances his own personal risks of going to a restaurant that he gave a low score with the reward of going to a restaurant that he gave a high score. Assuming that no voter can identify restaurant names corresponding with certain distributions, each voter will have been placed in a state of “artificial empathy” with all of the other voters.
On the other hand, the Veil of Ignorance can be used to discourage dishonesty! Here is an elaboration: Consider the same friends trying to go out to eat, only this time, they are worried that some friends might score “dishonestly” to manipulate the end result of the vote—for instance, several of the friends might agree to score their mutually favored restaurant as a 3, and then vote for the distribution with the most 3s. So they come up with the following idea: with a 50% chance, they will vote on the distribution themselves, and with the other 50% chance, some impartial algorithm will determine the winning distribution. This way, if any particular friend wants to avoid risk, they must construct a ballot that is likely to yield a decent outcome for them in either situation. If the algorithm operates such that higher scores are more favorable for the possibility of going to each restaurant, then this dramatically discourages the dishonest voting that was described above. There is still the possibility that a voter might be able to identify which of the restaurants correspond to certain distributions, but with many voters this becomes unlikely.
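A minimal sketch of the coin flip follows. The post doesn’t pin down the impartial algorithm, so I’ve substituted one plausible choice—pick a restaurant with probability proportional to its total score, which satisfies the stated property that higher scores are always more favorable. The stand-in for the human vote (just taking the highest-total distribution) and all names and scores are likewise invented:

```python
import random

# Hypothetical score ballots (0-5); options stand in for restaurants.
ballots = {
    "Alice": {"A": 5, "B": 2, "C": 3},
    "Bob":   {"A": 1, "B": 4, "C": 3},
    "Carol": {"A": 4, "B": 0, "C": 5},
}
options = ["A", "B", "C"]

def friends_vote_on_distribution():
    # Placeholder for the human stage (voting on anonymized
    # distributions); here we simply take the highest total score.
    totals = {o: sum(b[o] for b in ballots.values()) for o in options}
    return max(totals, key=totals.get)

def impartial_algorithm():
    # One possible impartial algorithm: select an option at random with
    # probability proportional to its total score, so every point a
    # voter gives a restaurant helps that restaurant's chances.
    totals = [sum(b[o] for b in ballots.values()) for o in options]
    return random.choices(options, weights=totals, k=1)[0]

def decide():
    # Fair coin: half the time the friends decide, half the time the
    # algorithm does. A risk-averse voter must score so that both
    # branches yield a decent outcome.
    if random.random() < 0.5:
        return friends_vote_on_distribution()
    return impartial_algorithm()
```

Under these ballots, the “score everything 3” tactic from the text becomes risky: inflating one restaurant’s scores also inflates its lottery weight for everyone, and deflating the others hurts you if the lottery lands on them.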
To generalize this procedure, it may also be a good idea to keep secret which particular voting system will be used, i.e. to allow voters the information about which systems will potentially be used, but to select one at random without their prior knowledge. Then, if the systems are chosen carefully, the risk of using an inappropriate manipulation tactic for the randomly chosen system may also discourage manipulation in general.
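The randomized-system idea might look like this in outline. The two tallying rules below (highest total and highest median) are illustrative stand-ins for whatever carefully chosen systems would actually be published, and the sample scores are invented:

```python
import random

# The candidate systems are public; which one will be used is not.
def highest_total(scores):
    # Score-voting-style rule: largest sum of scores wins.
    return max(scores, key=lambda o: sum(scores[o]))

def highest_median(scores):
    # Median-based rule (in the spirit of majority judgment).
    return max(scores, key=lambda o: sorted(scores[o])[len(scores[o]) // 2])

SYSTEMS = [highest_total, highest_median]

def run_election(scores):
    # The system is drawn only after ballots are collected, so no voter
    # can tailor a manipulation tactic to the system in advance.
    system = random.choice(SYSTEMS)
    return system(scores)

# Invented sample: option -> list of scores from the voters.
sample = {"A": [5, 1, 4], "B": [2, 4, 0], "C": [3, 3, 5]}
```

Note that on `sample` the two rules disagree (`highest_total` picks “C”, `highest_median` picks “A”), which is exactly the situation where a tactic tuned to one system backfires under the other.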
That’s all I have to say for now. Thanks for reading, and I’m definitely interested in any thoughts on this subject. I don’t think we want to be hooking voters up to polygraphs, but we do want voters to be “honest,” so it’s an issue worth taking seriously.