Spinoffs from the "Hit-Piece" Discussion

Over in the “hit-piece” discussion that has unfortunately become necessary, RobBrown says

With STAR, you just give Nader 5, Gore 4, and Bush zero.

This ignores the Gibbard Theorem, which says that no voting system relieves you of taking into account your estimate of where other voters stand, if you want to bring to bear all your power.

Sara_Wolf says

With Score the best strategy is generally to give all your candidates
min or max scores, while also being sure to max-score your lesser-evil
if you don’t think your favorite can win.

No, I’m pretty sure this is not the best strategy. The best strategy is to give your favorite candidate the top score; to give your compromise candidate a score .99 of the way from the minimum to the maximum if you think your favorite would attract only .01 of the vote under choose-one plurality; and to give the other candidates the minimum. This strategy reports support for your favorite while still providing substantial opposition to the worst candidates (as you judge them). If a sufficient proportion of the electorate agrees with your stances toward the candidates and follows this same strategy, your candidate will win. But if you vote Approval-style, your candidate will merely tie the compromise candidate.

I want to end by stating that Score Voting is a great voting system.
That it’s a lot better than Approval, . . .

I contend that all Range Voting is equivalent, regardless of whether it is Approval or finer-grained. My first ground is that over the long term, voters figure out pretty good strategies, by a Darwinian process: we can see that with the English system (choose-one plurality, FPTP), the Americans have figured out its main strategy, which is to discourage moral candidates from running, on the grounds that they would split the vote with the lesser evil, resulting in the election of the greater evil. My second ground is that if the range is too coarse, voters can consult a random (or pseudorandom) number and approve the candidate in question with a probability that reflects the score they would like to give.
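The coarse-ballot workaround in that last sentence is easy to sketch. Below is a toy illustration (the 99-out-of-100 score is a made-up example, not anything from the thread): approving with probability equal to the intended fractional score makes the expected Approval total match the fine-grained Range total.

```python
import random

def probabilistic_approval(score, max_score, rng):
    """Approve a candidate with probability equal to the fractional
    score you would have given on a fine-grained Range ballot."""
    return rng.random() < score / max_score

# Hypothetical voter: would give the compromise candidate 99 out of 100.
rng = random.Random(42)
trials = 100_000
approvals = sum(probabilistic_approval(99, 100, rng) for _ in range(trials))

# Across many like-minded voters, the average approval rate matches the
# intended fractional score, so coarse Approval reproduces fine-grained
# Range totals in expectation.
print(approvals / trials)
```

The randomness only matters in aggregate: any single voter's ballot is still all-or-nothing, but the electorate's totals converge on the fine-grained scores.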

Wolk and respondents mention a contention that Range Voting provides incentives that some regard as a deal-breaker. I don’t understand this contention. So far as I have heard, the only incentive Range provides is to exaggerate support for a candidate relative to that for the other candidates, but never to cross one candidate over another. Is there an argument running around that says it gives an incentive to cross one over another?

Of course I think the Venn diagram errs where it separates Approval from finer-grained Range (i. e. Score). I argued above that they are equivalent once the voters figure out strategy, and I argued that the voters eventually will figure out strategy.

I don’t see why Accurate is a distinct concept from Equal. It seems to me that “winners accurately reflect the will of the people” exactly when the people have equal power to one another in determining the winner.

Honest is nonsense given the Gibbard theorem.

RobBrown says

the gold standard of “fairness” is whether it selects the “first choice of
the median voter”.

Any kind of first-choiceism raises my suspicions because it smacks of IRV.

Who is the “median voter” under an “Everybody-Loves-Raymond” scenario? For example, suppose there are two voting factions and three candidates: faction F0 consists of 51% of the voters, and their true scores are A 1, R .99, B 0; the position of the remaining faction is A 0, R .99, B 1. Who is the median voter, and why should she have her first choice when clearly R gives a much higher VSE than A?
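The utility arithmetic in this scenario can be checked directly. A throwaway sketch (the faction labels F0/F1 and the dictionary layout are just my own bookkeeping for the numbers above):

```python
# True scores for the "Everybody-Loves-Raymond" scenario: two factions,
# three candidates, faction F0 holding 51% of the electorate.
scores = {
    "A": {"F0": 1.00, "F1": 0.00},
    "R": {"F0": 0.99, "F1": 0.99},
    "B": {"F0": 0.00, "F1": 1.00},
}
weights = {"F0": 0.51, "F1": 0.49}

# Population-weighted average utility -- the quantity VSE-style metrics reward.
avg_utility = {
    cand: sum(weights[f] * u for f, u in by_faction.items())
    for cand, by_faction in scores.items()
}

# R's average utility (0.99) dwarfs A's (0.51) and B's (0.49),
# even though not a single voter ranks R first.
print(avg_utility)
```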

OK, reading down, I see that for evaluating Accuracy, starvoting.us broadens the criteria beyond mere equality, including in particular the Condorcet criterion (or constraint) and VSE simulations. The trouble with VSE (or Bayesian Regret, to look at basically the same concept from the other side) is that it starts with an assumption about what strategy the voters will use. I don’t know whether in each case this assumption is good. And as for Condorcet, it relies on ranking and throws away strength of preference within a ranking.

I want to suggest that STAR and Approval are equivalent. Both pass the Frohnmayer balance test, and neither makes a voter’s valuation of one candidate depend on her valuation of another; thus both give the voters equal power, one voter to another. Suppose a given electorate, faced with a given field of candidates, would elect different candidates under STAR and under Approval. Then we would have to conclude that at least one of the outcomes unfairly disadvantages one voter and unfairly advantages another, because that system did not produce the same outcome as the other system, which is fair by virtue of passing the balance constraint and allowing independent valuations of the candidates. But the same consideration applies to the first system, so they cannot produce different outcomes. Therefore, they are equivalent.

Participants keep writing as though there were a background assumption that for Approval, a threshold-based strategy is what people will use or what would work best for them. I don’t think this is the case. The best strategy is to decide how you would vote in fine-grained Range and simulate this using probability when you decide whether to approve.

I’d guess that with STAR, there are more scenarios where you don’t have to vote strategically than with Score. Certainly, in the scenario you quoted, min-maxing (or your “99% frontrunner strategy,” which I address below) wouldn’t seem necessary in STAR (though if cloning occurred, it would). On top of that, it may be worth measuring, even when a voter is incentivized to vote strategically, how much they lose by voting honestly in either system.

If voters in general judge your favorite to be better than the lesser evil, then it’s likely you’d have at least one voter who bullet votes or otherwise reports that honest scored preference, rather than approving both candidates. The odds of an exact tie are worth ignoring essentially because of this type of “noise” in the electorate, as far as I can see, so that alone doesn’t make min-maxing a bad idea.

This is a tangential point, but most scored methods don’t allow a voter to indicate maximally strong transitive pairwise preferences in the same way a ranking allows for. That is, Condorcet allows you to maximally boost A over B when looking at the A vs B matchup, while giving B maximal support over C in the B vs C matchup, whereas only one of those two things can happen if using a scored ballot (though with Score, the IIA compliance guarantees that whichever decision the voter makes will be respected, which is not necessarily true with ranked methods). So in terms of expressiveness, a Condorcet method where voters could indicate either weak (rated) or strong pairwise preferences might be good enough for you.

I don’t know where the data for it can be found, but according to comments from Keith Edmonds on the hitpiece post, it appears that there were a lot of non-normalizing voters in the recent STAR voting Oregon Independent Party poll.
To me, the most basic argument that not every voter will be strategic is that a lot of people either don’t vote or do so begrudgingly; why would all such voters make an all-or-nothing decision in terms of how much of their voting power to use? In fact, providing them an option to weaken their own votes might be exactly what drives them to the polls, because it allows them to avoid the moral dilemma of “am I giving too much support to these bums by giving my full support to any of them?”

Not in vote-for-or-against with sufficiently many candidates.


Just wanted to point out that one way to handle the strategic 100/99 scoring strategy is to have an explicit approval cutoff. Methods such as Approval Sorted Margins or Smith//Approval use rank (inferred from rating) as much as possible, but fall back on the explicit approval cutoff in the case of cycles.

I don’t see how this black-and-white perspective is particularly helpful. No system is perfect – we know that.

The question is how much do you benefit by knowing how others are likely to vote, and how difficult is it to gain something by attempting to take this into account?

At one end of the spectrum is plurality, where strategy tends to be easy and obvious: generally you should try to guess who is in the top two and vote for your favorite of those. But if you think there is a chance that your guess is wrong, you can take that into account, and factor in exactly how much you like or dislike each candidate, balance it against the probabilities, etc. (That’s exactly what I did when I voted for Perot in ’92, and I think my vote was rational even though it wasn’t for one of my predicted top two candidates.)

At the other extreme, under, say, a Condorcet system (especially one with a good way of resolving cycles), I can’t imagine a large number of people doing anything but stating their sincere preferences. And for those who do anyway, my bet is that they wouldn’t gain much, if anything, by doing so.

Anyway, the point is, just because you can make black-and-white statements such as “no voting system relieves you of taking into account your estimate of where other voters stand, if you want to bring to bear all your power,” doesn’t mean that that is the last word on it. Far from it. Some systems are dramatically better than others in this regard.

I’ll say this example is very unrealistically contrived, but yes I would argue that if (and that is a huge if) the electorate so neatly divides into two camps, one with 51% and one 49%, the median voter is in the first camp.

In any real election, with Raymond getting such wide support from both sides, he’d get at least a few voters rating him above both of the other candidates. (Did Raymond himself put someone above him?) And it would take only a few for him to win in a median-seeking election.

Let me go back to my “ideal” election of voting for a numerical value, such as the temperature to set the office thermostat. You can vote by simply stating your preferred temperature, and the median value is chosen. This would actually be a good example of the type of election where “estimating where others stand” does not help you “bear all of your power”. It just doesn’t.

And yes, you could imagine a situation where 51% of the office want 65 degrees and 49% want 90 degrees. You can imagine it, but that isn’t realistic at all. Realistic is for there to be people in the middle ground who are happy with 70 degrees or so. (If not, I guess it is time to have two separate offices, right?)

I think there are a lot of further insights that can be gained by continuing to talk about such “ideal” or “pure” voting scenarios. They are good for understanding our goals, and provide a baseline for understanding more “messy” situations, with finite numbers of real candidates and lots of different ideological dimensions, some of which correlate with one another to varying degrees.

In fact, as a thought experiment, you could try to imagine the “Everybody Loves Raymond” vote being 3 nominated temperatures for the office thermostat. A is 65, R is 70, C is 75. Does it seem realistic to you that everyone would be almost as happy with 70 as with either 65 or 75, but no one would pick 70 as first choice? That would be an absurd distribution of preferences.

Another thought experiment regarding the temperature vote: someone might argue that, if you use the average rather than the median to determine the winner, that might raise the VSE. After all, average takes into account people who are out on the extremes, while median simply factors them the same as people with more moderate preferences… they each have equal pull under median. Anything that optimizes for high VSE is going to measure those extreme views.

Even if it has a lower VSE, in my mind median is more “fair,” and is inherently more stable in a game-theoretical sense. It better accomplishes “one person one vote” in the sense that all voters have equal power to pull it in their preferred direction. Having more extreme views – or pretending to in order to gain an advantage – doesn’t give you more voting power.

So whenever we get deep into the discussion of VSE and issues such as “no voting system is perfect”, I always ask people to look at this sort of simple numerical election first. And I’ll ask this: do you agree that, if voting for a numerical value (such as the temperature to set the thermostat, or the membership dues of a club), selecting the median can’t be improved upon?
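The immunity of the median to exaggeration is easy to demonstrate with a toy electorate (the five temperatures below are invented for illustration): a voter who wants 72 and exaggerates upward moves the mean toward her true preference, but cannot budge the median at all.

```python
from statistics import mean, median

# Honest preferred temperatures of five hypothetical voters.
honest = [65, 68, 70, 72, 75]

# The voter who wants 72 tries exaggerating upward to 90.
exaggerated = [65, 68, 70, 90, 75]

# Under median, the exaggeration moves nothing: the outcome stays 70.
print(median(honest), median(exaggerated))   # 70 both times

# Under mean, the same exaggeration drags the outcome from 70 to 73.6,
# closer to the exaggerator's true preference of 72 -- so mean rewards
# the dishonesty while median does not.
print(mean(honest), mean(exaggerated))
```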

You don’t have to get zero first-place votes for this pathology. Consider this instead:
51% A=10 B=[7-9] C=0
15% B=10 A=[0-2] C=[0-2]
34% C=10 B=[7-9] A=0

Brackets indicate voters spread their votes randomly in that range. B still gets 0.85(8)+0.15(10)=8.3 on average which is far ahead of A.
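Taking the midpoint of each bracketed range, the averages work out as claimed. A quick sketch:

```python
# The three blocs above, with bracketed ranges replaced by their midpoints
# ([7-9] averages 8, [0-2] averages 1).
blocs = [
    (0.51, {"A": 10, "B": 8, "C": 0}),
    (0.15, {"B": 10, "A": 1, "C": 1}),
    (0.34, {"C": 10, "B": 8, "A": 0}),
]

# Population-weighted average score per candidate.
averages = {c: sum(w * s[c] for w, s in blocs) for c in "ABC"}

# B averages 8.3 versus 5.25 for A and 3.55 for C, so B wins the score
# tally even though A holds 51% of the first-place support.
print(averages)
```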

That does work well for some situations but I think the whole argument breaks down when you parse it into anything with more than one dimension. For example:

  • In a Condorcet cycle (~33% A>B>C, ~33% B>C>A, ~33% C>A>B), who is the “median” voter? (Approval Voting version: ~25% vote AB, ~25% vote BC, ~25% vote CD, ~25% vote AD.)
  • If a village that is built around a large circular lake votes on where to put the capital, where is the “median”?

Just on this one bit - going for the mean here could lead to very strange results, because just one voter going for an extreme value could lead to the deaths of everyone in the office!

None of the mean-based voting systems allow such a pull on the average by one voter.

Yes and I explain that here: https://pianop.ly/voting/median.html

Obviously, no one is going to have a system that is so trivially gamable.

The point is that, the more you try to allow the system to account for “degree of difference from median”, the more you are going to encourage exaggeration and otherwise damage the “each voter has equal pull” criterion.

You can make it less and less contrived, and it becomes less and less a problem.

You are describing a scenario where more than half of the people choose A as first choice, and saying it is a pathology because A is chosen. I don’t see that as a real problem.

You’ve still got some bizarre polarization, in that while 51% love A, 49% hate A. Likewise, 34% love C, and all the rest really hate C. And then all those centrists who like B the best really hate A and C. You’ve got no one who likes A best yet dislikes B much at all. You’ve got no one who likes C best yet dislikes B much at all. You’ve got no one who likes B most (clearly a candidate with broad appeal) yet has a strong preference for either A or C.

I’d love to see some kind of narrative that could explain why the voters in the various camps would feel this way. To me it is a nonsensical distribution of preferences, but choosing the candidate that most people like better than any others isn’t particularly wrong.

How does this sort of thing happen? It’s easy enough to write down on paper, but it’s hard to imagine in the real world, unless you just assume that the voters have extremely weak feelings about all the candidates and it is basically a three-way tie. If you really want to know who the median voter is in such an election, we need more information beyond just the rankings (such as scores, as you gave above, but not all normalized to have a min and a max). The median voter might well be someone who says “I am pretty close to being equally OK with all 3 of them,” e.g. A: 51, B: 52, C: 49. If you insist that we decide exactly who the median is based only on the rankings, though, we’d need something more accurate than “~33%”, because otherwise you’ve simply described the degenerate case of a 3-way tie.

And as I’ve already said about median, it can be hard to determine who the “median voter” is in complex real-world elections. It’s a hypothetical concept. But think of it like the Condorcet criterion, which says “if there is a Condorcet winner, the method should choose it.” A “median criterion” would say “if it can be determined who the median voter is, the method should choose that voter’s first-choice candidate.”

This is a complete misinterpretation of the concept. The median voter would not choose to put the capital in the middle of a lake, would they? There is a reason it is worded “first choice of the median voter” rather than “the median candidate.”

The state-capitals analogy always assumes there is a finite set of possible candidates. If you want to simplify the example as if you could put the capital anywhere, fine, but you need to choose one or the other.

Number of dimensions doesn’t matter. I won’t go into more detail now, as this is getting too long already and I just don’t have time. Maybe I’ll come back to that. (FYI, I’ve written a pretty sophisticated graphical vote simulator that models this sort of thing with more than one dimension, but I’m still polishing it up before showing it around)

Look, it is possible that an individual person could say this: “If you offer me strawberries or a banana, I’ll choose the strawberries. If you give me choice between a banana and an apple, I’ll pick the banana. But if given a choice between apples and strawberries, I will always choose the apple.” You might say it is impossible for a person to feel this way, but I know, as the dad of a six year old, that it is perfectly possible. :slight_smile:

You could try to justify it, maybe by saying that the enjoyment of eating one fruit is somehow altered by which fruit it was presented with. Or whatever.

But I’d also argue that such a person is being some combination of irrational and just difficult. You’ll go crazy trying to accommodate people who are being difficult that way.

I think your examples above are pretty much the same thing. You’ve got a population that is just being difficult. Or, maybe I should say, you are being difficult by proposing that these examples are actually realistic possibilities that a voting system should try to accommodate. Please don’t take this as some sort of personal attack – I don’t really think you are intentionally being difficult. But I think you should look closely at those examples, try to come up with a narrative as to why the preferences are distributed the way you describe, and consider the possibility that they are kind of like that difficult kid who wants to deny the transitive law of order ( https://www.expii.com/t/transitive-property-of-order-4273 ).

Because choosing A leaves a solid 34% of the electorate maximally dissatisfied and an additional 15% almost as dissatisfied, while choosing B leaves everyone happy.

Are you going to just endlessly nit-pick every possible scenario I throw down until I give up?

All right, here’s something more realistic.
There are 2 main issues X and Y.
One faction really wants X, but moderately dislikes Y. (But they would choose X+Y over nothing.)
Another faction really wants Y, but moderately dislikes X.
A third faction wants neither.
Assume that any two of those three factions together form a majority. (Independents can exist, but they have to be small in number.)

  • A majority (factions 1 and 3) would be against issue Y.
  • A majority (factions 2 and 3) would be against issue X.
  • A majority (factions 1 and 2) would rather have both X+Y pass than have neither pass.
  • There are two Condorcet cycles here: XY > 0 > X > XY and XY > 0 > Y > XY. (0 = neither issue.)
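A quick sketch confirms both cycles. Faction 3’s full ranking isn’t specified above, so the O > X > Y > XY ordering below is an assumption; only its placement of O (neither) first and XY last matters for the claimed cycles.

```python
# Faction sizes (any two form a majority) and full rankings over the four
# outcomes: both issues pass (XY), X alone, Y alone, neither (O).
factions = [
    (34, ["X", "XY", "O", "Y"]),   # really wants X, moderately dislikes Y
    (33, ["Y", "XY", "O", "X"]),   # really wants Y, moderately dislikes X
    (33, ["O", "X", "Y", "XY"]),   # wants neither (X-vs-Y order assumed)
]

def margin(a, b):
    """Votes preferring outcome a over b, minus the reverse."""
    return sum(size if ranking.index(a) < ranking.index(b) else -size
               for size, ranking in factions)

# Both claimed cycles: XY > O > X > XY and XY > O > Y > XY
# (every printed margin is positive, so each arrow is a majority).
print(margin("XY", "O"), margin("O", "X"), margin("X", "XY"))
print(margin("O", "Y"), margin("Y", "XY"))
```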

I anticipated this kind of response, but you still have an angular distribution of voters. How do you find the “median” point on a circle?

Reminds me more of the framing effect or the availability bias than anything. The person is always picking the first item offered.

I think the best way to break through our polarized times (after we somehow install Score Voting) is to run more moderate candidates that appeal across party lines. Because there is no fear of vote-splitting, people will have to at least consider giving the centrists a shot (you know, so the other party doesn’t win…)

I’ll only address this one comment, partly because I don’t have more time, and partly because it is really, really important.

I’m sorry if you feel nitpicked, but I’d say you are nitpicking the median concept, and doing so disingenuously by continuing to produce opaque examples that are at best unrealistic, and at worst, absurd.

I keep trying to send people back to the super simple, super pure temperature voting example, for which I built an app and made a video to graphically demonstrate, and I do this for a reason. It makes the core concepts clear. What you are doing with your examples I would consider sleight of hand. Your examples seem intentionally obscure, so that people are fooled into thinking there is a bigger problem than there is, without being quite sure why, because it isn’t clear how unrepresentative the examples are of any real-world situation.

( and yes I’ve seen people do this same thing since I started participating in election methods mailing list almost 20 years ago, and I think it is one of the main reasons things go round and round and we don’t make progress )

What you are doing is providing examples of scenarios that get increasingly unlikely as the number of voters increases. Just as no voting system can magically eliminate the issue of “what if there is an exact tie?”, ties at least become less and less likely to be a problem as the number of voters increases. Condorcet cycles are another example of something that becomes increasingly unlikely as the number of voters increases. [1] Likewise, your examples, where a small number of hard-edged groups of people all seem to think and vote exactly the same as one another, don’t tend to happen with large numbers of voters.

So using that temperature voting example [2], I’ll explain what I see you to be doing with your examples. It is like you are asking “what if 51% want 65 degrees and 49% want 75 degrees?” In that case, someone might argue that it doesn’t make sense to choose 65 degrees (the median), when any reasonable person can see that around 70 degrees would be a much better option.

And the naive point of view would be that yes, there’s a serious problem with median (compared to, say, average). But in the real world it isn’t a problem, because with larger numbers of voters you would have people in the middle, which would mean that the median would almost always be a middle-ground result, probably right around 70 degrees. And that would probably be true with a dozen voters. Such supposed problems get less and less pronounced when you have hundreds or thousands of voters.

With real candidates, playing out over a number of years, this becomes even more true. Potential candidates see that the middle ground is the sweet spot, and try to accommodate that middle ground with their platforms and policies. Both voters and candidates would converge on that middle ground with a median seeking method.

Again, these are things that your examples fly in the face of. You have people unnaturally clustering into well-defined groups, so that no individual voters sit near any sort of middle ground. Yes, median performs poorly when this happens, but this doesn’t happen in real-world elections; it only happens when people are trying to come up with contrived examples.

[1] It is my hypothesis that Condorcet cycles aren’t actually representative of how the population really feels about any realistic scenario, they are simply an odd degenerate case that gets increasingly unlikely as the number of voters increases. I explore this in another thread, which I see you’ve responded on.

[2] Although I don’t think you’ve stated it explicitly, I would hope that you would agree that voting for the median (in that example) makes sense while voting for the average does not. I will even take that further and assert that voting for the median in such a numerical vote is as close to “perfect” as you are going to get in any sort of election system. (i.e. immune to exaggeration, doesn’t incentivize estimating how others will vote, game theoretically stable, each voter has equal power)

I suspect that margin plays a role as well as number of voters. That is, if P[A>B] ≈ 0.5, P[B>C] ≈ 0.5, and P[A>C]≈0.5, then cycles could still occur in a real world context.

Things that are based on margin also become less likely as the number of voters increases, right? I’m saying Condorcet cycles are very similar to the concept of actual ties (such as in plurality): the chance of a tie approaches zero as the number of voters approaches infinity. With a dozen voters, having “0.5” in there is reasonable and possible. With a million, the chance of it being 0.5 as opposed to 0.500162 or the like becomes increasingly unlikely.

If that’s all it is, the cycle is not a reflection of a true, real world cyclic situation, but just an indication of a bit of random slop in a near-tie.
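The tie analogy can be made exact for a two-candidate coin-flip electorate; `tie_probability` below is just an illustrative helper computing the exact binomial probability of a dead heat, which shrinks roughly like 1/sqrt(n).

```python
from math import comb

def tie_probability(n):
    """Exact probability of a dead heat between two candidates when
    each of n voters flips a fair coin (n must be even)."""
    return comb(n, n // 2) / 2 ** n

# The chance of an exact tie falls steadily as the electorate grows:
for n in (12, 100, 10_000):
    print(n, tie_probability(n))
```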

I see there are some examples where a Condorcet cycle “actually exists in reality” (albeit a fictional reality), such as this one in the book Gaming the Vote, where the country of Squareovia has to choose a capital, and the relationship between voters and potential capitals is very carefully contrived geometrically.

No matter how much you increase the number of voters, the cycle will essentially remain unchanged, assuming the population remains with a similar geographical distribution.

I believe such real world scenarios are extremely rare, although I guess I wouldn’t be asking this question if I was 100% sure.

Someone said that there is no perfect voting system. I don’t see why I should agree, in the single-winner context. How do the systems that give the same answer as Approval fall short of perfection?

Someone arguing for serving a “median voter” urged readers to return to the example of choosing the temperature to keep an office at. I am not favorably impressed with the usefulness of this example toward studying about systems to choose a candidate to fill a political office. The temperature is one-dimensional and continuous, and political candidates are neither.

I’m sorry you don’t see the usefulness, but in general I tend to return to simpler examples when people are getting tripped up on concepts that become unclear when you introduce a lot more things prior to understanding the simple things. It is like in economics, you need to make sure you understand perfect competition before you attempt to understand “real world” macro and micro economic situations where it is distinctly imperfect. Or how in physics and mechanics, you study basic Newtonian physics before you introduce quantum effects and add all kinds of complications like air resistance and other messy real world details. In game theory (which is very relevant to voting) you make sure you understand the Prisoner’s Dilemma first. You need a foundation.

My experience, and this isn’t just with voting theory, is that when someone says “this simple example is not relevant because things are more complicated,” it’s a red flag that they haven’t studied much theory and take a very undisciplined approach to the subject matter.

Someone here pointed out the Reddit conversation of over a year ago where I ended up making a fairly elaborate demo to show why, in the case of one dimensional numerical voting, exaggeration didn’t help with median while it did with average. The reason I went back to that, and spent the time, was because there were a bunch of people who insisted it wasn’t true. To me, how in the world can you claim to have an opinion on all this stuff if you don’t understand such a basic concept? It’s basically the simplest, purest case of the whole issue of strategy and gamability.

From this article about a suspected Condorcet cycle in the 2009 Romanian presidential election:

Juho Laatu (after reading an earlier draft of this paper) proposed classifying Condorcet cycles in large elections into three types:
I. “Weak” cycle (aka random cycle or noise-generated cycle):

  • the looped candidates are almost tied
  • can be a result of some almost random variation in the votes
  • one could say that this kind of a loop is one special version of a tie
  • any of the looped candidates could be the winner (with no big violation of any of the majority opinions) against any
  • the expected winner may change from day to day in the polls

II. “Strong” cycle (stable cycle, rational cycle, cycle with a stable identifiable reason):

  • there is some specific, in principle describable, reason that has led to the formation of this loop (not random variation in the votes)
  • the cycle / opinions are strong enough that they are unaffected by daily/weekly random opinion fluctuations

III. “Strategic” cycle:

  • artificially generated by strategies employed by some voters to “game the election”
  • not based on sincere opinions

The chance of a weak cycle would go down to zero as the number of voters becomes very large, since you need the pairwise margins to be statistically insignificant. However, strong cycles will be more likely to occur when the margins are small, because the paradoxical phenomena required for a Condorcet cycle need not be as extreme. (Notice that in the Romanian example, which Warren Smith postulates is a strong cycle, the margins are still very tight even though they are mostly statistically significant.) Or, for a mathematical proof:

Let At represent the number of first-choice votes that A receives, and let BmA be the margin by which B’s first-choice voters prefer A to C (note: BmA=-BmC). If there is a Condorcet cycle among A, B, and C where A>B>C>A, then we must have
Ct<Bt+AmB; Bt<At+CmA; At<Ct+BmC
This is equivalent to
Ct < Bt+AmB < At+CmA+AmB < Ct+BmC+CmA+AmB
So we have BmC+CmA+AmB>0
For an A<B<C<A cycle, the signs are flipped.
Ct > Bt+AmB > At+CmA+AmB > Ct+BmC+CmA+AmB
This is equivalent to
Ct+BmA+CmB+AmC > Bt+BmA+CmB > At+BmA > Ct
Note: BmA+CmB+AmC = -(BmC+CmA+AmB)

So if there is a Condorcet cycle, then the sign of BmC+CmA+AmB determines the direction. Furthermore, the magnitude |BmC+CmA+AmB| affects how easy it is for a Condorcet cycle to occur, since it is the width of the range of values that Bt+AmB and At+CmA+AmB (or Bt+BmA+CmB and At+BmA) must fall under. If it is very small, it will be very hard for a Condorcet cycle to occur.

Intuitively, the three terms BmC, CmA, and AmB seem like they should usually be anticorrelated: we would expect that if more B voters support C second, then more C voters will support B, and fewer A voters will support B, since it suggests greater similarity between B and C, and greater difference between B and A. We would especially expect two terms increasing to cause the other term to decrease. If more B voters support C second and more C voters support A second, that especially would make us suspect that more A voters will support C. All of this means that:

  1. It will be hard for all 3 terms to have the same sign. (This is important: when the first-choice counts are equal, there is a cycle iff all 3 terms have the same sign.)
  2. It will be hard for the total to get far from zero.

So what does this have to do with pairwise margins of victory?
If Ct < Bt+AmB < At+CmA+AmB < Ct+BmC+CmA+AmB
Then 0 < Bt+AmB-Ct < At+CmA+AmB-Ct < BmC+CmA+AmB
0< Bt+AmB-Ct < BmC+CmA+AmB
Bt+AmB-Ct is B’s pairwise margin of victory over C.
So BmC+CmA+AmB bounds the pairwise margins of victory (it not only bounds B’s margin of victory over C, but also C’s over A and A’s over B, since the decision to put C at the start of the inequality was arbitrary).

Thus if there is a Condorcet triple, all 3 pairwise matchups will have margins of victory smaller than |BmC+CmA+AmB|, which is usually small. (In the infamous Burlington 2009 election, it was 89 votes, about 1% of the total.)

It’s probably possible to extend this concept to longer cycles, but it would be more complicated.

One last thing. Yet another way of expressing the conditions necessary for a Condorcet cycle to occur is
0 < AmB+Bt-Ct
0 < CmA+At-Bt
0 < BmC+Ct-At
(This is for an A>B>C>A cycle, for the opposite, flip the signs.)
The right-hand sides are just margins of victory. However, when BmC+CmA+AmB >= 3, there will always be some assignment of first-preference vote counts that meets all three conditions.

This is not a wholehearted endorsement of the min-max strategy. You are predicting it will work on the grounds that not everyone will obey it. I contend that it would be politic to recommend a strategy that would work if everyone obeyed it. Such a recommendation is easier to defend against counterarguments. Why not go that route? Political opponents can be very good at detecting insincere speech, and at the first faint whiff of it, they become your mortal enemy for the next thousand years.

This seems to argue that compared to STAR, straight Range falls short when it comes to revealing additional information about voter sentiment after having completed the first and more important task with which we would entrust a voting system, that is, deciding which candidate to put in office.

Before even discussing the efficacy of the systems at revealing or hinting at additional information, you and I had better satisfy ourselves that the systems we are comparing each satisfy Job Number One equally well. And a corollary to that is that they will select the same winner every time. Otherwise, one of them would have to be falling short in according the voters equal power. One of them would have to be cheating at least one voter out of her rightful measure of political power.

STAR involves two rounds of tallying, in which the first round eliminates some of the candidates from the second round. The second round permits just two candidates and moreover, it operates by projecting the ballots down to ranking and only paying attention to that, and thus ignoring the relative ratings the ballots give the two finalist candidates.
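For concreteness, here is a minimal sketch of the two rounds as described (assumed rules: 0-5 scores, the two highest totals advance, and the runoff counts only the direction of each ballot's preference, not its size; the ballots are hypothetical):

```python
# Sketch of STAR tallying: a scoring round picks two finalists,
# then a runoff projects each ballot down to a ranking of the two.

def star_winner(ballots):
    """ballots: list of dicts mapping candidate -> score (0-5)."""
    totals = {}
    for b in ballots:
        for cand, score in b.items():
            totals[cand] = totals.get(cand, 0) + score
    # Scoring round: the two highest totals advance.
    f1, f2 = sorted(totals, key=totals.get, reverse=True)[:2]
    # Runoff: a 5-vs-4 ballot counts exactly like a 5-vs-0 ballot.
    pref1 = sum(1 for b in ballots if b.get(f1, 0) > b.get(f2, 0))
    pref2 = sum(1 for b in ballots if b.get(f2, 0) > b.get(f1, 0))
    return f1 if pref1 >= pref2 else f2  # ties to the higher scorer

ballots = [
    {'A': 5, 'B': 4, 'C': 0},
    {'A': 0, 'B': 3, 'C': 5},
    {'A': 0, 'B': 5, 'C': 4},
    {'A': 2, 'B': 5, 'C': 0},
]
print(star_winner(ballots))
```

Note how the third ballot's 5-vs-4 preference for B over C carries exactly as much runoff weight as the first ballot's 4-vs-0, which is the "projecting down to ranking" behavior at issue.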

How can we be sure that STAR doesn’t give an opportunity for a faction to gain an unfair advantage by working to promote into the second round a candidate they think can’t win, so as to crowd out someone who could win under straight Range but whom they don’t like?

I’m not looking to make a politic argument, but rather, the truthful argument here. However, I think it’s really not that difficult to point out that, since we have people voting 3rd party even under FPTP, there will always be honest voters in any system, and therefore it’s unrealistic to assume total compliance with any strategy.

Just theoretically, would you be satisfied with STAR if it gave voters the choice to cast their ratings in the runoff (i.e. a 5/5 and 4/5 yield a 0.2 vote margin in favor of the 5) instead?

I don’t have a definitive answer, but note that turkey-raising could potentially throw the candidates a faction actually prefers out of the runoff. Further, it might even elect the turkey if several factions engage in the strategy.

The stuff you quote from Juho Laato is exactly what I was getting at, although he worded it very well and has obviously investigated it more deeply than I have (I think my own view on it is more of a “hunch,” hence my post).

I was suggesting that the “weak” cycle is probably by far the most common. Stable/rational cycles are the kind you can get from that image I posted (from “Gaming the Vote”), but seem unlikely for real world elections, except, as you note, when the election is very tight already.

The third, a strategic cycle, is of course interesting. In theory, a decent voting method can make them go away entirely. Right? Well, that’s also a bit of a hunch. :slight_smile:

Regardless, thanks for pointing me to all this. I will look in more depth at your math / proofs as I get some time to give them proper thought.
