Median with multiple breakdown points Voting

Ballot: use range [0, MAX]
Counts:

  • for each candidate, the ratings received are sorted from lowest to highest.
  • calculate: Q = (number of votes + 1) / (MAX + 1)
  • the breakdown points (positions in the sorted rating list) are Q, Q•2, …, Q•MAX. If a breakdown point (position) isn’t an integer, the ratings at the previous and next positions are averaged.
  • for each candidate, the values at the breakdown points are summed, and the highest sum wins.

In case of a tie, the process is repeated (only for the tied candidates), changing the value of Q:
Q = (number of votes + 1) / (MAX + 2)
If the tie persists, the process repeats using (MAX + 3) in Q, and so on.
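A minimal sketch of the count in Python, under my reading of the rules above (`breakdown_sum` and `divisor_bump` are names of my own; `divisor_bump` stands in for the MAX+1, MAX+2, … tie-breaking divisors):

```python
import math
from typing import List

def breakdown_sum(ratings: List[int], max_rating: int, divisor_bump: int = 1) -> float:
    """Sum of the ratings at the breakdown points Q, Q*2, ..., Q*MAX."""
    votes = sorted(ratings)                    # lowest to highest
    v = len(votes)
    q = (v + 1) / (max_rating + divisor_bump)  # divisor_bump=2, 3, ... for ties
    total = 0.0
    for k in range(1, max_rating + 1):         # MAX breakdown points
        pos = q * k                            # 1-based position in the list
        lo = max(math.floor(pos), 1)           # previous position, clamped
        hi = min(math.ceil(pos), v)            # next position, clamped
        total += (votes[lo - 1] + votes[hi - 1]) / 2  # integer pos: lo == hi
    return total
```

With the 15-ballot, range [0,3] examples below, Q = 16/4 = 4, so the breakdown positions are 4, 8 and 12.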

Why multiple breakdown points?
The classic median uses only the 50% point as its breakdown point, and this can lead to bad results. Example:
A: 0,0,0,5,5 --> Median 0
B: 0,0,1,1,1 --> Median 1
B beats A even though A is clearly better than B.

Assuming we have range [0,3], we observe the following case:
A: 0,0,0,0,0,0,0,0,0,0,0,3,3,3,3 --> Sum breakdown: 3
B: 0,0,0,1,1,1,1,1,1,1,1,1,1,1,1 --> Sum breakdown: 3
The sums of the ratings of A and B (Score Voting) would be equal, so a tie makes sense.

Now, maximize A’s ratings, and make B slightly better:
A: 0,0,0,0,0,0,0,0,3,3,3,3,3,3,3 --> Sum breakdown: 3
B: 0,0,0,1,1,1,1,1,1,1,1,2,2,2,2 --> Sum breakdown: 4
In this system B wins while with Score Voting there would be: A[21] B[16] and A would win.
Remarks:

  • This is one of the worst cases (rare).
  • B isn’t very far from A (unlike what can happen with the classical median).
  • The ratings (utility) for B are more evenly distributed among the voters than for A, and this, for some people, can make B seem better than A (or at least not much worse), even though B has a lower total utility.
  • The median concept’s high resistance to strategy may be sufficient to justify this case.

It’s not at all clear to me that A is better than B. B is clearly better from the majoritarian perspective, and who’s better from the utilitarian perspective is unknowable because you gave scores, which are not the same as utilities, much less interpersonally comparable utilities. Even if they’re honest scores (which the 1s are obviously likelier to be than the 5s), B’s could be (and evidently are, judging by the fact that at least one of those who prefer him only used 1/5 of the range in the contest between relevant candidates) artificially low due to irrelevant candidates. For example, with scores sorted by voter:

A: 0,0,0,5,5
B: 1,1,1,0,0
C: 5,0,0,0,0
D: 0,5,0,0,0
E: 0,0,5,0,0

Even if all votes are honest, that doesn’t mean the distance between A and B is greater from the perspective of voters 4 and 5 than it is from that of voters 1-3. It could just be that no spoiler is present to reduce A’s 5s to 1s the way that C-E reduce B’s 5s to 1s.

Yes, there is a trade-off that sum and median are the two extremes of. But that doesn’t mean there is a goldilocks zone; often the best is one of the extremes, for it is competing interests, not competing estimators, that compromises must be found for. And if there were a goldilocks zone, I would expect it to be a generalization of the trimean, i.e. give each quantile weight equal to its distance from the closest extreme. Using your second example:

A: 0,0,0, 0 ,0,0,0, 0 ,0,0,0, 3 ,3,3,3 --> Trimean: 3/4
B: 0,0,0, 1 ,1,1,1, 1 ,1,1,1, 1 ,1,1,1 --> Trimean: 1
B wins, as he should, lest we punish him for his supporters’ relative honesty.
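A sketch of that generalization, assuming the same interpolation rule as above (the function name is mine; with range [0,3] the interior quartiles get weights 1, 2, 1, i.e. Tukey's trimean):

```python
import math
from typing import List

def generalized_trimean(ratings: List[int], max_rating: int) -> float:
    """Weighted average of the MAX interior quantiles, each weighted by
    its distance from the closest extreme."""
    votes = sorted(ratings)
    v = len(votes)
    n = max_rating + 1                         # number of divisions
    total = 0.0
    weight_sum = 0.0
    for k in range(1, n):                      # interior quantile positions
        pos = (v + 1) * k / n
        lo = max(math.floor(pos), 1)
        hi = min(math.ceil(pos), v)
        value = (votes[lo - 1] + votes[hi - 1]) / 2
        w = min(k, n - k)                      # distance from closest extreme
        total += w * value
        weight_sum += w
    return total / weight_sum
```

On the two ballots above it returns 3/4 for A and 1 for B.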

Finally, why do you set the number of ratings summed per candidate equal to the range? A smaller range results in more significant rounding errors, necessitating more data points if anything.

Even if all votes are honest, that doesn’t mean the distance between A and B is greater from the perspective of voters 4 and 5 than it is from that of voters 1-3.

I know this; the problem is that true distance is impossible to know, because it is deeply subjective. The only thing you can do is take the votes for what they are, treating all ranges as equivalent, even if your criticisms make sense.
I also don’t like Score Voting for that reason.

Finally, why do you set the number of ratings summed per candidate equal to the range?

To reduce strategy. With the median, large groups of strategic voters are typically needed to change a result.
A single strategic voter often has no influence on the result (not even a minimal one), although his vote still contributes to determining it.

But isn’t the low probability of an individual voting strategically changing the median compensated for by the fact that, when it does, it does so by at least 1, whereas the greatest change an individual can make to the mean is only MAX/V? Anyway, you didn’t answer my question. Your idea is not a type of median, it is a sum of quantiles. My question is, why do you set the number of quantiles summed equal to the range? If it’s about limiting the probability of a change in an individual vote changing the sum, wouldn’t it be better to do the opposite (decrease the number of quantiles summed as the range increases)?

In votes with a range, what you say is valid (a “distance” of 3 points in one vote can mean something different in another, in terms of utility), but it is also true that there is no way to know the true distances, so one can only take the votes for what they are.
Treating the votes as if they represented actual utility, the classic median seems to me to give results that are too wrong, so instead of taking only 1 value at the center, I take a few more.
The ideal is to use a range as small as [0,3], in order to have few values (3) to consider, with greater resistance to strategy and greater simplicity of writing.

The best way to solve the utility problem, for me, is the cumulative form of voting, where the voters all have the same “power” (points) to distribute according to their preferences; and to prevent irrelevant candidates from altering the result, you can use sequential elimination with redistribution of the points. However, this method has other problems in single-winner contexts (it fails monotonicity and is subject to min-maxing like many other systems).

You equate power with points, as if range voters who give more total points have more power, which is not the case. And cumulative is not merely subject to min-maxing: it reduces to single-choice with min-maxing; range reduces to approval, which is far superior. Multi-round cumulative is also dependent on irrelevant alternatives and also reduces to single-choice (in which one’s choice is not necessarily one’s favorite), at least in the decisive round (i.e. the round in which a single vote is likeliest to determine the winner of the final round).

So range is better than cumulative regardless of the number of rounds, but if you really want an incentive-compatible multi-round system, the only way to do it is to simulate strategy: for the first round, convert scores to zero-information strategic approval votes; for subsequent rounds, convert scores to strategic approval votes with the previous round’s winner as expected winner and the previous round’s runner-up as expected runner-up. Whether multi-round is worth the complexity is debatable, but if you’re going to go multi-round you might as well do it right.


Voters use range votes but then the 100 points are distributed proportionally.
A vote with a range like this: A[4] B[2] C[2] D[0]
It’s converted to: A[50] B[25] C[25] D[0]
therefore they all have the same power (points).
If A is irrelevant and is eliminated at the beginning, the vote becomes:
B[50] C[50] D[0]
in this way, it’s resistant to irrelevant alternatives (if they are really irrelevant).
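A sketch of that conversion and redistribution, assuming (as the example suggests) that points are simply re-spread in proportion to the surviving ratings; the function names are mine:

```python
from typing import Dict

def distribute(vote: Dict[str, float], budget: float = 100) -> Dict[str, float]:
    """Spread `budget` points over a range vote in proportion to the ratings.
    An all-zero vote assigns no points (an edge case the post doesn't cover)."""
    total = sum(vote.values())
    if total == 0:
        return {c: 0.0 for c in vote}
    return {c: budget * r / total for c, r in vote.items()}

def eliminate(vote: Dict[str, float], loser: str, budget: float = 100) -> Dict[str, float]:
    """Drop an eliminated candidate and redistribute the full budget
    over the surviving ratings."""
    return distribute({c: r for c, r in vote.items() if c != loser}, budget)
```

distribute({'A': 4, 'B': 2, 'C': 2, 'D': 0}) gives A[50] B[25] C[25] D[0], and eliminating A then gives B[50] C[50] D[0], matching the example.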

Yes, it’s resistant, as any sequential system is in comparison to its single-round analog, but still dependent and still inferior to its range analog. For example, it’s obviously better to max the candidate likeliest to be tied for last with D in Round 2. And if Round 2 instead determines who defeats D in Round 3, the vote is wasted, though with a bigger range it might have been A[9] B[6] C[3] D[0] and in the absence of A it might have been B[4] C[2] D[0]. Even then, the power of the vote would be reduced by the failure to min C.

Sequential range is a bit better: after A’s elimination, it becomes: B[4] C[4] D[0], so the voter maximizes his chances of defeating D in Round 2 (unless of course he’s given a larger range and uses it to differentiate B and C, in which case his power to defeat D in Round 2 is diminished, albeit less so than in cumulative). But he still has an incentive to falsely min his least favorite of B and C if he expects Round 2 to determine who defeats D in Round 3.

My system avoids all these problems. If the voter is truly indifferent between B and C, it’s effectively the same as sequential range. But if the true ratings are A[9] B[6] C[3] D[0], C is either approved or disapproved in Round 2, depending on whether B beats D in Round 1: if he does, we go offense, disapproving C because we prefer B; if not, we go defense, approving C because we prefer him to D. Being truthful about our preference for B over C and C over D doesn’t diminish our power in the slightest.

The system I described is this (Distributed Voting) with this extra rule, for single winner cases:

  • the vote with range, at the beginning, is reversed.
  • the candidate with the highest sum of points loses.

A vote like this (range[0,4]):
A[4] B[2] C[2] D[0]
is inverted ( |rating - 4| ) and becomes:
A[0] B[2] C[2] D[4]
which is then used to proportionally distribute the points as follows:
A[0] B[25] C[25] D[50]
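The inversion step, sketched under the same assumptions (name mine):

```python
from typing import Dict

def extended_dv_vote(vote: Dict[str, float], max_rating: int,
                     budget: float = 100) -> Dict[str, float]:
    """Invert the ratings (|rating - MAX|), then distribute the points
    proportionally; the candidate with the highest total then loses."""
    inverted = {c: abs(r - max_rating) for c, r in vote.items()}
    total = sum(inverted.values())
    if total == 0:
        return {c: 0.0 for c in inverted}      # all candidates rated MAX
    return {c: budget * r / total for c, r in inverted.items()}
```

For the range [0,4] vote above, this returns A[0] B[25] C[25] D[50].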

This change makes this system one of the most resistant to generic min-maxing strategies, those in which a vote like this:
A[0] B[1] C[2] D[3] E[4] F[5]
would become strategically like this:
A[0] B[0] C[0] D[0] E[5] F[5]
i.e., this strategy:
[0,1,2,3,4,5] --> [0,0,0,0,5,5]
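Expressed as a hypothetical helper (the threshold parameter is my own framing):

```python
from typing import Dict

def generic_min_max(vote: Dict[str, int], max_rating: int,
                    threshold: int) -> Dict[str, int]:
    """Min every candidate rated below `threshold`, max all the others."""
    return {c: max_rating if r >= threshold else 0 for c, r in vote.items()}
```

With threshold 4 this maps [0,1,2,3,4,5] to [0,0,0,0,5,5].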

What I noticed in some simulations, however, is that even the system that uses the Interpolated Median has this extreme resistance.
In the Interpolated Median, a single tactical vote can change the outcome, even if only slightly, and I wanted to avoid this with the system I have proposed here (in order to further reduce min-maxing strategies).
Now, you can say what you want, but given these ratings (range):
A[5] B[1] C[0] D[0] E[0]
A[5] B[1] C[0] D[0] E[0]
A[0] B[1] C[5] D[0] E[0]
A[0] B[0] C[0] D[5] E[0]
A[0] B[0] C[0] D[0] E[5]
I will never accept that the winner is B instead of A (with the classic median, B wins).
The Interpolated Median would also elect B.
I also think that an Interpolated Median with sequential elimination and vote stretching could solve this problem adequately (that is, consistently with what you said about irrelevant candidates).

I made the min-maxing simulations with this site; you can also copy-paste the votes of the example above to see the results (select “single-winner” for a simplified view).

My system avoids all these problems.

Is there a precise definition of your system somewhere? I wouldn’t want to risk replying to something I misunderstood.

Good request, Essenzia!
I was wondering the same thing.

I think I understand that your system is the one called “Cardinal Baldwin”, which works like this:
Given a vote like this (range [0,6]):
A[6] B[3] C[2] D[1] E[0]
the candidate with the lowest sum loses.
If you delete A, the vote is normalized like this:
B[6] C[4] D[2] E[0]
if you then delete E, the vote becomes:
B[6] C[3] D[0]

There is also the “Stretch Baldwin” variant, which only normalizes when all the highest-rated candidates in a vote have been eliminated.
If E is eliminated from this vote: B[6] C[4] D[2] E[0], the vote does not change: B[6] C[4] D[2].
This variant tends to be more resistant to min-max strategies (at least in the simulations I tested), but both still fail too often.
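A sketch of the two normalization rules as I read them (the names and the `stretch` flag are mine):

```python
from typing import Dict

def normalize(vote: Dict[str, float], max_rating: int) -> Dict[str, float]:
    """Rescale so the lowest remaining rating is 0 and the highest is MAX."""
    lo, hi = min(vote.values()), max(vote.values())
    if hi == lo:
        return dict(vote)                      # all equal: nothing to stretch
    return {c: max_rating * (r - lo) / (hi - lo) for c, r in vote.items()}

def after_elimination(vote: Dict[str, float], loser: str, max_rating: int,
                      stretch: bool = False) -> Dict[str, float]:
    """Cardinal Baldwin renormalizes after every elimination; Stretch
    Baldwin (stretch=True) only once every top-rated candidate is gone."""
    top = max(vote.values())
    remaining = {c: r for c, r in vote.items() if c != loser}
    if stretch and max(remaining.values()) == top:
        return remaining                       # a top-rated candidate survives
    return normalize(remaining, max_rating)
```

Starting from A[6] B[3] C[2] D[1] E[0], eliminating A yields B[6] C[4] D[2] E[0]; eliminating E then yields B[6] C[3] D[0] with stretch=False, and leaves B[6] C[4] D[2] unchanged with stretch=True, matching the examples above.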

That’s what min-maxing looks like in score, where the strategic vote always weakly preserves the order of the honest vote, but a reasonable min-maxing strategy in Extended DV might be A[5] B[5] C[0] D[5] E[5] F[5], if the voter’s best chance of being decisive is a first-round tie such that, if C wins it and survives, he goes on to win the whole thing, but, if he loses it and is eliminated, D, E, or F wins the whole thing. This is of course quite plausible if C is a centrist, which is consistent with the scores, for centrists’ greatest tests are typically the early rounds.

Anyway, I’m skeptical of strategy-resistance metrics, as they tend to focus exclusively on deterring strategy, and thus perversely treat backfires as a good thing. For me, the test of a voting system is how it performs when voters use the best strategy for their situation, not some pre-determined generic strategy. In other words, I favor an endogenous strategy model, for the definition of strategic voting is inextricable from the particular voting system.

This is a low bar. According to your site, only median fails to elect A here, and all systems are strategy-proof. So let’s take each voter’s favorite for granted and randomize the others:

A[5] B[2] C[2] D[5] E[0]
A[5] B[5] C[1] D[0] E[1]
A[2] B[5] C[5] D[2] E[0]
A[0] B[3] C[5] D[5] E[2]
A[2] B[4] C[0] D[5] E[5]

Here, the systems are split between B and D (the weak Condorcet winner), with AV, SV, DV and Stretch Baldwin electing B and FPTP, Median, Cardinal Baldwin and Extended DV electing D (so would my system). But notice that Cardinal Baldwin is basically strategy-proof here, whereas in Extended DV Voter 2 can flip it to B by increasing C’s score to 3 or 4, and Voter 3 can flip it to B by increasing A’s score to 4 (and those are just the opportunities involving a single change).

No, in my system, the finishing order of one round determines the approval threshold of the next. In the simplest version, only the winner is taken into account: the voter approves (1) all candidates he prefers to last round’s winner and disapproves (-1) all those he prefers last round’s winner to. In the next simplest version, the runner-up is taken into account, so that the voter approves candidates he considers equal to last round’s winner if he prefers them to last round’s runner-up and disapproves them if he prefers last round’s runner-up.
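A sketch of the runner-up version, under my reading (names mine; how to treat a candidate tied with both the winner and the runner-up is left open above, so the tie-breaking below is a judgment call):

```python
from typing import Dict

def approval_ballot(scores: Dict[str, float], prev_winner: str,
                    prev_runner_up: str) -> Dict[str, int]:
    """Convert sincere scores to an approval ballot: approve (1) candidates
    preferred to the previous round's winner, disapprove (-1) those the
    winner is preferred to, and break exact ties with the winner by
    comparison with the previous round's runner-up."""
    w, r = scores[prev_winner], scores[prev_runner_up]
    ballot = {}
    for c, s in scores.items():
        if s != w:
            ballot[c] = 1 if s > w else -1
        else:
            # an exact tie with the runner-up too is disapproved (my call)
            ballot[c] = 1 if s > r else -1
    return ballot
```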

Did you test enough simulations? Have you tried the opposite? Normalizing only when all the lowest rated candidates are eliminated? Given that normalization is precisely what makes Cardinal Baldwin more strategy-resistant than score, isn’t the proposition that a variant that only normalizes half the time is even more strategy-resistant an extraordinary claim that requires extraordinary evidence?

For me, the test of a voting system is how it performs when voters use the best strategy for their situation, not some pre-determined generic strategy.

For me, that is one of the possible tests, but I think generic strategies also need to be tested.
The generic min-maxing strategy shouldn’t be underestimated, as it can lead voters to cast what is effectively a multiple-choice vote by default, even if a range is available (and even if you have no information on the likely results or on the votes of others).

Extended DV Voter 2 can flip it to B by increasing C’s score to 3 or 4, and Voter 3 can flip it to B by increasing A’s score to 4

Ok, you found a case where Extended DV fails, but:

  • it’s a strategy that requires a flip, which makes sense only if you know in advance how the other voters have assigned ratings in their individual votes (quite unrealistic in an electoral context). It’s one thing to know who the frontrunners are, it’s another to know how the voters assign ratings.
  • if in that context B wins instead of D, it’s not much damage. It would be really problematic if one of A, C, E won.
  • in your example, all the voters who support D>B have, strangely, already assigned the maximum rating to D in their “honest” votes (and likewise for those who support B>D). Since maximization is the real problem with min-maxing strategies, it’s fairly obvious that in this particular case min-maxing might not work.
  • being a sequential elimination method, it’s normal for Extended DV to fail in some (rare) cases due to a flip, but all the talk so far has been about min-max tactics, not flips.

No, in my system, the finishing order of one round determines the approval threshold of the next.

A very interesting and distinctive idea; let me know when you have made it “well defined” (e.g. with a page on Electowiki).

Did you test enough simulations? Have you tried the opposite?

This specific comparison (CB vs SB), not so much, because in any case even CB seems too subject to generic min-maxing, so I don’t care much.

Normalizing only when all the lowest rated candidates are eliminated?

  1. If irrelevant candidates deserve a low rating (0), this typically doesn’t affect relevant candidates (even without sequential elimination).
  2. If irrelevant candidates deserve a high rating (5) from some voters, then this will generally lower the scores given to relevant candidates.

For this reason it certainly makes sense to normalize when eliminating high-rated candidates (case 2), while in case 1 normalizing, I think, unnecessarily alters the true interests of the voters (or rather, what little is known of their true interests).

In fact, CB is much more majoritarian than SB, i.e. in cases like this (votes without strategies):
55%: A[5] B[4] C[0]
45%: A[1] B[5] C[0]
CB elects A, SB elects B.
Specifically, in the final head-to-head it makes more sense to me, where possible, to sum the points the candidates hold rather than decide it as a pure head-to-head.

A multiple-choice vote is better than a single-choice vote, which is what limited voting systems like Extended DV can lead to in the likely event that voters have information on the likely results or votes of others. When they have no such information, multiple-choice makes no sense in any system. How do you know whether to min or max a candidate with intermediate utility when you have no idea of the utility of the expected winner? If you have no information on the expected winner, any threshold is as good as any other, so you might as well bullet vote and save some energy, if it makes sense to vote at all.

I randomly generated it on the first try, but I’d be happy to look at a counter-example.

No, all you have to know is that B and D are the frontrunners to know that it’s foolish to waste your voting power giving a low rating to A, C or E.

They will if Voter 2 plays it safe and maxes C, or if Voter 3 errs a bit and maxes A.

No, Voter 2 gets a better result by bullet voting against D, and Voter 3 gets an equal result but comes very close to his best result.

CB is obviously more strategy-resistant here. Indeed, isn’t this more likely to occur as a result of one-sided strategy? Clearly, the 55% have C confused with a relevant candidate (or maybe they made the mistake of listening when they were told SB was strategy-resistant). Even if both sides are honest, and the 45%'s strongest preference is really B>A, the probability of it being due to their hating A more than the 55% hate B, as opposed to hating C less than the 55% hate C, is undefined. Gun to my head, assuming honest voting, which is the rightful winner? B, I agree, but I reject the assumption of honesty for systems that incentivize strategy.

A multiple-choice vote is better than a single-choice vote, which is what limited voting systems like Extended DV can lead to in the likely event that voters have information on the likely results or votes of others.

Multiple-choice, in the case where the actual frontrunners are known, is equivalent to a single-choice vote in which only the frontrunners face off.

Furthermore, bullet voting in Extended DV works in reverse, because the 100 points are given to disadvantage, not to advantage.
Bullet voting would take this form:
Original: A[5] B[5] C[5] D[5] E[5] F[0]
Inverted: A[0] B[0] C[0] D[0] E[0] F[5]
Distributed: A[0] B[0] C[0] D[0] E[0] F[-100]
There is no other type of bullet voting in Extended DV, and this is very different from a single choice.

Even in “simple” DV the points are not wasted anyway, because the redistribution ensures that at the end (the last head-to-head) they all group together, giving maximum weight to the vote, provided the worst frontrunners had rating 0.

How do you know whether to min or max a candidate with intermediate utility when you have no idea of the utility of the expected winner?

Honest vote:
A [5] B [4] C [3] D [2] E [1] F [0]
If a voter is willing to give up the difference between A and B to favor A and B as much as possible over the others, then he will vote like this:
A [5] B [5] C [0] D [0] E [0] F [0]
This is a trade-off which, in many systems, works more often than it fails.
In range SLE systems, this general tactic works well:
[0,1,2,3,4,5] --> [0,0,0,1,1,5]

No, all you have to know is that B and D are the frontrunners to know that it’s foolish to waste your voting power giving a low rating to A, C or E.

Actually, you’re wasting it if you don’t give them points… remember that the vote is inverted in Extended DV.

They will if Voter 2 plays it safe and maxes C, or if Voter 3 errs a bit and maxes A.

Using such precise strategies isn’t realistic. The sensible thing to do is to choose some strategy and have it used by random voters, not just one, with all the others honest.

In the last example, the problem centers on the final head-to-head, which in my opinion is best decided without minimizing the worst candidate in the votes.
C in the example could stand for any number of other candidates (more or less voted), all worse than A and B, and therefore irrelevant.

No, there can be pleasant surprise in multiple-choice, whereas in single-choice the frontrunner designation is a self-fulfilling prophecy. The problem is, you’re oscillating between zero information and perfect information, when the problems with limited voting manifest in an imperfect information reality.

No, it really isn’t. The problem with single-choice isn’t that you only make one mark or that it’s for a preferred candidate, it’s that your threshold is constrained to have only one candidate on one side of it and that candidate (if you know what’s good for you) is not actually your most or least preferred. So reverse plurality has the same basic problem as plurality, reverse cumulative has the same basic problem as cumulative, and Extended DV has the same basic problem as DV. Extended DV is after all effectively the same as the simpler Reverse DV, in which you instruct voters to rate their dislike for candidates and skip the inversion step.

You’re assuming it will still be a contest by the final round. In reality, any round can be the round in which an individual vote is most likely to be decisive, and the strategic DV voter votes so that his vote will have become a bullet vote by that time.

With no information on the candidates’ probabilities of victory? Based on what? It’s impossible to model elections without creating probabilities of victory. For example, when you simply randomly assign each voter a utility for each candidate, that creates in the long run an equal probability of victory for each candidate, which, for example, biases an analysis of “zero-information” approval voting strategies in favor of approving above average candidates (i.e. optimal strategy in case of equal probabilities of victory).

I’m aware of that. If you give them an honest, low rating, it becomes a high reverse rating, which dilutes the power of your vote against your least favorite frontrunner.

No, CB is strategy-proof here, so my examples are sufficient. And I don’t know what you mean by precise. Voter 2’s strategy couldn’t be simpler, and it’s the standard strategy for when your least favorite is the frontrunner.

there can be pleasant surprise in multiple-choice

Then there are not actually 2 frontrunners but 3 or 4, and in that case, in DV too, it makes more sense to divide the points as a strategy.

reverse cumulative has the same basic problem as cumulative, and Extended DV has the same basic problem as DV

You would be right if the Equality Criterion held (it doesn’t in this case).

any round can be the round in which an individual vote is most likely to be decisive, and the strategic DV voter votes so that his vote will have become a bullet vote by that time.

Ok, but actual bullet voting in Extended DV serves to make 1 of the many candidates lose, not to make 1 win; this is why it doesn’t generate the single-choice problem.

With no information on the candidates’ probabilities of victory? Based on what? It’s impossible to model elections without creating probabilities of victory.

Honest Voting (Score Voting):
A[5] B[4] C[3] D[2] E[1] F[0]
Strategic vote:
A[5] B[5] C[5] D[0] E[0] F[0]
The length of the arrows depends on the difference created between 2 candidates with respect to the honest vote; the longer the arrow, the more likely it is to change the winner.
Worst cases:
A -> B , B -> C , A --> C , D -> E , E -> F , D --> F
Best cases:
D ----> C , D ---> B , D --> A , E ---> C , E --> B , E -> A , F --> C , F -> B
The indicated strategy, in the absence of predictions of the results, offers a high payoff (statistically).

No, CB is strategy-proof here, so my examples are sufficient.

The sensible strategy, the one that doesn’t involve a flip, is “Voter 2 plays it safe and maxes C”, and CB fails against it too.
And I repeat: the sensible thing to do is to choose some strategy and have it used by random voters (at most all of them, if you can’t simulate randomness), not just one strategic voter with all the others honest.

More sensible example; honest interests:
20: A[5] B[2] C[2] D[5] E[0]
21: A[5] B[5] C[1] D[0] E[1]
20: A[2] B[5] C[5] D[2] E[0]
20: A[0] B[3] C[5] D[5] E[2]
20: A[2] B[4] C[0] D[5] E[5]
the frontrunners are B and D. CB and E-DV both say that D wins.
All voters (not just those of one type) apply the classic strategy of the frontrunners:

  • set the worst frontrunner, and all the candidates worse than or equal to him, to 0.
  • set the best frontrunner, and all those better than or equal to him, to 5.

20: A[5] B[0] C[0] D[5] E[0]
21: A[5] B[5] C[1] D[0] E[1]
20: A[0] B[5] C[5] D[0] E[0]
20: A[0] B[0] C[5] D[5] E[0]
20: A[0] B[0] C[0] D[5] E[5]
E-DV elects D (with about double C’s points). CB elects C.

What if you only set the worst frontrunner to 0 (without minimizing the other candidates)?
E-DV elects D. CB elects A.

CB doesn’t look very strategy-proof here.

There you go again, oscillating between zero information and perfect information. Just because minor candidates have a finite chance, that doesn’t mean voting as if they had an equal chance dominates voting as if they had an infinitesimal chance.

There are two single-choice problems, the cis single-choice problem (being constrained to choose one candidate to help) and the trans single-choice problem (being constrained to choose one candidate to harm). They’re obviously both rubbish, but reasonable people can disagree on which is worse. On the one hand, cis favors extremists; on the other hand, helpful bullet votes tend to be closer to real-life honest votes than harmful bullet votes.

You said that, only this time you corrected your error and gave C a 5. But as I pointed out, that’s not optimal zero-information strategy (which I maintain is a contradiction in terms), it’s optimal strategy for equal probabilities of victory (a type of imperfect information). When you say “statistically”, you’re referring to imperfect information trials, in which your method of generating utilities, whether you intend it to or not, generates certain long-term probabilities. The fact that your statistics favor an equal-probabilities-of-victory strategy suggests that you generate utilities randomly, whereas in reality a candidate favored by a given voter is more likely to be favored by other voters.

That’s not the classic frontrunners strategy, and it’s certainly not optimal for either system. And it biases the comparison, as it’s much further from optimal limited voting strategy than from optimal unlimited voting strategy. The optimal frontrunners strategy for plurality/cumulative is to bullet vote for the best frontrunner (or the worst, in reverse plurality/cumulative). The optimal frontrunners strategy for approval/score is to max candidates you prefer to the expected winner, min those you prefer the expected winner to and, for those you consider equal to the expected winner (including the expected winner himself), max them if you prefer them to the expected runner-up and min them if you prefer the expected runner-up to them.
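The approval/score rule in that last sentence, sketched (my code):

```python
from typing import Dict

def frontrunner_strategy(scores: Dict[str, float], expected_winner: str,
                         expected_runner_up: str, max_rating: int) -> Dict[str, int]:
    """Max candidates preferred to the expected winner, min those the
    expected winner is preferred to; resolve exact ties with the expected
    winner by comparison with the expected runner-up."""
    w, r = scores[expected_winner], scores[expected_runner_up]
    return {c: (max_rating if (s > w or (s == w and s > r)) else 0)
            for c, s in scores.items()}
```

Applied to the five honest ballots from earlier, with D as expected winner and B as expected runner-up, this reproduces the CB ballots below.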

So, if all voters use the appropriate frontrunners strategy, CB becomes:

A[5] B[0] C[0] D[5] E[0]
A[5] B[5] C[5] D[0] E[5]
A[0] B[5] C[5] D[0] E[0]
A[0] B[0] C[5] D[5] E[0]
A[0] B[0] C[0] D[5] E[5]
Result: C-D tie, but still no incentive to vote strategically.

And E-DV becomes:
A[5] B[0] C[5] D[5] E[5]
A[5] B[5] C[5] D[0] E[5]
A[5] B[5] C[5] D[0] E[0]
A[5] B[0] C[5] D[5] E[5]
A[5] B[0] C[5] D[5] E[5]
Result: A-C tie, which is of course worse than the all-strategic CB outcome.

The whole discussion boils down to:
in EDV or CB, how well does the min-max strategy tied to the frontrunners work?

The optimal frontrunners strategy for approval / score is you max candidates you prefer to the expected winner, min those you prefer the expected winner to and, for those you consider equal to the expected winner (including the expected winner himself), max them if you prefer them to the expected runner-up and min them if you prefer the expected runner-up to them.

Given a vote like this (range [0,4] for convenience):
A[4] B[3] C[2] D[1] E[0]
B and D are frontrunners and both of our strategies say the vote will take this form:
A[4] B[4] C[?] D[0] E[0]
The only difference is what happens to C (i.e., to the candidates between the 2 frontrunners).

You say that C should get the maximum rating (4), but this is a non-optimal, arbitrary choice, because:

  • favors the victory of C over D (positive),
  • but it also favors the victory of C over B (negative).

A similar argument applies if C is minimized; it’s not necessarily right.
For this reason I instead assume that each voter does what they like with C. In practice I simply left the ratings unchanged (so C[2]).

More precisely, I think that the management of C depends on the voting system. In CB the best (strategic) choice, I think, is to set C to 1 because:

  • If B,C remain, the vote becomes: B[4] C[0]
  • If C,D remain, the vote becomes: C[4] D[0]
  • If B,C,D remain, the vote becomes: B[4] C[1] D[0], favoring B as much as possible, even with respect to C.

While in DV the best tactic is to set C to 1 (or at most to 0), for reasons similar to the above.
Not surprisingly, my strategy returns much better results for the voters in this context (at least for EDV), and equal results for CB, while with your strategy in EDV there would be more dissatisfied voters than happy ones.
That is, the fact that your strategy generates more dissatisfied voters can be read as a motivation that pushes voters not to use it (therefore, EDV is more resistant to it than CB is).

P.S.
Indeed, applying the same argument to CB, it too is resistant to both my strategy and yours in this case.
It would be interesting to run a test in which random voters (not all) use these strategies and see whether the strategic voters among them are more often satisfied or dissatisfied with the strategic result obtained… that is the next kind of simulation I’m going to add to the site (if I have time).

Given near-perfect information, yes. More generally, optimal score strategy is to max all candidates better than the weighted (by probability of victory) average candidate and min the rest. For CB, it’s more complicated: each candidate repulses nearby candidates (i.e. sends them further away on the range) with a strength determined by his probability of victory; however, in the near-perfect information case, the repulsive force of the frontrunner dominates, yielding a strategy identical to that of score.
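The general rule in the first sentence, sketched (my code; the utilities and victory probabilities are the voter’s own estimates, and maxing exact ties is a judgment call):

```python
from typing import Dict

def optimal_score_ballot(utilities: Dict[str, float],
                         win_prob: Dict[str, float],
                         max_rating: int) -> Dict[str, int]:
    """Max every candidate better than the probability-of-victory-weighted
    average candidate; min the rest."""
    weighted_avg = (sum(win_prob[c] * utilities[c] for c in utilities)
                    / sum(win_prob.values()))
    return {c: max_rating if u >= weighted_avg else 0
            for c, u in utilities.items()}
```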

But D is the expected winner, and B is the expected runner-up, so the positive is likelier to occur than the negative (and, indeed, does occur).

It would do the same thing if you voted B[5] C[4] D[0]. That is indeed the optimal vote in a wide range of information environments.

No, individuals acting individually are motivated by the individual consequences of their own actions, not the collective consequences of aggregate action. Voter 2 controls only Voter 2’s vote, he gets a better result by bullet voting against D, and he couldn’t care less how that makes the other voters feel.

P.S. Here’s that Electowiki