I’ve noticed a few cardinal PR method inventors creating elaborate tiebreaking methods, but when is anything more than random selection necessary?

# A compromise between Vote Unitarity and Thiele PR methods

If you are going to have capped surplus handling rather than reweighted surplus handling (I do not like this aspect of SSS, and would prefer that scores be multiplied by the ballot’s weight, which both SSS and SSQ are compatible with), then nonrandom tiebreaking is necessary to pass Pareto. If you are going to completely exhaust ballots, as all quota-based surplus handling does, you also need nonrandom tiebreaking to pass Pareto.

The tiebreaker I would have liked to use would have been the left derivative of f at q (smallest wins, if that’s equal then use the next left derivative), but that won’t work for capping.

But the odds of a tie are overwhelmingly low, and unlikely to do much harm. I suppose it makes sense to prove mathematical properties for the system though.

I think there are interesting arguments for either method. Capped surplus handling (which I assume means that a ballot scoring A5 B3 C2 becomes A3 B3 after C is elected) maximizes the chance that your ballot influences the election towards someone you scored, but reweighted surplus handling maximizes the chance that the election is influenced towards someone you favor most. Capped surplus handling seems to make more sense in the philosophy of consensus PR (though reweighted still can be mixed in), while reweighted surplus handling ought to make more sense for more “proportional” forms of cardinal PR, like allocated and Monroe methods.
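The difference can be sketched on the single ballot from the example above. This is a minimal illustration under my reading of the two rules; the function names, and the particular cap and weight values, are assumptions rather than anything defined in the thread:

```python
# Sketch of the two surplus-handling styles applied to one ballot after
# C is elected. The cap of 3 and the weight of 0.5 are illustrative.

def cap_ballot(scores, cap):
    """Capped: clip every remaining score to the ballot's cap."""
    return {c: min(s, cap) for c, s in scores.items()}

def reweight_ballot(scores, weight):
    """Reweighted: multiply every remaining score by the ballot's weight."""
    return {c: s * weight for c, s in scores.items()}

ballot = {"A": 5, "B": 3}            # C removed after being elected
print(cap_ballot(ballot, 3))         # {'A': 3, 'B': 3}
print(reweight_ballot(ballot, 0.5))  # {'A': 2.5, 'B': 1.5}
```

Under capping, the 5 and the 3 become indistinguishable; under reweighting, their ratio is preserved.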

But if I know that, for the last few seats, my contributions to candidates I scored 5/5 and to candidates I scored 2/5 are going to be treated identically, then it’s going to make me hesitant to give the candidates I actually support 2/5 any points at all.

But if you reweight, and the candidates you scored a 5/5 didn’t have a chance of winning, but the candidates you support 2/5 did, you’d be much less likely to be able to help elect the 2/5’s over the candidates you dislike. Ideally, I think the choice of capping vs. reweighting should be up to the voter, similar to allocation vs. unitarity, but because capping is so much simpler, it may end up as a default choice. Maybe a simulation with Keith’s code with capping vs. reweighting could help show the quality difference.

But the consensus candidates it helps might only be consensus mediocre. For example:

2 quotas: A5 B5 C2 D1 E0 F0 1.25

2 quotas: A0 B0 C2 D1 E5 F5 1.25

(4 seats)

The capping winners are A, E, C, D.

The reweighting winners are A, E, B, F.

If C and D had 3 and 4 point averages, I could understand favoring them over the partisans and not going full Monroe. But utilitarian selection already aids consensus candidates enough. In the 4th round, each ballot’s cap is 1.25, so capping interpreted that as every voter expressing 80% support for D. In reality they expressed 20% support for D.
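The arithmetic behind that complaint, written out (the 1.25 cap and the 0–5 scale are taken from the example; the variable names are mine):

```python
# With a cap of 1.25, a score of 1 for D passes through unchanged, so D
# draws 1/1.25 = 80% of each ballot's available power, even though the
# sincere support expressed was only 1 out of 5.
cap, score_for_D, MAX = 1.25, 1, 5
contribution = min(score_for_D, cap)
print(contribution / cap)   # 0.8 -> read by capping as 80% support
print(score_for_D / MAX)    # 0.2 -> the 20% support actually expressed
```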

So when you consider what a candidate’s average score should be, reweighting makes more sense than capping.

There are no real consensus candidates in this example, and I don’t think it’d be harmful for voters to zero out C and D under capping. Now, if you had a more nuanced example where some more utilitarian candidates win, then some more partisans, then maybe for the final seats the “lowest common denominator” compromise candidates win, even then I suspect voters could just zero out those low-utility compromises and keep their scores high for the more high-utility compromises with little loss to ballot power. The one scenario where capping may struggle is when a voter gives a 1/5 to the most appealing candidate on the other side, but here, the voter can easily judge that such a candidate is high-utility at least for a large portion of the electorate, and so choose whether the weakening of their own favorites’ chances is worth it.

Largely, I think any kind of utility-based voting, whether single-winner or multi-winner, is going to require strategy for voters to get the results they really want. Ideally, the voter would have the option to have their ballot reweighted or capped, since which is better often turns on the voter’s own values and the particular election scenario rather than on one single “best” judgement. If it’s an either/or choice, I think capping makes it a bit easier to make the election turn out how you want; with reweighting, you will have a significantly harder time compromising once any of your highly scored options wins, and then we are back to the problem that STV and FPTP-based PR methods have, where voters must give more preference to a compromise than to their favorite in order to give the compromise any chance. But considering that most PR proposals work within 5-seat districts at most, and that almost all forms of cardinal PR (other than Sequential Monroe, and maybe others I’m unaware of?) select the utilitarian winner for the first seat, I don’t think the reduction in ability to compromise under reweighting will be very problematic. At worst, it makes it harder for a voter to feel represented to some degree by all 5 of the district’s representatives, but they still feel connected to the utilitarian winner at worst.

The point of the example is that if the voters don’t zero them out, then C and D win even though no one likes them; it shows how capping can promote mediocrity as much as consensus.

An alternate way of extending the approval case of this method to Score, based on Fastest to Quota instead of SSS:

Each ballot starts with weight 1. The quota starts at a Hare Quota. (Although you could do this with an uncapped quota, it would defeat the purpose. Probably.)

Each round, elect the candidate that can reach MAX*Quota points on the fewest ballots, counted by weight. If the last ballot added would cause a candidate to exceed the quota, then only count the portion of the ballot necessary to meet quota. (E.g., if the scale is 0–9, the quota is 35 points, and a candidate has max scores on 4 full-weight ballots, then the number of ballots needed to meet quota is 35/9 = 3.888….) Note that the best way to do this is always to add higher scores first.
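A sketch of that count, under my reading of the rule; `ballots_needed` and its signature are my own names, not anything from the method’s definition:

```python
# Fewest (weighted) ballots on which a candidate reaches the point
# target, adding higher scores first and counting only the fraction of
# the last ballot needed to meet quota.

def ballots_needed(scored_ballots, target_points):
    """scored_ballots: (score, weight) pairs for one candidate.
    Returns the possibly-fractional ballot weight needed to reach
    target_points, or None if the candidate cannot reach it."""
    used = 0.0
    points = 0.0
    for score, weight in sorted(scored_ballots, reverse=True):
        if score <= 0:
            break
        if points + weight * score >= target_points:
            # Count only the portion of this ballot needed to meet quota.
            return used + (target_points - points) / score
        used += weight
        points += weight * score
    return None

# The example from the text: 0-9 scale, 35-point quota, four full-weight
# ballots all giving the maximum score of 9.
print(ballots_needed([(9, 1.0)] * 4, 35))  # 3.888...
```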

Surplus handling:

If a candidate has more ballots giving them the lowest score used to make quota than they need, treat all ballots at that lowest score as contributing p * ballot_weight * Score points towards meeting quota, where p is the same for every ballot at that score and is chosen so that the candidate’s total contribution is MAX*Hare_Quota.

A total of one quota of weight is removed from the ballots. Each ballot pays in proportion to the score contributed.
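As I read the surplus rule, p can be computed per score tier: higher tiers contribute in full, and the lowest tier used is scaled by a common p so the total lands exactly on the point target. A sketch, with names that are mine rather than the method’s:

```python
# Group a candidate's weighted scores into tiers, spend the higher
# tiers in full, and scale the lowest tier used by a common factor p so
# the total contribution equals the point target (MAX * Hare_Quota).

def surplus_factor(scored_ballots, target_points):
    """scored_ballots: (score, weight) pairs. Returns (lowest score
    tier used, p), or None if the candidate cannot make quota."""
    tiers = {}
    for score, weight in scored_ballots:
        if score > 0:
            tiers[score] = tiers.get(score, 0.0) + weight
    points = 0.0
    for score in sorted(tiers, reverse=True):
        tier_points = tiers[score] * score
        if points + tier_points >= target_points:
            return score, (target_points - points) / tier_points
        points += tier_points
    return None

# Same numbers as before: four full-weight 9s against a 35-point quota.
# Each ballot contributes p * 1.0 * 9 points, with p = 35/36.
print(surplus_factor([(9, 1.0)] * 4, 35))  # (9, 0.9722...)
```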

This is all the same as the original method. What changes is:

*Steps for when no one makes quota:*

Reduce the quota, retroactively increasing each previously elected candidate’s surplus. (Maintain the order in which ballots are used: since ballots giving the candidate a higher score are used first, ballots giving the candidate a lower score should be restored first.) When the quota has been reduced enough that a candidate is able to make quota, that candidate wins.

The goal of this alternate extension is to incorporate the deficit adjustment into a method with a Monroe-like prioritization of higher scoring ballots. Actually using Monroe would be silly, since the only way a Monroe score meets quota is if all ballots involved give the candidate a maximum score. It would essentially be the approval case with all nonmaxed candidates considered disapproved. Using Fastest to Quota gets around that.

What do you think of taking the highest “Score - Mean of Scores on Ballot” first?

I don’t like it for this purpose.

1 quota: A5 B5 C4 [other candidates: 9 4s, 3 0s] avg score 3.333

another ballot: A2 B2 C5 [12 0s] avg score 0.6

[Other ballots that don’t give points to A and B.]

Right now, A and B are effectively tied, each reaching a Hare quota of points on a Hare quota of ballots (specifically, the first group of ballots I list). But if the other ballot I list raises A by a point or two, that ballot will come earlier in the order in which A’s ballots are counted (its mean score is low, so its Score − Mean for A is high), while giving A fewer points than a ballot from the first quota would have. So A will reach a quota of points after B, since B will still reach a full quota on the minimum number of ballots.

Here is an argument in favor of capping:

(https://www.reddit.com/r/RanktheVote/comments/d09lyi/an_interesting_take_on_pr/ez8nu2e/)

Number | Ballots
---|---
49 | a1:5 a2:5 a3:5 b1:0 b2:0 b3:0
17 | a1:0 a2:0 a3:0 b1:5 b2:4 b3:4
17 | a1:0 a2:0 a3:0 b1:4 b2:5 b3:4
17 | a1:0 a2:0 a3:0 b1:4 b2:4 b3:5

Because the majority differentiated between their candidates, they can’t give full vote power to all of their candidates in the final round, so the minority wins a majority of the seats under scaling. One solution is to run a simulation allocating all of the ballots to winners before the final round, and then pick the candidate with the most ballots preferring them in the final round; but an even easier one (here) is capping: it eliminates the differentiation the majority gave their candidates after their first candidate wins, allowing them to push equally for all of them in the final round.

Maybe there is some amalgamation of the two that’s possible?

If you’re going to insist on passing the Droop Proportionality criterion, the fact that you’re using score immediately means you fail.

Also, capping’s solution is to essentially ignore the fact that there was ever differentiation to begin with.

For this specific case, both optimal and Sequential Monroe get the desired result. So does the method I described in

(Specifically the 2nd post. I designed both methods in that thread to prove a point, but the method in the first post was especially designed to prove a point.)

The score of 2 As and 1 B is 94 1/3, and the score of 2 Bs and 1 A is 94 2/3.

SSQ leaves B further behind for the 3rd seat.

Does the narrow difference between the scores mean that this example is still very likely to yield 2 As if slightly tweaked?

Also, does the improved resolution of a 0 to 10 scale merit any possible reduction in political viability if it helps mitigate this scenario?

There are a few things about this scenario that you could tweak. Let’s see what they would do.

- Changing the margin: a narrower margin would allow A to win. On the other hand, the margin is fairly narrow already, and I think there is limited value in preserving majoritarianism in virtual ties.
- Changing the score which B subfactions give to members of B outside their subfaction. If they decided to start giving them 3s instead of 4s, that would allow A to win.
- Changing how precisely the subfactions of B divided themselves. The exact precision with which they are divided seems fairly unlikely, especially since they aren’t trying to do it, as in vote management examples. If the B3 faction shrinks by X, and that support is transferred to B1, the swing is X points in favor of {A1 B1 B2}, so it appears that precise division is a bad case for this method. Moving members from the smallest faction to the largest seems to help B. Shrinking the second-smallest, however, hurts it; I’m fairly sure A would win 2 seats if the division were 19-16-16. However, shrinking the smallest faction enough can offset this even if all the gains go only to the largest faction. For example, B would probably win two seats with 30-16-5.
- Changing how many candidates each B voter considers to be ideal. The fact that there were no B loyalists whatsoever seems unrealistic. Nobody voted B1 5 B2 5 B3 4 or anything like that. This would clearly help B.

The third and fourth changes are likely to make the example more realistic, and they generally help B. The second change seems potentially problematic in the real world as well, although you could argue that it shows the B candidates don’t really have the strongest support, since their faction is so incohesive.

I think a good argument against the example is that the A voters sacrificed their ability to differentiate who within their coalition would win; they didn’t have any impact on that battle. Maybe the best way to look at it is that if you’re on the verge of capturing or losing a seat, min-maxing is a good idea, and STV does marginally better in these scenarios (though elimination by first choice may not be very good or useful overall?).

The fact that STV delivers a majority to B in this case is a consequence of Droop Proportionality being the basis on which ordinal methods justify being called PR. Any ordinal PR method would do that. There are ordinal PR methods besides STV, which is not a particularly good one. Ordinal generally lacks a concept of Independent Proportionality as I have described for score methods.

What do you mean by IIB? I’m a novice interested in better democracy and voting methods.

Independence of Irrelevant Ballots: basically, if a ballot gives every candidate the same rating, it should not influence the outcome.