A compromise between Vote Unitarity and Thiele PR methods

Here’s a GIF that shows how your method works in one round:
[GIF: one round of the method]

Here’s the picture for the whole thing:
[image: the full election]

Your method could be thought of as: once all candidates fall below the quota, the winner is not the candidate with the most points, but rather the candidate who gets the largest proportion of their original score back when the quota that previous candidates needed to win is lowered by some variable amount. This means you can compute it by simply reducing the quota by a fixed step (say 1), running surplus handling, and then electing the candidate who recovers the largest proportion of their original score. If you want to find the exact q-value from there for completeness, you can solve an equation that balances the rate at which the winner’s score increases against how much you lower the quota, and then use that to give the exact number of points they and the other candidates have once the round is over.
Someone with explosive growth but who is very far from the quota can lose to someone with very slow growth but who is very close to the quota. I really don’t know how your method could be explained to people in examples, because the math required to compute it is, for each candidate, an equation of the form (current score of candidate + points restored from quota lowering = quota); you have to solve this equation for each candidate and then take the highest resulting quota value. GIFs make it easier to show, though, since you can show the quota line going left and the restored points making each candidate’s point bar inch right, until you intuitively see the first candidate meet the quota.
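Here is a minimal sketch of that computation in Python. The scores and restore rates are made up, and I’m assuming restoration is locally linear (each candidate recovers points at a constant rate per point of quota reduction), which holds between capping breakpoints:

```python
# Hypothetical round state: each candidate's current (reduced) score, and the
# rate at which points flow back to them per point of quota reduction.
Q0 = 8.0                          # quota at the start of the round
candidates = {                    # name: (current score, restore rate)
    "X": (6.0, 0.2),              # very close to quota, slow growth
    "Y": (3.0, 1.5),              # far from quota, explosive growth
}

def q_value(score, rate):
    # Solve: score + rate * (Q0 - q) = q  =>  q = (score + rate * Q0) / (1 + rate)
    return (score + rate * Q0) / (1 + rate)

qs = {c: q_value(s, r) for c, (s, r) in candidates.items()}
print(qs)                         # {'X': 6.33..., 'Y': 6.0}
print(max(qs, key=qs.get))        # 'X' -- slow growth but close beats explosive but far
```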
Ideally I’d like to make a GIF where the quota moves leftward at a constant pace (with the quota value itself going down every frame with an effect similar to https://www.shutterstock.com/video/clip-2610635-grid-rapidly-changing-numbers), the points restored bar for each candidate moves rightward at a constant pace (with the point value for the candidates increasing with a similar effect), and we see a marker for where the old quota was. The already-elected candidates have the portion of their black bar that’s between the old quota marker and the current quota marker turned blue, and you’d be able to tell that the points restored bars have some proportion to that blue value. And once some candidate reaches the current quota, maybe have their points bar flash in various colors to visually highlight that “We have a winner!!!” But I have no idea how to make a GIF that advanced, unless I manually do each frame. Does anyone know how to make the code for generating such a thing? It could probably be done very quickly in similar fashion to (https://twitter.com/ElectionsNI/status/827129018944204800, http://electionsni.org/results, and https://www.bbc.com/news/av/uk-northern-ireland-36214503/ni-assembly-election-how-does-the-stv-system-work)
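For what it’s worth, here’s a rough sketch of how such a GIF could be generated with matplotlib’s FuncAnimation. This is just one possible approach; all the bar data is made up, and the scrambling-number and flashing-winner effects are left out:

```python
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter  # PillowWriter needs the pillow package

# Hypothetical data: current scores and restore rates for three candidates.
names = ["A", "B", "C"]
scores = [6.0, 5.0, 4.5]
rates = [0.2, 0.6, 1.0]                    # points restored per point of quota reduction
Q0 = 8.0                                   # quota at the start of the round

fig, ax = plt.subplots()
bars = ax.barh(names, scores, color="black")
ax.axvline(Q0, color="gray", linestyle="--")   # marker for where the old quota was
quota_line = ax.axvline(Q0, color="red")       # the moving quota
ax.set_xlim(0, 10)
ax.set_xlabel("points")

def update(frame):
    q = Q0 - 0.02 * frame                      # quota drifts left at a constant pace
    quota_line.set_xdata([q, q])
    for bar, s, r in zip(bars, scores, rates):
        bar.set_width(s + r * (Q0 - q))        # restored points inch each bar right
    return list(bars) + [quota_line]

anim = FuncAnimation(fig, update, frames=100, interval=50)
anim.save("round.gif", writer=PillowWriter(fps=20))
```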

I have defined a tiebreaker for this method, with the goal of passing Pareto. The idea is that tied candidates are separated by further reducing the quota until one of them has a higher score than the other. To define the tiebreaker, I will describe a coerced SSS election as one that must elect candidates based on the order that they appear on an ordered list until the end of the list is reached, and that must use a prespecified quota size when handling surpluses rather than the Hare quota. Suppose it is currently the (n+1)th round of the real election, and we conduct a coerced SSS election using the ballots of the real election, where the list is the order in which the first n candidates were elected. Each candidate’s score in the (n+1)th round of the coerced SSS election depends on the quota size used. For each unelected candidate U, we can define a function f with domain [0,∞), where if x is the quota size used for surplus handling in the coerced SSS election I described, then f(x) is the adjusted sum score that the coerced SSS election returns for U in the (n+1)th round. A candidate’s q-score is the unique value q in the domain of f such that f(q)=q. If two candidates A and B are tied for q-score, then the tie is broken like this:

Let fA be the function described for A, and let fB be the function described for B. If there is a closed interval with right endpoint q where fA(x)>=fB(x) for all x on the interval, and fA(y)>fB(y) for at least one y on the interval, then A has priority. If neither A nor B has priority over the other, then the tie is broken randomly. For ties of more than 2 candidates, eliminate every candidate over whom some other candidate has priority, then choose randomly among the remaining candidates.
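In symbols, with the caveat that uniqueness of the q-score assumes f is non-increasing in the quota size (which should hold, since a larger quota spends more ballot weight, making x - f(x) strictly increasing):

```latex
f_U : [0,\infty) \to \mathbb{R}, \quad f_U(x) = U\text{'s adjusted sum score in round } n+1 \text{ under quota } x
\text{q-score}(U) = \text{the unique } q \text{ such that } f_U(q) = q
A \succ B \iff \exists\, a < q : f_A(x) \ge f_B(x)\ \forall x \in [a,q],\ \text{and } f_A(y) > f_B(y) \text{ for some } y \in [a,q]
```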

So in the 2-winner election where the only ballot cast is A5 B5 C3, A and B are tied for the first seat, and it is randomly given to (let’s say) A. Both B and C have q-scores of 2.5, so they are tied. However, fB(x)=5-x and fC(x)=min(3,5-x), so B has priority; take, for example, the interval [1, 2.5].
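A quick numeric sanity check of that example (just a sketch; the bisection assumes f is non-increasing, per the note above):

```python
fB = lambda x: 5 - x
fC = lambda x: min(3, 5 - x)

def q_score(f, lo=0.0, hi=10.0, tol=1e-9):
    # Bisect for the fixed point f(q) = q; valid when f is non-increasing.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) >= mid:
            lo = mid
        else:
            hi = mid
    return lo

print(round(q_score(fB), 6), round(q_score(fC), 6))   # 2.5 2.5 -- a tie
# Priority check on [1, 2.5]: fB >= fC everywhere, strictly on [1, 2).
print(all(fB(x) >= fC(x) for x in (1, 1.5, 2, 2.25, 2.5)), fB(1) > fC(1))  # True True
```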

I’ve noticed a few cardinal PR method inventors creating elaborate tiebreaking methods, but when is anything more than random selection necessary?

If you are going to have capped surplus handling rather than reweighted surplus handling (I do not like this aspect of SSS, and would prefer that scores be multiplied by the ballot’s weight, which both SSS and SSQ are compatible with), then nonrandom tiebreaking is necessary to pass Pareto. Likewise, if you are going to completely exhaust ballots, as all quota-based surplus handling does, you need nonrandom tiebreaking to pass Pareto.

The tiebreaker I would have liked to use would have been the left derivative of f at q (smallest wins; if that’s equal, use the next higher-order left derivative), but that won’t work with capping.

But the odds of a tie are overwhelmingly low, and unlikely to do much harm. I suppose it makes sense to prove mathematical properties for the system though.

I think there are interesting arguments for either method. Capped surplus handling (which I assume means that a ballot scoring A5 B3 C2 becomes A3 B3 after C is elected) maximizes the chance that your ballot influences the election towards someone you scored, but reweighted surplus handling maximizes the chance that the election is influenced towards someone you favor most. Capped surplus handling seems to make more sense in the philosophy of consensus PR (though reweighted still can be mixed in), while reweighted surplus handling ought to make more sense for more “proportional” forms of cardinal PR, like allocated and Monroe methods.
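To make the two transforms concrete, here is the A5 B3 C2 ballot under each, assuming C’s election left the ballot with weight 0.6, so its cap is 0.6 × 5 = 3 points (the weight is made up for illustration):

```python
ballot = {"A": 5, "B": 3, "C": 2}
w = 0.6                  # remaining ballot weight after C's election (hypothetical)
cap = w * 5              # remaining points on a 0-5 scale: 3.0

capped = {c: min(s, cap) for c, s in ballot.items()}   # {'A': 3.0, 'B': 3, 'C': 2}
reweighted = {c: s * w for c, s in ballot.items()}     # {'A': 3.0, 'B': 1.8, 'C': 1.2}
# (C is already elected, so its entry no longer matters in either case.)
```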

But if I know that, for the last few seats, my contributions to candidates I scored 5 and to candidates I scored 2 are going to be treated identically, then it’s going to make me hesitant to give candidates I actually support at 2/5 any points.

But if you reweight, and the candidates you scored a 5/5 didn’t have a chance of winning, but the candidates you support 2/5 did, you’d be much less likely to be able to help elect the 2/5’s over the candidates you dislike. Ideally, I think the choice of capping vs. reweighting should be up to the voter, similar to allocation vs. unitarity, but because capping is so much simpler, it may end up as a default choice. Maybe a simulation with Keith’s code with capping vs. reweighting could help show the quality difference.

But the consensus candidates it helps might only be consensus mediocre. For example:
2 quotas: A5 B5 C2 D1 E0 F0 (ballot cap 1.25 by the 4th round)
2 quotas: A0 B0 C2 D1 E5 F5 (ballot cap 1.25 by the 4th round)
(4 seats)
The capping winners are A, E, C, D.
The reweighting winners are A, E, B, F.
If C and D had 3- and 4-point averages, I could understand favoring them over the partisans and not going full Monroe. But utilitarian selection already aids consensus candidates enough. In the 4th round, each ballot’s cap is 1.25, so capping interpreted every voter’s 1 for D as expressing 80% support (1 out of the remaining 1.25). In reality they expressed 20% support (1 out of 5).
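Here is a quick back-of-the-envelope check of the example (a sketch under my reading of it: after A and E are seated, every ballot has weight 0.5, i.e. a 2.5-point cap):

```python
w = 0.5
blocs = {                      # two blocs, 2 quotas of ballots each
    "AB bloc": {"B": 5, "C": 2, "D": 1},
    "EF bloc": {"F": 5, "C": 2, "D": 1},
}

def capped(cand):
    return sum(2 * min(s.get(cand, 0), 5 * w) for s in blocs.values())

def reweighted(cand):
    return sum(2 * s.get(cand, 0) * w for s in blocs.values())

for cand in "BCDF":
    print(cand, capped(cand), reweighted(cand))
# Capping round 3:     C = 8 beats B = F = 5, so C wins; then at the 1.25-point
#                      cap, D = 4 beats B = F = 2.5, so D takes the last seat.
# Reweighting round 3: B = F = 5 beat C = 4, so B and F take the last two seats.
```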

So when you consider what a candidate’s average score should be, reweighting makes more sense than capping.

There are no real consensus candidates in this example, and I don’t think it’d be harmful for voters to zero out C and D under capping. Now, suppose you had a more nuanced example where some more utilitarian candidates win, then some partisans, and then maybe for the final seats the “lowest common denominator” compromise candidates win. Even then, I suspect voters could just zero out those low-utility compromises and keep their scores high for the high-utility compromises, with little loss of ballot power. The one scenario where capping may struggle is when a voter gives a 1/5 to the most appealing candidate on the other side; but here, the voter can easily judge that such a candidate is high-utility at least for a large portion of the electorate, and so choose whether the weakening of their own favorites’ chances is worth it.

Largely, I think any kind of utility-based voting, whether single-winner or multi-winner, is going to require strategy for voters to get the results they really want. Ideally, though, the voter would have the option of having their ballot reweighted or capped, since the judgement of which is better often turns on the voter’s own values and the particular election scenario rather than on one single “best” answer. If it’s an either/or choice, I think capping makes it a bit easier to make the election turn out how you want; with reweighting, you will have a significantly harder time compromising once any of your highly scored options wins, and then we are back to the problem STV and FPTP-based PR methods have, where voters must favor a compromise over their favorite just to give the compromise any chance. But considering that most PR proposals work within 5-seat districts at most, and that almost all forms of cardinal PR (other than Sequential Monroe, and maybe others I’m unaware of?) select the utilitarian winner for the first seat, I don’t think the reduced ability to compromise under reweighting will be very problematic. At worst, it makes it harder for a voter to feel cross-represented to some degree by all 5 of the district’s representatives, but they still feel connected to at least the utilitarian winner.

The point of the example is that if the voters don’t zero them out, then C and D win even though no one likes them; it shows how capping can promote mediocrity as much as consensus.


An alternate way of extending the approval case of this method to Score, based on Fastest to Quota instead of SSS:

Each ballot starts with weight 1. The quota starts at a Hare Quota. (Although you could do this with an uncapped quota, it would defeat the purpose. Probably.)
Each round, elect the candidate that can reach MAX*Quota points on the fewest ballots, counted by weight. If the last ballot added would cause a candidate to exceed the quota, then count only the portion of the ballot necessary to meet quota. (E.g., if the scale is 0-9 and the quota is 35 points, and a candidate has 4 max scores on full-weight ballots, then the number of ballots needed to meet quota is 3.888.) Note that the best way to do this is always to add higher scores first.
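A sketch of that count (my code, not part of the method’s definition): given a candidate’s (score, weight) pairs, it returns the weighted number of ballots needed to reach the point target, adding higher scores first:

```python
def ballots_to_quota(pairs, target):
    """pairs: (score, ballot_weight) per ballot for one candidate.
    Returns the weighted number of ballots needed to reach `target` points."""
    points = used = 0.0
    for score, weight in sorted(pairs, key=lambda p: -p[0]):   # higher scores first
        if score <= 0:
            break                                    # zero scores can't contribute
        if points + score * weight >= target:
            return used + (target - points) / score  # fractional final ballot
        points += score * weight
        used += weight
    return float("inf")                              # candidate can't reach the target

# The example from the text: 0-9 scale, 35-point target, four full-weight 9s.
print(ballots_to_quota([(9, 1.0)] * 4, 35))          # 3.888...
```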

Surplus handling:
If a candidate has more ballots giving them the lowest score used to make quota than they need, treat every ballot with that lowest score as contributing p * ballot_weight * score points towards meeting quota, where p is the same for all such ballots and is chosen so that the total amount contributed is MAX*Hare_Quota.

A total of one quota of weight is removed from the ballots. Each ballot pays in proportion to the score contributed.
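One way to write that fraction p explicitly (my notation: s* is the lowest score used to make quota, and the sums run over the candidate’s contributing ballots b with score s_b and weight w_b):

```latex
p \;=\; \frac{\mathrm{MAX}\cdot Q_{\mathrm{Hare}} \;-\; \sum_{b:\, s_b > s^{*}} s_b\, w_b}{\sum_{b:\, s_b = s^{*}} s^{*}\, w_b}
```

That is, higher-scoring ballots contribute in full, and the threshold-score ballots split the remainder in proportion to their weight.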

This is all the same as the original method. What changes is:

Steps for when no one makes quota:
Reduce the quota, retroactively increasing each previously elected candidate’s surplus. (Maintain the order in which ballots are used: since ballots giving the candidate a higher score are used first, ballots giving the candidate a lower score should be restored first.) When the quota has been reduced enough that some candidate is able to make quota, that candidate wins.
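A sketch of just the restoration bookkeeping, assuming each elected candidate keeps their (score, weight spent) contributions in the order the ballots were used; how much total weight to hand back when the quota drops (`refund`) is left as an input here:

```python
def restore(spent, refund):
    """spent: list of (score, weight_spent) pairs, higher scores first (the order
    in which ballots were used). Hands weight back lowest-scoring ballots first."""
    given_back = []
    for score, w in reversed(spent):        # lower scores were used last, so they
        back = min(w, refund)               # are the first to be restored
        given_back.append((score, back))
        refund -= back
        if refund <= 0:
            break
    return given_back
```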

The goal of this alternate extension is to incorporate the deficit adjustment into a method with a Monroe-like prioritization of higher scoring ballots. Actually using Monroe would be silly, since the only way a Monroe score meets quota is if all ballots involved give the candidate a maximum score. It would essentially be the approval case with all nonmaxed candidates considered disapproved. Using Fastest to Quota gets around that.

What do you think of taking the highest “Score - Mean of Scores on Ballot” first?

I don’t like it for this purpose.
1 quota: A5 B5 C4 [other candidates: nine 4s, three 0s] (average score 3.33)
another ballot: A2 B2 C5 [twelve 0s] (average score 0.6)
[Other ballots that don’t give points to A and B.]
Right now, A and B are effectively tied, each reaching a Hare quota of points on a Hare quota of ballots, specifically the first group of ballots I list. But if the other ballot I list raises A by a point or two, then it will come earlier in the order that A’s ballots are counted, while also giving A fewer points than a ballot from the first quota would have. So A will reach a quota of points after B as a result, since B still reaches a full quota on the minimum number of ballots.
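The arithmetic behind that, assuming 15 candidates on each ballot:

```python
quota_ballot = [5, 5, 4] + [4] * 9 + [0] * 3   # A5 B5 C4, nine 4s, three 0s
other_ballot = [2, 2, 5] + [0] * 12            # A2 B2 C5, twelve 0s

print(5 - sum(quota_ballot) / 15)   # A's "score - mean" on a quota ballot: 1.67
# If the other ballot raises A from 2 to 4, its mean rises to 11/15 = 0.73, and:
print(4 - 11 / 15)                  # 3.27 -- this ballot is now counted before
                                    # the quota ballots, yet gives A only 4 points.
```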

Here is an argument in favor of capping:
(https://www.reddit.com/r/RanktheVote/comments/d09lyi/an_interesting_take_on_pr/ez8nu2e/)

Number Ballots
49 a1:5 a2:5 a3:5 b1:0 b2:0 b3:0
17 a1:0 a2:0 a3:0 b1:5 b2:4 b3:4
17 a1:0 a2:0 a3:0 b1:4 b2:5 b3:4
17 a1:0 a2:0 a3:0 b1:4 b2:4 b3:5

Because the majority differentiated between their candidates, the minority wins a majority of the seats under scaling: the majority can’t give full voting power to all of its candidates in the final round. One solution is to run a simulation allocating all of the ballots to winners before the final round and then pick the candidate with the most ballots preferring them in the final round, but an even easier one (here) is capping; it eliminates the differentiation the majority gave their candidates after their first candidate wins, letting them push equally for all of their candidates in the final round.

Maybe there is some amalgamation of the two that’s possible?

If you’re going to insist on passing the Droop Proportionality criterion, the fact that you’re using score immediately means you fail.

Also, capping’s solution is to essentially ignore the fact that there was ever differentiation to begin with.

For this specific case, both optimal and Sequential Monroe get the desired result. So does the method I described in [linked thread].

(Specifically the 2nd post. I designed both methods in that thread to prove a point, but the method in the first post was especially designed to prove a point.)
The score of {2 As, 1 B} is 94 1/3, and the score of {2 Bs, 1 A} is 94 2/3.

SSQ leaves B further behind for the 3rd seat.

Does the narrow difference between the scores mean that this example is still very likely to yield 2 As if slightly tweaked?

Also, does the improved resolution of a 0 to 10 scale merit any possible reduction in political viability if it helps mitigate this scenario?

There are a few things about this scenario that you could tweak. Let’s see what they would do.

  1. Changing the margin: a narrower margin would allow A to win. On the other hand, the margin is fairly narrow already, and I think there is limited value in preserving majoritarianism in virtual ties.
  2. Changing the score which B subfactions give to members of B outside their subfaction. If they decided to start giving them 3s instead of 4s, that would allow A to win.
  3. Changing how precisely the subfactions of B divided themselves. The exact precision with which they are divided seems fairly unlikely, especially since they aren’t trying to do it, as in vote management examples. If the B3 faction shrinks by X and those votes are transferred to B1, the swing is X points in favor of {A1 B1 B2}, so it appears that precise division is a bad case for this method. Moving members from the smallest faction to the largest seems to help B. Shrinking the second-smallest, however, hurts it; I’m fairly sure A would win 2 seats if the division were 19-16-16. However, shrinking the smallest faction enough can offset this even if all the gains go only to the largest faction. For example, B would probably win two seats at 30-16-5.
  4. Changing how many candidates each B voter considers to be ideal. The fact that there were no B loyalists whatsoever seems unrealistic. Nobody voted B1 5 B2 5 B3 4 or anything like that. This would clearly help B.

3 and 4 are likely to cause the example to become more realistic, and they generally help B. The second case seems potentially problematic in the real world as well, although you could make a case that it shows that the B candidates don’t really have the strongest support, since their faction is so incohesive.


I think a good argument against the example is that the A voters sacrificed their ability to differentiate who within their coalition would win; they didn’t have any impact on that battle. Maybe the best way to look at it is that if you’re on the verge of capturing or losing a seat, min-maxing is a good idea, and STV does marginally better in these scenarios (though elimination by first choices may not be very good or useful overall?).

The fact that STV delivers a majority to B in this case is a consequence of Droop Proportionality being the basis on which ordinal methods justify being called PR. Any ordinal PR method would do that. There are ordinal PR methods besides STV, which is not a particularly good one. Ordinal generally lacks a concept of Independent Proportionality as I have described for score methods.