I am starting a fresh thread for Equal Vote’s “0-5 Proportional Research Committee”, organized by Sara Wolf (the “Wolf Committee” for short). Any regular on this forum is likely aware of this work from prior threads:

https://forum.electionscience.org/t/different-reweighting-for-rrv-and-the-concept-of-vote-unitarity

https://forum.electionscience.org/t/utilitarian-sum-vs-monroe-selection

https://forum.electionscience.org/t/re-what-are-the-best-ways-to-do-allocated-cardinal-pr-in-public-elections

I would NOT recommend going back and reading them now due to their length. Instead I will do my best to summarize.

*Quick summary*

The idea of the committee was to pull together experts and methods to try to come up with the best multi-winner score system. This could also be a set of systems, as in the single-winner case where Score, Approval and STAR all have different trade-offs. Even if we only agree on a few systems, that is a better place than where we are now. The following results exclude Optimal methods, which are too computationally expensive to be viable in the simulation I have written.

The system I invented was intended to replace STV and is somewhat different in conception from RRV. It is based on a slightly different philosophy of how to extend proportional representation beyond the simple partisan bullet-voting definition: it is more Monroe in philosophy, whereas RRV is more Thiele. Where RRV reweights following the Jefferson method by 1/(1+SUM/MAX), my system subtracts the amount of score a voter gave to all elected candidates from their total MAX score until it reaches zero. Surplus handling is done by subtracting only a fraction of the amount to compensate. If this is unclear there is more here. The idea was that each person is given a total score, and that this vote power should be conserved under the reweighting transform at each sequential election of a candidate. This would be a unitary transformation in physics, so I called it Vote Unitarity. The name of the system is Sequentially Spent Score (SSS).
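
To make the contrast concrete, here is a minimal sketch of the two reweighting rules as I understand them, using hypothetical candidate names and omitting surplus handling:

```python
MAX = 5  # maximum score on the ballot

def jefferson_weight(ballot_scores, winners):
    # RRV-style reweighting: a ballot's weight is 1 / (1 + SUM/MAX),
    # where SUM is the total score it gave to already-elected winners.
    total = sum(ballot_scores[w] for w in winners)
    return 1.0 / (1.0 + total / MAX)

def unitary_budget(ballot_scores, winners):
    # SSS-style reweighting: each ballot starts with MAX score to spend,
    # and the score given to each winner is subtracted until it hits zero.
    budget = float(MAX)
    for w in winners:
        budget = max(0.0, budget - ballot_scores[w])
    return budget

ballot = {"A": 5, "B": 3, "C": 0}
print(jefferson_weight(ballot, ["A"]))      # 1 / (1 + 5/5) = 0.5
print(unitary_budget(ballot, ["A", "B"]))   # 5 - 5 - 3, floored at 0.0
```

Note how the Jefferson weight only asymptotically approaches zero, while the unitary budget is spent exactly and exhausts completely.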

In both SSS and RRV, the process is to elect and then reweight repeatedly until all winners are found. Selection is done by taking the utilitarian winner, i.e. the candidate with the highest sum of reweighted scores. What kicked off this work was that @parker_friedland pointed out that this may not be the best way to select: a better method may be to select by the highest sum of score within a Hare quota. He invented this as the selection step for an allocation system. Allocation systems assign voters to winning candidates and then completely exhaust their ballots.

When I originally implemented the code, I wrote it to select the highest sum of score in a Hare quota of voters. I should have done it in a Hare quota of ballots. The difference is that ballots are down-weighted, so a single voter could have only half a ballot remaining.

In any case, this gives three coded selection methods:

- Utilitarian: Sum of score
- Hare Voters: Sum of score in Hare quota of voters
- Hare Ballots: Sum of score in Hare quota of ballots
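
The two quota-based selections can be sketched as follows. This is my own illustrative reconstruction, not the committee's actual code; the function name and arguments are hypothetical:

```python
import pandas as pd

def hare_quota_score(S, weights, n_winners, use_ballot_weight=True):
    # S: DataFrame of scores (rows = voters, columns = candidates).
    # weights: remaining weight of each ballot (1.0 for a fresh ballot).
    # Returns the "sum of score in a Hare quota" for every candidate.
    quota = len(S) / n_winners  # Hare quota
    totals = {}
    for cand in S.columns:
        # Voters sorted by how highly they scored this candidate.
        order = S[cand].sort_values(ascending=False).index
        if use_ballot_weight:
            # "Hare Ballots": fill the quota with ballot *weight*,
            # so a half-spent ballot only counts as half a ballot.
            cum = weights.loc[order].cumsum()
            in_quota = order[cum.to_numpy() <= quota]
            totals[cand] = (S.loc[in_quota, cand] * weights.loc[in_quota]).sum()
        else:
            # "Hare Voters": fill the quota with whole voters.
            in_quota = order[: int(quota)]
            totals[cand] = S.loc[in_quota, cand].sum()
    return pd.Series(totals)
```

The candidate with the highest returned total is elected in that round. (The Utilitarian selection is just `S.mul(weights, axis=0).sum()` over all voters instead of a quota.)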

There are also three coded Reweight methods:

- Jefferson: Reweight by 1/(1+SUM/MAX)
- Unitary: Subtract SUM/MAX until exhausted
- Allocation: Exhaust whole ballots by allocating them to winners independent of the score given

Note that both Unitary and Allocation require surplus handling, and fractional surplus handling was implemented for both.
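
The fractional surplus idea is simple: when a winner's supporters hold more than a quota of ballot weight, each supporter only spends (or is exhausted by) a proportional fraction, so that exactly one quota is used in total. A minimal sketch, with a hypothetical function name:

```python
def surplus_factor(support_weight, quota):
    # Fraction of each supporting ballot to spend/exhaust so that
    # exactly one quota of ballot weight is used in total.
    if support_weight <= quota:
        return 1.0  # no surplus: supporters spend in full
    return quota / support_weight

# A winner backed by 1.5 quotas of ballot weight: each supporter
# only spends two thirds of what they would otherwise spend.
print(surplus_factor(1.5, 1.0))
```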

It was then suggested that we also try the KP transform; it was easy to implement, so I added it as well.
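
For those unfamiliar, the KP (Kotze-Pereira) transform splits each score ballot into MAX approval layers: on layer k a candidate is approved iff the voter scored them above k, so the approvals sum back to the original scores. A sketch:

```python
import numpy as np

def kp_transform(S, max_score=5):
    # Split each score ballot (rows = voters, columns = candidates)
    # into `max_score` approval ballots: on layer k a candidate is
    # approved iff the voter scored them strictly above k.
    layers = [(S > k).astype(int) for k in range(max_score)]
    return np.concatenate(layers, axis=0)  # stacked approval ballots

S = np.array([[5, 3, 0]])       # one voter, three candidates
A = kp_transform(S)
print(A.sum(axis=0))            # approvals sum back to [5, 3, 0]
```

An approval-based method run on the transformed ballots then behaves as a score method on the originals.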

All possible combinations can make a proportional multi-winner voting system, giving 3 x 3 x 2 = 18 possible systems. Not all of them are theoretically motivated or advocated for by anybody, but the idea was to try a few and see what we find. So I wrote some simulation code.
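
The 3 x 3 x 2 space of systems, using the naming convention from the results below:

```python
from itertools import product

selections = ["utilitarian", "hare_voters", "hare_ballots"]
reweights = ["jefferson", "unitary", "allocate"]
kp_options = [False, True]

# e.g. "utilitarian_jefferson_kp" is RRV selection/reweighting
# applied to KP-transformed ballots.
systems = [f"{s}_{r}" + ("_kp" if kp else "")
           for s, r, kp in product(selections, reweights, kp_options)]
print(len(systems))  # 18
```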

*Implementation*

I wrote some simulation code, which @psephomancy was able to fix so it runs in a reasonable time. https://github.com/endolith/Keith_Edmonds_vote_sim

I assume a 2D ideological space, [-10,10] by [-10,10], motivated by the political compass. I then simulate 10,000 voters in this space as members of ideological clusters (i.e. parties). I randomly select 2-7 parties and give each a random position in the 2D space. The 10,000 voters are randomly assigned to the parties, and their distance from the party center is drawn from a Gaussian distribution with a standard deviation between 0.5 and 2. Candidates are created at every grid point in the plane. They are deliberately not random, since we are trying to find the best system under optimal candidates. Also, when a candidate is elected I create a new one in the same ideological position.

The score each voter gives to each candidate is determined from their Euclidean distance, d, as score = 5 - 2.0*d, with 5 being the maximum score. I then set the score of the closest candidate to 5 to make scores more realistic. I do not expect the distances or the method of deriving scores to be particularly realistic. However, I do expect the distributions of score to span the space of realistic possibilities, which means the only unknown is the weighting within that space, which I take as uniform. I run this simulation 25,000 times and compute several metrics for comparison.
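
The generative model above can be sketched roughly as follows. This is a simplified reconstruction, not the repository's code; in particular, clipping negative scores to zero is my assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_voters(n_voters=10_000):
    # 2-7 ideological clusters ("parties") in [-10, 10]^2, each with
    # a Gaussian spread of std 0.5-2; voters are assigned to parties.
    n_parties = rng.integers(2, 8)
    centers = rng.uniform(-10, 10, size=(n_parties, 2))
    stds = rng.uniform(0.5, 2.0, size=n_parties)
    party = rng.integers(0, n_parties, size=n_voters)
    return centers[party] + rng.normal(0.0, stds[party, None],
                                       size=(n_voters, 2))

def score_ballots(voters, candidates, max_score=5):
    # score = 5 - 2*d from Euclidean distance d, clipped to [0, 5];
    # each voter's closest candidate is then bumped to the maximum.
    d = np.linalg.norm(voters[:, None, :] - candidates[None, :, :], axis=2)
    S = np.clip(max_score - 2.0 * d, 0, max_score)
    S[np.arange(len(voters)), d.argmin(axis=1)] = max_score
    return S
```

With score = 5 - 2d, any candidate more than 2.5 units away gets a zero, so each voter only meaningfully scores a local neighborhood of the grid.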

After a great deal of debate here and in direct messages, I settled on 12 metrics: 6 measures of utility and 6 measures of variance/polarization/equity. Below is a list of these metrics, along with some Python code for each based on a pandas DataFrame of scores, **S**, with one column per winner and one row per voter. Those who are code-savvy may find the code definitions simpler to understand. These metrics are all based on score, not the relative positions in the 2D space, which I do not expect to be accurate enough for metrics to be useful.

~ *Utility Metrics* ~

**Total Utility**

The total amount of score which each voter attributed to the winning candidates on their ballot. Higher values are better.

S.sum(axis=1).sum()

**Total Log Utility**

The sum of ln(1 + score spent) for each voter (log1p is used so that voters with zero utility do not give ln(0)). This is motivated by the log of utility being considered more fair in a number of philosophical works. Higher values are better.

np.log1p(S.sum(axis=1)).sum()

**Total Favored Winner Utility**

The total utility of each voter's most highly scored winner. This may not be their true favorite if they vote strategically, but all of these metrics assume honest voting. Higher values are better.

S.max(axis=1).sum()

**Total Unsatisfied Utility**

The total shortfall from MAX, summed over each voter who did not reach a total utility of MAX score. Lower values are better.

sum([1-i for i in S.sum(axis=1) if i < 1])

NOTE: The scores are normalized so MAX = 1

**Fully Satisfied Voters**

The number of voters who got winners with a total score of MAX or more. In the single-winner case, electing somebody you scored MAX would leave you satisfied. This translates to the multi-winner case if one can assume that the mapping of score to utility obeys Cauchy's functional equation, which essentially means that it is linear. Higher values are better.

sum([(i>=1) for i in S.sum(axis=1)])

**Totally Unsatisfied Voters**

The number of voters who did not score any winners. These voters had no influence on the election (other than through the Hare quota), so their ballots are wasted. Lower values are better.

sum([(i==0) for i in S.sum(axis=1)])

~ *Variance/Polarization/Equity Metrics* ~

**Utility Deviation**

The standard deviation of the total utility for each voter. This is motivated by the desire for each voter to have a similar total utility. This could be thought of as Equity. Lower values are better.

S[winner_list].sum(axis=1).std()

**Score Deviation**

The standard deviation of all the scores given to all winners. This is a measure of the polarization of the winner in aggregate. It is not known what a good value is for this but it can be useful for comparisons between systems.

S.values.flatten().std()

**Favored Winner Deviation**

The standard deviation of each voter's highest-scored winner. It is somewhat of a check on what happens if Cauchy's functional equation does not really hold, i.e. if the highest-scored winner is a better estimate of a voter's true happiness than the total score across winners. Lower values are better.

S.max(axis=1).std()

**Number of Duplicates**

The code currently allows clones to be re-elected. Ideally this would not happen if there are enough candidates. This gives a measure of the ability to find minority representatives. Lower is better.

len(winner_list) -len(set(winner_list))

**Average Winner Polarization**

The standard deviation of each winner's scores across all voters, averaged across all winners. The polarization of a winner can be thought of as how much the scores for them vary across voters.

S.std(axis=0).mean()

**Least Polarized Winner**

The lowest standard deviation among the winners across voters, i.e. the winner with the lowest standard deviation/polarization.

S.std(axis=0).min()

*First Results*

At the time of the first results I had not yet implemented Allocation, and I had not yet found my bug of using a Hare quota of voters rather than ballots, so the voter method was used. Also, Number of Duplicates had a minor bug, so I'll cut it from the plots below.

Systems are defined in terms of their Selection, their reweight and if the KP transform was applied.

For this round the simulated systems were:

- utilitarian_unitary (ie SSS)
- hare_voters_unitary
- utilitarian_jefferson (ie RRV)
- utilitarian_jefferson_kp
- hare_voters_jefferson

The results are in the following histograms. You are going to have to zoom in.

Utilitarian Jefferson with and without the KP transform does not seem to differ much except in a few metrics. Adding the KP transform seems a little better on Totally Unsatisfied Voters and Total Favoured Winner Utility. Since the Favoured Winner Deviation is highest for this system, it seems this extra utility comes at the cost of minorities' utility, which I would think is bad. Looking at the Utility Deviation, there is a much more pronounced bump in the tail than in the other two RRV systems, which is likely a sign of a pathology. All things considered, this is no better than Utilitarian Jefferson (i.e. RRV), so it should be eliminated.

This leaves 4 remaining systems: the combinations of the 2 reweighting methods and the 2 selection methods. The trade-off between Monroe and Utilitarian selection seems to be the same for both reweightings. The Utilitarian selection does better on Utility, as expected, but the Hare Voters selection seems to be better on Total Unsatisfied Utility, Favoured Winner Utility and Wasted Voters. This can also be seen in the deviation plots, where the deviation is higher for Utilitarian selection than for Hare Voters. In short, Utilitarian selection is more utilitarian and Hare Voters is more equitable. No surprise there at all.

A similar effect is seen between the Jefferson and Unitary reweightings for both selections: Jefferson reweighting gets more total utility than Unitary reweighting but is less equitable.

This means that if all you care about is Utility, the best method is Utilitarian Jefferson (RRV), since it gets you the maximum utility without violating common ideas of proportional representation. If all you care about is equitable results, Favoured Winner Utility, Unsatisfied Utility and Wasted Voters, the best method is Hare Voters Unitary. These metrics are closer to what people want when they ask for PR, so it is not clear that raw Utility is the better criterion.

*Second results*

After a lot of feedback I added Allocation and Hare Ballots. There was a pretty decent case that using a quota-based selection with Unitary reweighting was nonsense even though it produced great results, which convinced me to leave out Hare Ballots selection with Unitary reweighting. Instead I only simulated the 4 methods which people were advocating for.

These are:

- hare_ballots_allocate (Sequential Monroe from @parker_friedland)
- utilitarian_allocate (Allocated Score from @Jameson-Quinn)
- utilitarian_jefferson (RRV from Warren Smith)
- utilitarian_unitary (SSS from myself)

It's worth noting that the simulations I ran were exactly the same as in the prior round, so the results for the same system are identical. You can see this for SSS and RRV.

My biggest surprise with this was that the Hare Ballot selection did not seem to make a difference for the Allocation.

In terms of Utility, RRV is best and Allocation is the worst, with SSS in the middle. However, Unsatisfied Utility is really a better metric and was the whole point of inventing Unitary reweighting. Interestingly, Allocation does just as well there.

There is a weird bump in Fully Satisfied Voters for Allocation. This gives me a lot of pause because of how the simulation is done: we do not know what the real world looks like, and this is a simulation of all the possibilities. The cases where nobody was fully satisfied might be common or uncommon in reality; we just don't know. What we do know is that Allocation cannot handle them and the other systems can.

For the Utility and Score Deviations, RRV (utilitarian_jefferson) does terribly, meaning that it did not find a fair outcome. Allocation has two peaks, one on either side of Unitary (SSS). It could be better or worse, but in some cases it seems not to find a natural solution.

I could make a lot more comments about the other plots, but none of them are crucial to the understanding.

Thanks for reading this far.