The tranche method does have the advantage of being precinct-summable. It made me wonder: Are there any precinct-summable quantities in statistics that could improve this method?
Thinking of the following:
- Standard Deviation
- Median and quartiles (or perhaps the W-tiles or (W+1)-tiles for W winners; though note exact quantiles can't be recovered from per-precinct quantiles, so precincts would need to report fuller tallies for these)
- Sum of… squares?
- …and, of course, the average or sum!
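As a sanity check on the "precinct-summable" idea: mean and standard deviation need only n, Σx, Σx², and Pearson correlation needs only n, Σx, Σy, Σx², Σy², Σxy, and all of those totals simply add across precincts. Here is a minimal sketch (the ballots and the two-precinct split are made up for illustration):

```python
# Pearson correlation from six precinct-summable totals:
# n, Σx, Σy, Σx², Σy², Σxy — these add across precincts.
from math import sqrt

def precinct_tallies(ballots):
    """Reduce one precinct's (x, y) approval pairs to six summable totals."""
    n = sx = sy = sxx = syy = sxy = 0
    for x, y in ballots:
        n += 1
        sx += x
        sy += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    return (n, sx, sy, sxx, syy, sxy)

def combine(t1, t2):
    """Central count: just add the per-precinct totals componentwise."""
    return tuple(a + b for a, b in zip(t1, t2))

def corr_from_tallies(t):
    n, sx, sy, sxx, syy, sxy = t
    cov = sxy - sx * sy / n
    return cov / sqrt((sxx - sx * sx / n) * (syy - sy * sy / n))

# Two precincts, each reporting only six numbers per candidate pair:
p1 = precinct_tallies([(1, 1), (1, 0), (0, 0)])
p2 = precinct_tallies([(1, 1), (0, 1), (0, 0)])
print(corr_from_tallies(combine(p1, p2)))  # same as tallying all ballots centrally
```

So correlation (and hence standard deviation) qualifies, which is what the example below relies on.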
Here is one example of what I am thinking of. My vision is for a Score system but let’s use Approval to simplify things.
Suppose there are four candidates, A, A', B, and C (A' is a clone of A), and 2 winners.
1 voter: A
1000 voters: A,A’
1000 voters: A,A’,B
1000 voters: B,C
1000 voters: C
We compute the Pearson correlation between each pair of candidates' 0/1 approval indicators across all ballots. (Actually, since the first winner is A, we only need to care about the correlations between A and the other three…)
correlation(A,A’) ≈ 1
correlation(A,B) ≈ 0
correlation(A,C) ≈ -1
Therefore, A’ is punished for being correlated with the first winner A. Now, the issue is what formula to use to punish. My first idea was to scale X’s score by 1/(1 + Σ corr(X,W)), summed over all winners W, with the factor capped at 1 so that it can only decrease, never increase, X’s score (otherwise C would get infinite votes for its -1 correlation!)
This has the disadvantage of resulting in a (near?) B-C tie in my example. Perhaps 1/(1 + Σ (corr(X,W) + 1)/2)? Then C wins the second seat with 2000 votes, B gets about 1333, and A’ gets about 1000. That is the same result as harmonic Approval (except B gets 167 fewer votes, 1333 instead of 1500), but maybe if we have more complicated ballot types, it might diverge and be more interesting.
This, however, jumps right back into the whole thing about punishing people for voting for consensus candidates… maybe with more tweaking something could work?
(Note: We only deweight the candidate scores, not the individual ballots!)