With Droop proportionality? It’s sort of an acknowledgment that if every Droop quota bullet-votes for a unique candidate, any voting method that treats all voters as equal has to give each Droop quota representation (though I’m not sure this holds for cardinal PR methods, since people only ever say they pass a Hare PR criterion, never a Droop one).
Now, interestingly enough, if we use Toby’s idea,
Then I’m starting to think stable sets with Droop quotas look like a Smith-efficient Condorcet method, perhaps even in the PR case.
If it does, I’d suggest going a step further and using the votes of neutral voters to bolster the more socially beneficial set in the pairwise matchup. If I’m understanding correctly, that solves the issue with
since with the 99 neutral voters, you can use their votes to go towards the obviously better outcome.
Actually, I suppose such a modification could be used in Condorcet methods done on rated ballots to bolster utilitarian winners.
I think using the Droop quota makes more sense for this method. Let’s say you’ve got 100 voters in two opposing factions - A and B, with 2 to elect. You could have these ballots:
AB would be stable because you need 100% support for A to get the final Hare quota. So for any proportions other than 100% for one of the factions, AB would be a stable set.
AA would be stable here, despite AB being clearly the better result.
I think using Hare quotas gives too many sets stability. You end up with long plateaus of stability. Using Droop would cause it to instantly flip from one result to another. E.g.
AA is the only stable result.
AB is the only stable result.
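The Hare-versus-Droop stability comparison above can be sketched in code. This is a hedged sketch, assuming approval ballots and the blocking definition used in this thread (a set T blocks a winner set W if at least |T| quotas of voters each approve more of T’s members than of W’s); all function names and the example ballots are illustrative, not a definitive implementation.

```python
# Blocking test for the stability definition discussed in this thread.
# Ballots are frozensets of approved candidates. Only the quota
# function changes between the Hare and Droop variants.

def hare_quota(n_voters, n_seats):
    return n_voters / n_seats

def droop_quota(n_voters, n_seats):
    return n_voters / (n_seats + 1)

def blocks(T, W, ballots, n_seats, quota_fn):
    """Does candidate set T block winner set W?"""
    quota = quota_fn(len(ballots), n_seats)
    # voters who strictly prefer T: they approve more of T than of W
    preferrers = sum(1 for b in ballots if len(b & T) > len(b & W))
    return preferrers >= len(T) * quota

def is_stable(W, all_sets, ballots, n_seats, quota_fn):
    return not any(blocks(T, W, ballots, n_seats, quota_fn)
                   for T in all_sets if T != W)
```

With hypothetical ballots of 67 voters approving {A1, A2} and 33 approving {B1}, two to elect: {A1, A2} blocks {A1, B1} under the Droop quota (67 ≥ 2 × 33.33) but not under the Hare quota (67 < 2 × 50), matching the flip described above.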
I would also definitely only consider ballots that express a preference between the different outcomes.
should be treated the same as:
But in this latter case A no longer has two full Droop quotas. Similarly for voters who approve all or none of the candidates. It’s quite a simple tweak to make it pass independence of irrelevant ballots (I presume - it certainly assists it) and I don’t see any downsides.
The good news from a utilitarian perspective is that for
you still get BC, since 6.666 or more voters always benefit from that set over any other in terms of total utility of the winner set.
I think it’s worth emphasizing that the Condorcet principle at its most abstract is simply “a group of things that are better than all other things one-on-one are better than all other things”. The part of that statement that is often contested is whether some variation of “preferred by more voters” or “gives higher social utility” is the better way to define “better”. But it’s certainly interesting to build connections between all of these ideas.
This is an example where stable sets with Hare or Droop quotas appear not to give Party A any representation: when we add the 10 Vermin votes in, the Hagenbach-Bischoff quota becomes 18.333, and there isn’t a quota of votes in favor of any of A’s candidates even when they do vote management.
It may be worth mentioning that for this example, Algorithmic Asset would always elect 3 B candidates and 2 A candidates, since that is the maximally beneficial vote-management equilibrium the voters can reach during the “negotiations”. So maybe Algorithmic Asset, or Schulze STV with a voter’s preference between sets defined by their total utility from each winner set, can provide some help with formalizing stable sets. An alternative idea is simply to compute the quota solely from those voters who have preferences between the candidates (or only the relevant candidates?) in the sets being compared. Then, when comparing sets with varying proportions of A and B candidates, the quota lowers enough to give A’s candidates representation.
This is definitely what I would do. Going back to one of my examples:
When considering AA versus AB, a Droop quota can be considered to be a third of the voters excluding the C faction.
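This tweak, restricting the quota to ballots that actually distinguish the two sets being compared, could be sketched as follows. This is a rough sketch under the assumptions of this thread (approval ballots, Droop quota); the function name and example faction sizes are illustrative.

```python
# Droop quota computed only over "relevant" ballots: those that
# approve a different number of candidates in the two compared sets,
# so indifferent factions (like the C faction above) don't inflate it.

def relevant_droop_quota(ballots, set_a, set_b, n_seats):
    """ballots: list of frozensets of approved candidates."""
    relevant = [b for b in ballots
                if len(b & set_a) != len(b & set_b)]
    return len(relevant) / (n_seats + 1)
```

For instance, with 40 voters approving {A1, A2}, 35 approving {B1}, and 25 approving only {C1}, comparing AA versus AB with two to elect gives a quota of 75 / 3 = 25: a third of the voters excluding the C faction.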
I would also use the KP-transformation by the way. Take this example with scores out of 10 and two to elect:
67 voters: A=1, B=0
33 voters: A=0, B=10
If we’re using score voting rather than ranked voting, it makes sense to look at more than just the number of voters who prefer one set to another, and look at how much they prefer it. In this case, the transformed ballots would be:
6.7 voters: A
33 voters: B
And I’d be happy for B to win both seats. This also provides a more continuous transition from ballots being relevant to irrelevant in the irrelevant ballots scenario. Rather than going from fully counting tiny differences in preference to suddenly ignoring the ballot, it’s a gradual process and more satisfactory all round.
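The KP transformation used above can be sketched briefly. This is a minimal sketch, assuming score ballots out of some maximum score; the ballot format and function name are illustrative. Each score ballot is split into one approval “slice” per score level, so a score of s contributes s/max_score of an approval for that candidate.

```python
from collections import defaultdict

def kp_transform(score_ballots, max_score=10):
    """score_ballots: list of (weight, {candidate: score}) pairs.
    Returns total fractional approval weight per approval set."""
    approvals = defaultdict(float)
    for weight, scores in score_ballots:
        # one layer per threshold t = 1..max_score: approve every
        # candidate scored at least t, each layer worth 1/max_score
        for t in range(1, max_score + 1):
            layer = frozenset(c for c, s in scores.items() if s >= t)
            approvals[layer] += weight / max_score
    return dict(approvals)
```

On the example above (67 voters at A=1, B=0; 33 voters at A=0, B=10) this yields 6.7 A-only ballots, 33 B-only ballots, and 60.3 empty ballots; whether you count the slices fractionally like this or as 67/330/603 whole ballots at a tenth of the weight is the bookkeeping question raised later in the thread.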
I’d suggest that if it were instead a 1-winner election:
50 A8 B0
49 A0 B10
There isn’t as strong a reason to give B that seat. So it’s worth considering some sort of “if the winner set the cardinal PR method suggests is best is only 20% better than the ranked PR method’s suggestion, go with the ranked result.” Maybe more concretely, do the KP transform but bias the transformation to some degree towards maximizing a voter’s power.
I’d also like to point out that if a voter must pick between a winner set containing their favorite, worth 10 points to them, and a winner set giving them two candidates worth 2 points each plus one worth 7, going with the latter set is not as straightforward as “maximize utility” would suggest. So perhaps we ought to have some kind of factor such that a voter gets a bit of extra utility simply for getting higher-preferred candidates, rather than a simple addition of utilities.
I believe some sort of Nash equilibrium for Approval and Score actually yields the Smith set in general. If you have multiple candidates in the Smith set and at least one outside it, the coalitions will strategically reach a point where they’re trying to min-max among the Smith set candidates, and in the process they make it impossible for the candidates beaten by Smith set members to get more points. Perhaps there must be a condition on how many moves can be played once a cycle in winning outcomes is reached, so that the players can figure out when to min-max in a way that at least gets them their preference from within the Smith set.
Wouldn’t the KP transform actually give 67 A, 603 empty ballots, and 330 B ballots? I’m guessing it makes no difference which way you do it, and I personally prefer the “treat scores as fractional approvals” approach as well, but just wondering.
I believe this is no longer Smith-efficient even in the single-winner case with the KP transform, since with an election
51 A5 B4
49 A0 B5
You get from KP this many ballots:
I do not favor the Droop stability definition over the Hare stability definition. As I said above, the point of this is to come up with a group of stable winner sets. This group is the set of reasonable options that can be considered PR. From there, the best stable winner set can be chosen via a different criterion. The Droop definition is so much more restrictive that it eliminates single-winner Score. It is nice that it eliminates some “bad sets” in other situations, but that is not the goal. Are there any situations where the Hare-stable winner set with the highest utility is not the best solution? If not, I do not see any reason to take issue with this being a good replacement for PR.
An alternative is to take the Hare stable winner set which, when the most-supporting Hare quota score is computed for each of its winners and all of the winners’ quota scores added up per set, has the highest total. This is equivalent in the single-winner case to Score.
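The metric described above could be sketched like this. This is a hedged sketch, assuming score ballots; the function name is illustrative, and note it is non-allocative: the same voter can count toward every winner’s quota.

```python
# For each winner, sum the scores of the most-supporting Hare quota
# of voters for that winner; the set's total is the sum over winners.
# A fractional share of the ballot on the quota boundary is included.

def quota_score_total(winner_set, ballots, n_seats):
    """ballots: list of {candidate: score} dicts."""
    quota = len(ballots) / n_seats  # Hare quota
    total = 0.0
    for w in winner_set:
        scores = sorted((b.get(w, 0) for b in ballots), reverse=True)
        full = int(quota)
        total += sum(scores[:full])
        if full < len(scores):
            total += (quota - full) * scores[full]
    return total
```

In the single-winner case the Hare quota is the whole electorate, so this just sums every voter’s score for the candidate, i.e. plain Score voting, as stated above.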
This would be something like a Monroe quality metric. We are trying to put together a list of such quality metrics on the Wolf committee thread. The issue with the one you define is that it is not allocative like Monroe intended, which means the same voter can be in all quotas. This is not really bad for an evaluative metric. Maybe propose it over there. Please write it out as a formula like I have been doing, to be very clear about the definition. I have been using MS Word formulas and screenshotting them.
B is still the score winner though, and if the method reduces to score voting in the single-winner case, I wouldn’t be too displeased with it.
And I’m probably more inclined to go with the more “mathematically pure” method rather than add conditionals in, at least to begin with. I’d rather have the basic method shown to work first and then once it’s set in stone, add bits to it.
For example, score voting versus STAR for single winners. Whichever one thinks is better in practice, I think it’s better to make sure score voting exists as a fully-thought-out concept first before introducing “extras” like the STAR run-off. Obviously score voting itself is very simple so it doesn’t take much coming up with, but I’m sure you see my point.
I ignored the empty ballots because they wouldn’t make a difference in the counting process, but I suppose for completion they should be listed. But as for whether it’s 6.7 for A or 67, I suppose it depends on how you want to define it, but I don’t see it as that important really!
Right, but I think if it’s purely Smith efficiency we’re after, scores might become irrelevant, and only ranks important. Looking at my previous example:
The “best” result of CD would be stable under the definition in this thread, but I don’t think it would be any sort of Nash equilibrium as one set of voters could just “defect”, and then they both would and you’d ultimately end up with AB.
The way I’m seeing it at the moment, I quite like this method with the initial stability definition + Droop + KP.
I was thinking more of the single-winner case. Clearly in the multi-winner case we need a metric which takes into account some proportionality measure. I think that the winner set being stable is a fair requirement for a system. In the multi-member case the system is defined by some metric or procedure, and the procedure/method is not good if it does not produce a stable winner set.
It seems to me Sequentially Spent Score could be used somehow as a metric to compare winner sets, e.g. the first winner set satisfies 90% of voters fully (they derive a max-score amount of utility or more from the winner set), while the second winner set satisfies only 80% of voters fully but gives 20% of voters 3/5 of their maximum. That is one natural way to find a winner set which best balances equity of utility and utility.
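That kind of satisfaction summary could be sketched as below. This is loosely inspired by the Sequentially Spent Score idea, not SSS itself; ballot format, function names, and the capping rule are all assumptions for illustration.

```python
# Summarize a winner set by the fraction of each voter's maximum
# possible utility it delivers, capped at 1.0 (a voter who already
# derives a max-score amount of utility counts as fully satisfied).

def satisfaction_profile(winner_set, ballots, max_score=5):
    """ballots: list of {candidate: score} dicts."""
    profile = []
    for b in ballots:
        utility = sum(b.get(w, 0) for w in winner_set)
        profile.append(min(utility / max_score, 1.0))
    return profile

def fully_satisfied_share(profile):
    return sum(1 for u in profile if u >= 1.0) / len(profile)
```

Two winner sets could then be compared on, say, (share fully satisfied, satisfaction of the remaining voters), which is one way to make the 90%-versus-80%-plus-3/5 comparison above concrete.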
Some possible ideas for speeding up computation of the core (and how finding the core might help speed up computation elsewhere):
Would computing the core and then running an optimal cardinal PR method on that be significantly faster than having to compute the optimal method on all possible winner sets?
With the “generalized core” definition (smallest set of winner sets unblocked by any other sets; or possibly also that block any sets that block them?) there will always be a core, if that helps.
The same algorithms used to compute the Smith or Schwartz Set:
Finding the core may be analogous to finding a Condorcet winner (or weak Condorcet winners?), so adapting the fastest generalized way of identifying a CW to the core, I think one possible algorithm would be: check whether random pairs of sets block each other, eliminating any set that is blocked in a pairwise matchup and which doesn’t block its opposing set. When no further eliminations can be done, check whether the remaining sets withstand all the others.
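The elimination idea could be sketched like this. This is a rough sketch assuming a black-box `blocks(X, Y)` predicate between winner sets (its definition being whichever stability notion is in use); the function name and structure are illustrative, and no performance claim is made.

```python
import random

def find_core_candidates(sets, blocks):
    """Randomly pair winner sets, eliminate any set that is blocked
    by its partner without blocking back, repeat until no progress,
    then verify survivors against every original set."""
    remaining = list(sets)
    progress = True
    while progress and len(remaining) > 1:
        progress = False
        random.shuffle(remaining)
        # slicing copies, so removing from `remaining` is safe here
        for X, Y in zip(remaining[::2], remaining[1::2]):
            if blocks(X, Y) and not blocks(Y, X):
                remaining.remove(Y)
                progress = True
            elif blocks(Y, X) and not blocks(X, Y):
                remaining.remove(X)
                progress = True
    # final check: keep only survivors not blocked by any original set
    return [W for W in remaining
            if not any(blocks(T, W) for T in sets if T != W)]
```

As with the analogous Condorcet-winner shortcut, the random pairing only prunes the candidate pool quickly; the final pass is what certifies the survivors.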
A post considering the application of these ideas in Condorcet methods:
Interestingly, the concept of a “locally stable” committee (which passes Droop PR?) has been researched for Condorcet PR methods: https://arxiv.org/abs/1701.08023
Electing U1-3, A1-3 would mean that the 2 A voters would have 6 candidates elected and the 1 B voter would have 3. This is what Thiele would prefer.
Electing U1-3, A1-2, B1 would mean that aside from the universally liked candidates, the 2 A voters would have 2 candidates and the 1 B voter would have 1. This is what Phragmen-based methods would prefer.
But with this stability method, I think both sets would be stable. The B voters have three candidates anyway, so I think they’d need four quotas of voters to find a blocking set to U1-3, A1-3, which they don’t have. Similarly, the A voters have five candidates elected under the other result, so they’d need six quotas of voters to overturn it, which they don’t have. That’s my understanding anyway.
I never really agreed that this was a desirable property. The “uncoloured” candidates are not really uncoloured, as if they had no position in the ideological space; they are more like multicoloured or mixed-coloured. They will be closer ideologically to some of the colour clusters than others, and when elected, their closeness needs to be compensated for in the PR calculation. Say you have RGB colours, and the first elected candidate is uncoloured but very close to R and very far from B. I would not want to treat R, G and B on an equal footing after that point. I think all candidates always have to be treated as uncoloured; using colours in this way is a crutch.
Yes there can be multiple stable winner sets and both of those are stable.
I don’t see it in terms of colours anyway. But I think that it makes sense for there to be proportional representation among the rest of the candidates. There’s a quote from me on rangevoting.org:
Three people share a house and two prefer apples and one prefers oranges. One of the apple-preferrers does the shopping and buys three pieces of fruit. But instead of buying two apples and an orange, he buys three apples. Why? Because they all have tap water available to them already and he took this into account in the proportional calculations. And his reasoning was that the larger faction (of two) should have twice as much as the smaller faction (of one) when everything is taken into account, not just the variables. Taken to its logical conclusion, Thiele-thinking would always award the largest faction everything because there is so much that we all share – air, water, public areas, etc!
That example is for the special case where they all equally like a candidate, and in that case he uses water as the justification. What if it was milk and the guy who likes oranges is lactose intolerant? He would not be served at all by the milk, so there would be no utility gain. It does not matter that the milk is not fruit; what matters is the amount of utility gained from each winner/item. This is why the concept of “uncoloured” in strong PR is meaningless while the universally liked candidate is meaningful: they are very different. I would view all candidates as uncoloured and think universally liked candidates are very rare. Justified Representation is the way to view PR on an individual-candidate level, and requiring a stable winner set is just a stronger form of Justified Representation.