Do more proportional PR methods have bad worst-case scenarios?

The main method to look at would be Sequential Monroe, since that’s the most proportional semi-viable PR method possible, but other more proportional methods that focus on quota preference over consensus are useful to discuss too. With utilitarian PR schemes, it’s possible for a minority to win a majority of seats, and for one side to lose out on seats because it is more compromising or honest; some might consider those significant downsides. But with the more proportional (cardinal) schemes, what are the big downsides?

Free riding is one downside. You also need to specify tiebreakers so that, between two candidates rated equally highly by a quota, the candidate more strongly preferred by the ballots outside the quota wins (that is, Monroe won’t pass Pareto without tiebreakers).

Isn’t every PR method subject to this problem? The only exception I can think of might be Thiele, but that’s only because Thiele is so generous to voters who have already elected their favorite, even if that favorite barely won. Also, would “highest score wins, allocate a Hare quota of ballots based on who gave the highest scores, use fractional surplus handling” have a significant problem with free riding? (It could be a very viable allocated PR method, or a good template for one.)
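
A rough sketch of that template, under my own assumptions about the unstated details (score ballots, Hare quota = ballots/seats, spending applied score level by score level with fractional spending at the boundary level); the function and variable names are mine, not a spec:

```python
def allocated_score(ballots, candidates, seats):
    """Each round: highest total (weighted) score wins; then a Hare quota of
    ballot weight is spent, drawn from the ballots that scored the winner highest."""
    weights = [1.0] * len(ballots)          # remaining weight of each ballot
    quota = len(ballots) / seats            # Hare quota
    winners = []
    for _ in range(seats):
        totals = {c: sum(w * b.get(c, 0) for b, w in zip(ballots, weights))
                  for c in candidates if c not in winners}
        winner = max(totals, key=totals.get)
        winners.append(winner)
        remaining = quota
        for level in sorted({b.get(winner, 0) for b in ballots}, reverse=True):
            if level <= 0 or remaining <= 0:
                break                        # only spend ballots that supported the winner
            group = [i for i, b in enumerate(ballots)
                     if b.get(winner, 0) == level and weights[i] > 0]
            group_weight = sum(weights[i] for i in group)
            if group_weight <= remaining:    # spend the whole score level
                for i in group:
                    weights[i] = 0.0
                remaining -= group_weight
            else:                            # fractional surplus handling: spend the
                frac = remaining / group_weight   # same fraction of every ballot
                for i in group:                   # at this score level
                    weights[i] *= (1 - frac)
                remaining = 0.0
    return winners

ballots = [{"A": 5, "B": 2}, {"A": 5}, {"B": 5, "C": 4}, {"C": 5}]
print(allocated_score(ballots, ["A", "B", "C"], 2))   # ['A', 'C']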

How much harm does it cause to fail Pareto on average? Are there ways to pass Pareto more often without getting too complex, or are there relatively simple tiebreakers that pass Pareto every time?

Yes, but some are more so than others. I think that for any method that passes the “Limited Resistance to Hylland Free Riding” criterion, large-scale vote management will be too difficult to be worth attempting. With other methods, it is hard to say.

It’s not that difficult to pass Pareto. An example of a tiebreaker for Sequential Monroe: take, for each tied candidate, the ballot outside the quota that scores that candidate the highest; if one candidate is scored higher on their ballot than the other, elect that candidate; otherwise repeat with the next-highest ballot that hasn’t been used for the quota or the tiebreaker, until there is a difference.
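
A minimal sketch of that tiebreaker, under one reading of the description (function and variable names are mine, not a standard spec); `outside_scores[c]` is the list of scores that ballots *outside* the quota gave to candidate c:

```python
def pareto_tiebreak(tied_candidates, outside_scores):
    """Compare the tied candidates' highest outside-the-quota ballots, then the
    next highest, and so on, until one candidate is scored higher."""
    ranked = {c: sorted(outside_scores[c], reverse=True) for c in tied_candidates}
    max_depth = max(len(v) for v in ranked.values())
    for depth in range(max_depth):
        scores = {c: (ranked[c][depth] if depth < len(ranked[c]) else 0)
                  for c in tied_candidates}
        best = max(scores.values())
        leaders = [c for c, s in scores.items() if s == best]
        if len(leaders) == 1:
            return leaders[0]       # a difference was found; elect this candidate
        tied_candidates = leaders   # keep comparing only the still-tied candidates
    return tied_candidates[0]       # no difference anywhere; fall back arbitrarily

print(pareto_tiebreak(["X", "Y"], {"X": [5, 3, 1], "Y": [5, 4, 0]}))   # "Y"
```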


Your link does not point to any clear discussion of those issues, but they are issues indeed!

By far the worst is what I would call a “flip,” in which the side with more vote support turns out to be the minority in the body versus another side that got less vote support. It happens a lot in many systems, and once we have a clear definition of proportionality I’d call that an egregious violation: a “proportional” method that does this does not deserve the name.

Some small jitter in the exact number of seats is a much lesser problem, unless it can be shown to have a tendency, as has been shown to me regarding Hamilton’s method tending to give more seats to a faction that splits itself. That is a tendency, and it presumably drives candidates and electorates toward more splitting whether or not they consciously notice what is going on; it can be a quasi-Darwinian thing. A constellation whose adherents value unity find themselves doing more poorly than polls or raw election data would suggest they ought, and perhaps inadvertently discover that when they splinter they do better as a group. Even this is at worst an annoyance, and it seems downright bizarre for people who frown on permitting any sort of partisan alliance to be recognized by the system whatsoever to protest; the logical conclusion would after all be every candidate running with no formal ties to any other! This minor “fissiparous” jitter applies to any subtractive approach to processing cardinal ballots of course, since that mathematically corresponds to the Greatest Remainders method after all.
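
To make the splitting tendency concrete, here is a minimal sketch of Hamilton’s (largest-remainder) apportionment with purely hypothetical numbers. In this toy 3-seat case, a 45% faction that splits into two equal lists gains a seat, and with it the body majority, over a united 55% faction:

```python
from math import floor

def hamilton(votes, seats):
    """Largest-remainder (Hamilton) apportionment with the Hare quota."""
    quota = sum(votes.values()) / seats
    alloc = {p: floor(v / quota) for p, v in votes.items()}          # whole quotas
    remainders = {p: v / quota - alloc[p] for p, v in votes.items()}
    for p in sorted(remainders, key=remainders.get, reverse=True):   # hand out the
        if sum(alloc.values()) == seats:                             # leftover seats
            break
        alloc[p] += 1
    return alloc

print(hamilton({"A": 55, "B": 45}, 3))             # {'A': 2, 'B': 1}
print(hamilton({"A": 55, "B1": 23, "B2": 22}, 3))  # {'A': 1, 'B1': 1, 'B2': 1}
# United, the 45% faction wins 1 of 3 seats; split in half, its two remainders
# each beat A's remainder and it takes 2 of 3, flipping the majority.
```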

  1. The first general thing that comes to mind is to point out that any system that separates the voters into separate districts, with no cross-correlation between them in any form, must pose the risk of both these outcomes; the smaller the pieces the electorate is chopped into, with fewer representatives chosen per piece, the more severe we can expect this effect to be. To name an ordinal example, I just looked at the US House in 2016 and 2018, applying ranked-choice Single Transferable Vote to a scheme whereby the existing single Congressional Districts currently apportioned to each state are simply mandated to elect two Representatives, for a House of 870 members. Presuming that each party that ran historically would run two candidates, that all voters would rank their same historic choice first and the other candidate of their party second, and given that I have district-by-district data on the races, it was pretty easy to apply STV quota rules (a sketch of the quota check appears after the North Carolina example below) to show that most of the House would be elected by quota: in 2016, just 32 districts would remain where two candidates would not be elected by quota and terminate the process before any second choices are even examined.

In 2018 that would be 21 (across the board, 2018 was more tightly polarized on the duopoly race than 2016 or typical years in general). I need not even point out that all 838 seats won that way in '16, and 849 in '18, would be duopoly wins! Clearly these would be uninfluenced by third-party votes in any form. At that point, in a massive reversal of historical outcomes, Democrats would already enjoy a huge structural advantage: in '16 these indisputable wins would have the Democrats at 433 seats to the Republicans' 405, with 436 needed to control the House. In 2018, before resolving any of the 21 districts subject to settlement of the last seat by elimination (note that in both races at least one seat is won by quota in every district; it is only the second seat that needs settlement), the Democrats would already have 495 seats, a massive surplus over 436. That is not quite as extreme as it sounds, since their share of 870 by raw vote ratio would be 465, and bearing in mind the method effectively filters out third parties, a two-party split with the Republicans would give them 473. Historically the Dems hold 235 of 435 seats, a hair under the real-House equivalent of that 473 share and two hairs over their properly proportional share of 233. But do note that with STV in two-member districts, the Democrats enjoy a structural advantage beyond even 473, equivalent to 10 seats in the real House, and that is before we even figure out whether they could win any of the outstanding 21! Still, in functional terms it makes little difference; either way Democrats control the House, and by an honest majority in both popular vote and seats won.

But in 2016 the outcome of using 2-member STV would be far more perverse. As noted, with the 838 seats firmly settled by quota wins alone, the Democrats are already just 3 seats shy of a majority in the body, whereas the Republicans trail 31 seats short. But wait: more voters voted Republican than Democratic in 2016! It comes as little surprise to CES aficionados that an RCV method is screwed up… but don’t be so hasty to cast that stone; a little scrutiny and analysis shows clearly that what is fishy here is the partitioning of the electorate into separate districts. Take a hard look at North Carolina, for instance: that state, which has 13 Congressional Districts, returned 10 Republicans and 3 Democrats in real life, but when we look at the state partisan balance overall (the only third-party vote being a small number of Libertarians in one district) we find the real proportion should have been 7 R, 6 D! It is now infamous, and proven in public-record court evidence, that this is a deliberate partisan gerrymander. Well, what happens if we apply STV to these 13 districts, for 26 members? The outcome is… 11 Republicans, 15 Democrats! Clearly it should have been 14 to 12 if this method were worthy of the name “proportional.” The Democrats gain 3 seats beyond the correction of gerrymandered distortion. Why?

Because it is gerrymandered, of course; that’s why. The gerrymander strategy that worked well for Republicans when facing FPTP boomerangs straight into their faces, delivering numerous districts (the same districts into which the Republicans packed as many Democrats as they could, to “waste” their votes above 50 percent) as 2-Democrat partisan monopolies, while the larger number of Republican-dominated districts are dominated by smaller margins and so mostly wind up split evenly between the two parties.
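
For concreteness, here is a minimal sketch of the two-seat quota check the analysis above relies on, under the stated assumption that each party runs two candidates and its voters rank them first and second. The vote totals below are hypothetical, not the actual district data:

```python
from math import floor

def two_seat_quota_wins(party_votes, seats=2):
    """Which candidates win outright by Droop quota, before any eliminations."""
    total = sum(party_votes.values())
    droop = floor(total / (seats + 1)) + 1
    winners = []
    for party, votes in party_votes.items():
        if votes >= droop:                   # party's first candidate hits a quota
            winners.append((party, 1))
            if votes - droop >= droop:       # surplus alone reaches a second quota
                winners.append((party, 2))
    return droop, winners

quota, wins = two_seat_quota_wins({"R": 160_000, "D": 140_000, "L": 10_000})
print(quota, wins)   # 103334 [('R', 1), ('D', 1)] -> both seats settled by quota,
                     # before any second choices or eliminations are examined
```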

Any system that chooses to divide the electorate up into lots of little bits is subject to gerrymander. But I must stress that such structural imbalances arise naturally as well as by malicious intent. And there might be legitimate reasons to structure districts with concerns other than obsessive impartiality and rigid attempts to get as close as possible to some target average population; yet if we don’t prioritize absolute equality of districts with scrupulous party-blind procedures (as though we could really achieve that!), clearly a districting scheme would be unfair insofar as we relax those constraints. And even the moment we draw a fair, perfectly impartial district map, ongoing life deranges it: people reach maturity and register to vote, people die, people move in and out of the districts. It is a moving target at best.

Thus the more we rely on fine-grained small districts, the more chaotic, at best, the relationship between how people voted and what outcomes they get. We can cure this disease with systemwide integration, of course; with a suitable method to restore balance, we can relax about how we draw districts and not worry about how changing demographics between redistrictings change things.

  2. Assuming for a moment it is generally clear what “proportional” means, the mathematical method we use to translate the spectrum of votes into integer shares has great bearing on the outcomes, and as it happens, exactly in the manner you seem to acknowledge is unfortunate. That is, it “causes minorities to win majorities and one side to lose out on seats.” Now to be sure, any method of settling on a fixed, finite set of representatives must be somewhat guilty of this: by cutting off the smallest vote-earners, it in effect cuts off the voter sovereignty of a fraction of the electorate, usually a far larger fraction than a single quota, which is left out in the cold unrepresented and of course loses out. In certain cases this could tip a plurality-leading party that fell short of a full 50-percent-plus-one majority into a full majority in the body. Or, more realistically, when we consider that any really proportional method will encourage voters to desert big-tent parties and venture to support smaller ones they really have greater confidence in, which could not win under the old winner-take-all rules, coalitions can form that do not embody over half the ballots cast but do command a body majority.

But this seems to be an inevitable fact of republicanism, and even direct democracy would still leave minorities losing out on policy, whereas there is a real advantage to voters entrusting their revocable proxy to champions who can focus their attention full time on negotiating the interests of their constituencies: who can learn expertise in the art of parliamentary dealing, command staffs of experts who can examine obscure questions in exhaustive detail, and have intimate knowledge of the workings of the law so as to craft legislation that works seamlessly toward the goal rather than making rookie mistakes in such work.

But the question of degree is clearly not irrelevant after all. I’ve shown how applying the most extremely concentrating method of PR devised, Jefferson’s, can conjure up a false majority where in fact the party so empowered falls short, by a really significant number of quotas, of having that commanding a degree of electorate confidence, while at the same time the method is pretty much scorched earth on smaller candidacies. Thus the vital role the marginal parties could perform is short-circuited as the magnified bigger party seizes control unilaterally: the role of confronting a unified party with the strongest claim on leadership, and offering to catalyze its taking that role, but only on condition that it take serious heed of their smaller viewpoint; this being the catalyst of inter-party cooperation and collegial integration of the body.
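
For readers unfamiliar with Jefferson’s (D’Hondt) method, here is a minimal sketch with purely hypothetical numbers (not the figures from the analysis referred to above), showing how a party well short of half the vote can take a body majority when the rest of the vote is fragmented among small lists:

```python
def dhondt(votes, seats):
    """Jefferson/D'Hondt: each seat goes to the highest quotient votes/(won + 1)."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

fragmented = {"A": 35.0, **{f"minor{i}": 6.5 for i in range(1, 11)}}
print(dhondt(fragmented, 9))
# A takes 5 of 9 seats (a body majority) on 35% of the vote, more than a full
# quota short of a popular majority, while most of the minor lists get nothing.
```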

  3. Most fundamentally, though, we must, as I have been saying since joining here, take heed of what it is we even mean by claiming to have a “proportional” system anyway. If multichoice cardinal voting is supposed to embody some kind of incentive for coalition building (which I think would, in the absence of a proportionality guarantee, amount to hegemony of so-called “moderates,” with exclusion of the more “extreme” even when they have quota claims), then it follows that the resulting body will not be proportional in the sense of expressing each voter’s true “utility” preference, pretty much by definition.

It is not even clear to me such a system can be reconciled with “one person one vote.” Isn’t saying we want to encourage and reward voters for spreading extra ratings around, beyond the single candidate they like the most, tantamount to saying “we reward the generous cardinal voter with extra power, and punish the bullet voter by diminishing the outcomes they so very clearly favor below their share”?

Which is why I find it pretty alarming to hear criticisms claiming that the multichoice voter is unduly penalized when I take guaranteeing one person, one vote in the outcome as a key priority. To say it should be otherwise is frankly a declaration that proportionality is not a consideration to be valued.

It seems clear that advocates of cardinal voting are trying to avoid perfect proportionality, as this leads to majority rule; the goal is to try to increase representation for consensus candidates so that the legislative majority rules for the minority’s interests as well. I think an important consideration is how to allow the voters to choose whether they themselves value proportionality or consensus, and accommodate various preferences, but ultimately the system has to prioritize one over the other, and punish/ignore voters who look for the other.

One person, one vote can be (legally) reconciled with FPTP currently, and that’s the worst system for either proportionality or quality of representatives. So clearly, pretty much anything that is going to increase either proportionality or quality will still fall within that framework. It is worth considering whether overrepresenting consensus candidates is fair, and whether proportionality is the true meaning of one person, one vote, but I’d say that as long as a voting method doesn’t force a voter to support consensus candidates by more than one quota (that is, the voter knows they can get 2 representatives, but is unsure whether they can win the 3rd seat, and so is forced to compromise in order to guarantee some kind of representation for that seat), it should be constitutional and fair (for the most part).

So first of all, be clear about this: if that is true, either in the sense of “this is the CES philosophy” or “this is what serious electoral science proves, not as an optional preference but by solid logic, namely that ‘perfect proportionality is majority rule and majority rule means total disregard of minority interests,’” then neither CES, nor you, nor anyone belonging here thinks proportionality is any sort of value to be followed whatsoever; or at best, a muddled, imprecise form of it, probably arising automatically by haphazard influences, is quite good enough for what positive value it has. There should then be no pretense that any cardinal method would be improved by special rules or procedures to shore up the “proportionality” of a cardinal approach.

Now consider the broad claim that seems to be in play: that certain types of candidate should be favored and others discouraged, and with them some categories of the electorate should expect to be filtered out of direct representation, by depreciating their chances of electing someone of their preference and making the occasional reelection of such persons more chancy, versus raising the chances of the “good” kind of candidate representing the “correct” mentality of voter blocs (who are purportedly more worthy because the special mentality to be cultivated and rewarded is more empathetic, generous, and inclined toward community-focused fairness and balance; I at least hope that is what is to be cultivated here) and stabilizing their tenure in office. If that claim is actually being made, I certainly think it is within my rights to ask you, as someone essentially saying this, to please point to the science that is supposed to ground this philosophy in solid argument.

Because you see, I keep hearing platitudes, but the site claims to be a Center for Electoral Science. If I come in asking “how do you get proportionality using cardinal methods?” and am first told “oh, it can be done!” but it turns out that actually it would be bad to seek that, and you yourself seem not to be too rigorous in your own logic, then I want to see the references that elucidate the counterintuitive notion that one actually does not want an electoral system in which people, however wrongheaded one may subjectively judge them to be, can get “fair” representation in John Adams’s quite straightforward sense of the term (the commonwealth pretty exactly mirrored in the governing bodies), and that explain how it is fairer if it is filtered and weighed in the proper manner.

This political theory needs to be drawn out into plain daylight to begin to be as approximately and primitively “scientific” as the perhaps primitive but quite seriously thought out work of such people as the Framer generation, the consensus practices of the past couple centuries, or even such figures as Aristotle, Thomas Aquinas, or Machiavelli.

Why? Why not double down on the value you think is most beneficial and filter out the one you think is deplorable? It could be that if you were to refer me to the “science” that presumably grounds such statements, it would become clearer there that the things you describe as opposed, antagonistic binaries might have more subtle forms of expression, showing that some of each is actually necessary for the real goal at hand. You keep assuming it should be plain that proportionality hurts consensus, and that consensus must somehow be incompatible with a proportional basis; that is what I keep getting from your responses, and that opposition needs to be explained better, because it is not at all obvious that it is true.

In fact it does seem you yourself are not comfortable with such simple sweeping claims as the opposition of “proportionality OR consensus,” or you would not be on this endless quest for the perfect mechanism; you could just focus on a cardinal approach that disregards proportionality as something you look for at all.

If it is true that perfect proportionality simply amounts to majority rule and should therefore be avoided, then you are not really following that logic, and this could be because you recognize a flaw in it. Best to tease out the precise nature of your own reservations and contemplate them in broad daylight, and let that be your guide to what you would choose as your concrete goal.

Well, we can agree we all hate FPTP, but that’s some pretty dubious strawman logic there! It is like saying Hitler was a vegetarian, therefore all vegetarians are like Hitler. (Not a bad metaphor, actually, as I gather Hitler was a rather haphazard vegetarian who used his dietary choices as a manipulative stance, much as the US Civil Rights era hit upon one person, one vote as its silver bullet in the context of FPTP, when a moment’s logic should have shown that FPTP itself compromises the obvious moral basis the claim of one person, one vote would rest on, save as an arbitrary game rule.)

One person one vote is a value if we grant the basic premise of human equality, of equal justice before the law. It is a value if we believe in democracy. Now you might reasonably say, “Well, if a cardinal system empowers persons who choose to exercise its power by marking more diverse choices over someone who picks one candidate to back with all their might, that course is open to any voter equally; all may play the same game, and therefore it is in fact equal for all voters.” Which is the nature of my response to someone who objects to recognizing alliances between candidates and using that recognition to empower voters who wish to make sure their vote is not wasted but conserved: you can form and back a party too, you know. There is no contradiction between persons who recognize or work to build an alliance representing their values and a person who insists that such alliances are flawed; the system ultimately elects individuals to the body. It is not as though a vote for a party sends in robot clones in lockstep from workshops where they are made to order on Magrathea to crowd out real human beings.

But this is not what you are saying, although possibly it might be what you really meant to say.

Ideally, a multiwinner system should allow consensus or proportionality without the risk of vote-splitting. But designing a system for consensus generally makes it harder for voters to get their proportionality guarantees without strategy and luck, while designing a proportional system (like yours) means voters can split and waste their votes when trying to compromise. So a balance needs to be struck, and I think that balance should be in favor of consensus. It’s pretty clear that if you have a legislative majority of moderates, that is more consensus-biased than one of mostly liberals or conservatives with a few moderates thrown in. The logic or “science” here is simple: the moderate majority will generally produce different legislation than the liberal + moderate majority, even if the median legislator is the same between them. That’s not to say people won’t have influence under a consensus-biased system, just that it might be diluted. Obviously, voters will reject a system with too much dilution, so the consensus bias can’t go too far. In a 5-seat district, it should be relatively simple for voters to get something mirroring their preference if they’re urgent about it.

For tiebreaking I think you should just choose the candidate with the highest average among all the non-exhausted ballots (it would be a weighted average if fractional surplus handling is used).

There is another approach of just expanding the quota size until one of the candidates wins, though one of the reasons for only looking at the ballots in the quota is that those ballots are going to be gone after that round, so you should try to maximize their preferences while their ballots still exist. When you start looking at ballots outside of the quota, this is no longer the case, which is why I prefer just electing the candidate with the highest average (weighted average if fractional allocation is used) score among the candidates tied in support within the Hare quota.
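
A minimal sketch of that tiebreak (the names are mine): among the tied candidates, elect the one with the highest weighted-average score across all non-exhausted ballots.

```python
def average_tiebreak(tied_candidates, ballots, weights):
    """Elect the tied candidate with the highest weighted-average score
    over all non-exhausted ballots (weights = remaining ballot fractions)."""
    total_weight = sum(weights)
    def weighted_avg(c):
        return sum(w * b.get(c, 0) for b, w in zip(ballots, weights)) / total_weight
    return max(tied_candidates, key=weighted_avg)

ballots = [{"X": 5, "Y": 5}, {"X": 4, "Y": 2}, {"X": 0, "Y": 3}]
weights = [1.0, 0.5, 1.0]   # fractions left over after earlier surplus handling
print(average_tiebreak(["X", "Y"], ballots, weights))   # "Y"
```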

This kind of reasoning for using Monroe selection in SMV is also why I don’t like lazily mixing the Monroe selection with other methods: the reasoning for using such a discrete cut-off point for which ballots count and which don’t when selecting a winner no longer exists, and that cut-off point and its discreteness seem very arbitrary when used with other methods.

When you talk about “consensus PR,” you have to be clear about in what context there is consensus. For example, you give Monroe and Sequential Monroe as examples of “non-consensus” PR. But there’s strong consensus for that candidate within that candidate’s assigned quota.


Yeah, I agree. I think that terms like consensus PR and utilitarian selection (@Keith_Edmonds) are very misleading, because what we are talking about is only how individual winners in the earlier rounds are chosen, not how utilitarian/consensus-oriented each election result as a whole is. In fact, ‘utilitarian’ selection and ‘consensus’ PR can result in more extremists getting elected as a result of choosing consensus candidates in the initial rounds, and all the methods that we are considering are more utilitarian than STV anyway.

I somewhat agree, especially considering that PR itself is often called a “consensus-based system” (one needs a majority coalition rather than a majority of local majorities/pluralities). But I think it’s fairly obvious that a system that still highly prioritizes quota satisfaction over overall utility can’t be called “consensus PR,” especially when there are systems (like RRV) that go so much further towards a consensus bias for all.

Keith’s simulations seemed to show that the systems we’d nominally consider more “consensus-biased” vs. less (i.e. Utilitarian Selection vs. Monroe for Sequentially Spent Score) tend to offer more utility in the winner set as a whole. I’d be interested in an additional metric just looking at the majority of each winner set’s utility; it’s possible that a winner set with a majority of somewhat-extremists and a minority of extremely utilitarian representatives offers more overall utility than a minority of extreme-extremists and a majority of somewhat consensus-biased representatives, but the latter offers better utility when judging legislative action.

Actually, two metrics would be useful: legislative majority utility, and median legislator utility. It’s possible for the legislative majority to be composed overwhelmingly of utilitarian winners except for one extremist, and that extremist could force a disproportionate amount of utility loss.
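
A sketch of how those two metrics might be computed, under one possible reading (this is my interpretation, not the metric definitions from Keith’s simulations). Here `utilities[v][c]` is voter v’s utility for candidate c, and `winners` is the elected set:

```python
import statistics

def legislative_majority_utility(utilities, winners):
    """Total voter utility supplied by the bare-majority subset of winners that
    contributes the most utility (a stand-in for the governing coalition)."""
    totals = sorted((sum(u[c] for u in utilities) for c in winners), reverse=True)
    majority_size = len(winners) // 2 + 1
    return sum(totals[:majority_size])

def median_legislator_utility(utilities, winners):
    """Voter utility of the 'middle' winner when winners are ranked by total utility."""
    totals = [sum(u[c] for u in utilities) for c in winners]
    return statistics.median(totals)
```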

For some methods. This might be different for other methods.

Also, it is often the case that the more utilitarian a voting method is, the less proportional it is, and the more proportional it is, the less utilitarian it is. If you want a voting method that sits more toward the utilitarian end of the utilitarian-proportional spectrum, then for voting methods with a proportional/utilitarian tuning parameter, a much better way to get more utilitarian results would be to slightly nudge that tuning parameter in the more utilitarian direction than to use a greedier approach to winner selection. Methods that use such a parameter have a Goldilocks zone for what that parameter can be while still being proportional, and the methods that use the most utilitarian end of that zone are the kind of methods that I would actually consider the most consensus/utilitarian versions of PR.

Failing to approximate the quality function of the optimal method that a sequential method is designed to approximate is more of a bug than a feature, even if it often results in a more utilitarian set of candidates being elected. If you prefer that set of candidates, you should instead base your sequential method on a quality function that actually prefers that set of candidates to the more proportional set.

The methods with tuning parameters are all unviable though, right? Also, do you know of any methods (viable or not) that you’d say allow an electorate to decide how proportional or utilitarian they want the result to be through their ballots? Sequentially Spent Score seems the simplest such method, though it still is consensus-biased or preferential-biased based on the selection mode. I think a very utilitarian PR method is ideal, but a lot of real-world voters are probably more comfortable with majority rule, so the best method would be one that allows for either consensus or proportionality, possibly being capped on how utilitarian the results can theoretically get.

This is true but we need to consider how this averages out over many multi-member districts. You want the final parliament to represent the ideological distribution of the population. Maybe you need more fluctuation in each multi-member riding to get that.

I know what you mean here, and it might be true in spirit, but I do not think the word “proportional” really has any meaning in this context. What we should really be talking about is polarization or consensus bias. In the single-winner case, IRV is polarizing and Approval is consensus-biased. In the multiwinner case, I think STV is polarizing and RRV is consensus-biased. I would think the goal would be no bias. This would be the same as a fair/ideal representation of the population, which is what you would get from a sortition. Trying to find a way to measure this would be a huge step. Finding a system that satisfies it would be the holy grail. Keep in mind that it should average out to this; it need not be unbiased in each district. Parliaments are made of many districts and will have many elections. Bias is a problem because it influences the party positions relative to the population.

This sounds very much in line with something like Sequential Monroe: represent the consensus in each quota of voters but also try to find the most “representable” quotas of voters to begin with. That way, you get the most reasonably good approximation of the ideological distribution as possible. This means a majority of voters will have a majority of representation on as many issues as possible, which is pretty much the best you can ask for from a sortition of the voters.

I think a “tuning parameter” to consider with quota-based PR methods is the quota itself. If it is lowered from the outset, then theoretically more utilitarian candidates win (because the compromising voters retain more of their points), and if it is increased, results become more factional (possibly even disproportional).

I think it’s worth pointing out: one good use for a consensus-biased method would be just to see what compromises are possible. So for example, a great proportional method might be one where the results are consensus-biased under honest voting, but then voters can strategize a bit and get proportional results. This way, everyone is clear about what the true consensus is, and can choose to work with or against that. One of the biggest problems in current politics is that politicians can obscure what sorts of compromises are possible, popular, etc. Having the data from a cardinal method would clear things up tremendously. That alone is enough to make cardinal better as the input method, though that doesn’t stop you from converting it into cumulative results (though that might not be as instructive as to who or what is the best compromise.)

How do we determine what point along the deep support (“polarization”) vs broad support ("[general] consensus") continuum is no bias? Certainly we can make relative comparisons, between different methods, but I’m not sure how you justify an absolute standard.

I have no idea. This is essentially the crux of the problem. It is clear in a statistical sense. We want an even down sampling like you would get from a sortition but there are two issues.

  1. We do not know a measure of this. This property does not really even have a name within election science. I have been calling it Ideal Representation for lack of a better term. It is not Proportional Representation.
  2. We do not have a general theory of what sort of methods would lead to better or worse performance. I have heard that STV is polarization-biased because IRV is, but I have not really seen a proof of this. I think RRV is consensus-biased since it does not down-weight very fast and uses a global utilitarian selection.

Just for clarity about what I mean by polarization- vs. consensus-biased: if the distribution of the population is Gaussian, then a method is polarization-biased when a candidate/party is more viable the further it sits from the center, and consensus-biased when it is more viable the closer it sits to the center, relative to what an even down-sampling of the population would produce.
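
One illustrative way to quantify that bias, assuming a one-dimensional Gaussian spectrum of voter positions: compare how far the elected winners sit from the center against an equal-sized random sortition of voters. The statistic and all names here are my own assumptions, not an established measure.

```python
import random, statistics

def bias_vs_sortition(voter_positions, winner_positions, trials=1000):
    """Positive result: winners sit further from the center than a sortition
    would (polarization bias); negative: closer to the center (consensus bias)."""
    winner_spread = statistics.mean(abs(x) for x in winner_positions)
    sortition_spreads = []
    for _ in range(trials):
        sample = random.sample(voter_positions, len(winner_positions))
        sortition_spreads.append(statistics.mean(abs(x) for x in sample))
    return winner_spread - statistics.mean(sortition_spreads)

voters = [random.gauss(0, 1) for _ in range(10_000)]
winners = [random.gauss(0, 1.5) for _ in range(5)]   # hypothetical polarized winners
print(bias_vs_sortition(voters, winners))            # > 0 here: polarization bias
```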

Visually you can look here https://www.people-press.org/2014/06/12/section-1-growing-ideological-consistency/#interactive

Clicking back and forth between “general population” and “politically active” you can see that the politically active people are more polarized than the general population.

If you then look at the distribution of house candidates who won their primaries there is no representation in the middle even though the bulk of the people are in the middle.

This means that the political system in the United States is polarizing. This is bad because it means less Ideal Representation; in this case, lower representation for the moderates. In theory a consensus-biased system is also bad, but arguably not equally bad. There are a few reasons for this polarization, but plurality voting is likely one.

I think that this is largely a game theory effect so it grows in strength over a few elections. This makes the problem harder.

This polarization largely seems to be the result of FPTP and primaries. Any method that encourages competition between clones would fix that. And to counter the game-theoretical aspect of voters polarizing, you’d want a system that is somewhat consensus-biased, but which could be proportional under more strategic voting. I think something as simple as “score selection + allocate a Hare quota” would fit those purposes. Using Sequentially Spent Score with Monroe selection is close too. But using score selection (I’ll call it that instead of utilitarian selection) doesn’t really give you Ideal Representation so much as overrepresentation of the center. Even if you have a polarized Hare quota of voters, if they’re unsure whether they constitute a full quota, then they have to choose a representative unlike them. So clearly Utilitarian Unitary is a more consensus-biased system, because the voters whose representation most matters for Ideal Representation are the ones who will be less likely to get it.

I do not know what you mean by this.

I do not see how you can know this