Score Voting ideal quorum


#21

Isn’t “You must receive a minimum of N votes or you are disqualified” an easier-to-understand “quorum” rule to deal with “no opinion”/write-ins? Does it work out to be mathematically the same thing?

(So someone who gets a single write-in with a score of 5 doesn’t qualify to beat someone who has 1000 votes and an average score of 4.8.)

(WDS seems to speak of everything in mathematical terms which don’t carry over well into real-world reforms: “Each vote is a real C-vector, each entry of which is nonnegative and with all the entries summing to 1”, etc.)


#22

A “hard quorum” rule may lead to tactical voting where you say “no opinion” about the person you hate so they fall below the quorum. And besides, it feels a lot more arbitrary.

(Adding 1000 fake zero votes would give the first guy approximately 0.005 but the other guy 2.4, and if you only used 100 fake votes, since that election is small, the other guy would get approximately 4.36 points.)
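
To make that arithmetic concrete, here is a quick check in Python — a minimal sketch, assuming a 0–5 scale and the vote counts from the example above:

```python
def soft_quorum_avg(total_score, num_votes, fake_zeros):
    """Average score after padding the ballot count with fake zero-votes."""
    return total_score / (num_votes + fake_zeros)

print(soft_quorum_avg(5, 1, 1000))              # one write-in of 5: ~0.005
print(soft_quorum_avg(4.8 * 1000, 1000, 1000))  # 1000 votes at 4.8 avg: 2.4
print(soft_quorum_avg(4.8 * 1000, 1000, 100))   # with only 100 fakes: ~4.36
```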

(Each vote is a list of real scores assigned to each candidate where the scores add up to at most 1.00, and when it comes to writing laws, complicated language is the norm.)


#23

where you say “no opinion” to the person you hate so they go below the quorum

Wouldn’t it be better to explicitly vote against them?

it feels a lot more arbitrary. … (Adding 1000 fake zero votes

Adding lots of fake zeros to every candidate feels a lot more arbitrary and unusual to me. :man_shrugging:


#24

This is where participation failure comes in.

Suppose a candidate has a very high average, but too few votes and so would be disqualified.

If someone then gives them a low score, their average might not change much, but they could suddenly qualify and win.
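
A minimal sketch of that failure, assuming 0–5 scores and a hypothetical hard quorum of 10 scored ballots (the threshold and the tallies are made up for illustration):

```python
QUORUM = 10  # hypothetical "hard quorum" threshold

def winner(candidates):
    """candidates: dict of name -> list of scores ("no opinion" ballots omitted)."""
    qualified = {c: s for c, s in candidates.items() if len(s) >= QUORUM}
    return max(qualified, key=lambda c: sum(qualified[c]) / len(qualified[c]))

election = {"A": [5] * 9,    # average 5.0, but only 9 scored ballots
            "B": [4] * 50}   # average 4.0, comfortably qualified
print(winner(election))      # B: A is disqualified

election["A"].append(1)      # one more voter gives A a LOW score...
print(winner(election))      # ...and now A qualifies and wins (4.6 > 4.0)
```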


#25

So then, is there any incentive to vote against people, or does it just become a popularity contest? I guess there’s a tipping point where one is more advantageous than the other?


#26

It’s actually much worse than that. If a voter changing their vote from “no opinion” to a minimum score causes the candidate to have enough “opinion” scores to pass the hard quorum and thus win, then what you have is not just a failure of participation, but a failure of even monotonicity.

Monotonicity is the requirement that giving a candidate more support should never hurt them, and giving a candidate less support should never help them. Granted, it’s hard to measure how much support a “no opinion” score is worth, which makes the definition of monotonicity fuzzy when applied to voting methods with such scores, but a “no opinion” score is at the very least better than or equal to a min score, so swapping a min score for a “no opinion” score should obviously not hurt a candidate. With that quorum, however, swapping a min score for a “no opinion” score can hurt a candidate.
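
The same toy setup as the sketch under #24 shows the monotonicity failure directly: raising one ballot from a min score to “no opinion” (which should never hurt) flips A from winner to disqualified:

```python
QUORUM = 10  # same hypothetical threshold as the sketch under #24

def winner(candidates):
    qualified = {c: s for c, s in candidates.items() if len(s) >= QUORUM}
    return max(qualified, key=lambda c: sum(qualified[c]) / len(qualified[c]))

before = {"A": [5] * 9 + [0], "B": [4] * 50}  # A: 4.5 avg on 10 ballots -> wins
after  = {"A": [5] * 9,       "B": [4] * 50}  # that 0 becomes "no opinion"
print(winner(before), winner(after))          # A, then B
```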


#27

Yeah. If both candidates already qualify, then it’s just like normal.

Of course, the soft quorum also has participation failure. In fact, I think all true quorums fail participation (unless you count treating a blank/abstention as some default score).


#28

With simple score voting you can give between one vote and ten votes to as many candidates as you want. There is no averaging, no fake or actual zero votes, and no “no-opinion” votes. All the votes are simply added up and the candidate with the most votes wins. That’s why it’s called simple.
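
As a sketch, the whole tally fits in a few lines (the candidate names and ballots here are made up):

```python
from collections import Counter

def simple_score_winner(ballots):
    """ballots: list of dicts mapping candidate -> 1..10 votes."""
    totals = Counter()
    for ballot in ballots:
        totals.update(ballot)  # just add everything up
    return totals.most_common(1)[0][0]

ballots = [{"X": 10, "Y": 3}, {"Y": 8}, {"X": 6}]
print(simple_score_winner(ballots))  # X: 16 votes to Y's 11
```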


#29

Someone on the old forum had these five properties (I lost the link):

  • NO_DARK_HORSE
  • NO_MAGIC_NUMBERS
  • PARTICIPATION
  • EXTREME_PARTICIPATION (deleting an A=max, B=min ballot cannot alter the winner from B to A)
  • DEFER_TO_OTHERS (there is a way to give a “no opinion” vote without it being interpreted simply as a fixed regular score like 0)

Simple score voting fails DEFER_TO_OTHERS, and the problem is that a voter may be uninformed about a candidate and wish to have no impact on that candidate’s score. Adding “no opinion” plus 1000 fake min-votes fails PARTICIPATION and NO_MAGIC_NUMBERS.

(These names are too long and unwieldy. I have to wonder why they were styled LIKE_THIS and not Like This or LT.)

The half cutoff rule fails PAR, EXP, and NMN. (Half is still a magic number, as some may argue that it should be 1/3 or 3/5 or something else.)
In that regard, it seems like the soft quorum is already better, so we can safely disregard the cutoff rule.


#30

I happen to have it; it was in the same thread where Eric Sander’s quorum was discussed. The properties were used by Andrew Jennings. Eric’s method fails PAR only.


#31

NoIRV wrote (December 13):

Someone had the five properties NO_DARK_HORSE, NO_MAGIC_NUMBERS, PARTICIPATION, EXTREME_PARTICIPATION (deleting an A=max B=min cannot alter the winner from B to A), and DEFER_TO_OTHERS (there is a way to give a “no opinion” vote without it being interpreted simply as a fixed regular score like 0) on the old forum. I lost the link.

Thanks, Skyval, for the link to the discussion. Here is a link to the actual post:

https://groups.google.com/d/msg/electionscience/SpLDc1Z0hzE/v7dSAizLV78J

Simple score voting fails DEFER_TO_OTHERS, and the problem is, a voter may be uninformed about a candidate and wish to not have any impact on that candidate’s scores. Adding no opinions and 1000 fake min-votes fails PARTICIPATION and NO_MAGIC_NUMBERS.

(These names are too long and unwieldy. I have to wonder why they were styled LIKE_THIS and not Like This or LT.)

Well, you remembered them, didn’t you? :slight_smile:


#32

This “DEFER_TO_OTHERS” criterion, whereby voters are de facto “empowered” to bestow votes on candidates whom they know nothing about, must be the greatest invention yet in the development of complicated score voting.


#33

Ordering candidates by
SUM/sqrt(NUM_SCORES)
provides a compromise between sums and averaging.

I’m not necessarily advocating this as a good rule, particularly not for the ballot initiative, for which “don’t get greedy” is probably the best approach.
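
For concreteness, a minimal sketch of that ordering rule (the example ballots are made up):

```python
from math import sqrt

def compromise_score(scores):
    """Rank candidates by sum / sqrt(number of scores)."""
    return sum(scores) / sqrt(len(scores))

print(compromise_score([5] * 20))   # 5.0 average on 20 ballots: ~22.4
print(compromise_score([4] * 100))  # 4.0 average on 100 ballots: 40.0
```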


#34

I think I’m liking this one. To use Andrew’s criteria, I think it fails Participation, and, well, Magic Numbers too (why a square root?). But I like its simplicity.

Some sanity tests against 100%-known candidates (checked in the sketch after this list):

  • A 5/5 candidate needs to be known by more than 36% of voters to defeat a 3/5 candidate known by 100%.
  • A 10/10 candidate known by 90% can still defeat a 8.5/10 candidate known by 100%.
  • For a 5/5 candidate known by just 10% to defeat a competitor known by 100%, the competitor’s score must be less than ~1.6/5.
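
A quick check of these figures: with N voters, a fraction p of whom score a candidate at average a, the rule gives a·p·N/√(p·N) = a·√(p·N), so N cancels out of every comparison and only a·√p matters:

```python
from math import sqrt

def rule(avg, known):          # known = fraction of voters who score them
    return avg * sqrt(known)   # a * sqrt(p); the common sqrt(N) is dropped

print(rule(5, 0.36), rule(3, 1.0))     # ~3.0 vs 3.0: 36% is the break-even
print(rule(10, 0.9) > rule(8.5, 1.0))  # True: ~9.49 vs 8.5
print(rule(5, 0.10), rule(1.6, 1.0))   # ~1.58 vs 1.6
```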

#35

Well, the formula isn’t entirely arbitrary. The effect is to score candidates by the geometric mean of their sum and average score, since sqrt(S × S/n) = S/sqrt(n). Thus, if summing and averaging agree on the order of two candidates, then this rule maintains that order (actually, that should probably be its own criterion, SUM_AVG_AGREEMENT). When summing and averaging disagree, the ordering of the method with the more lopsided ratio wins out.

As averages are bounded but sums are not, there’s more room for a lopsided ratio of sums than a lopsided ratio of averages, so sum is probably going to overrule average more often than the reverse, making this quorum rule rather aggressive. However, this may not be a bad thing: whether or not the average of a large number of scores (say, a million) coming from an even larger population (say, a hundred million) is trustworthy, anyone who wins like that will have issues appearing legitimate.

One thing to very much dislike about this rule is that while average scores and sums of scores have plain meanings, there’s no obvious way to express this rule without math. Even an “automatic zeroes” rule can at least be described as a pessimistic estimate of what a candidate’s average would be if they were more widely known.

It does indeed fail participation, although the margins have to be very thin.
For example, in a 0–5 score election: if Candidate A has a sum of 115 on 30 ballots (average 3.833, and thus a score of 20.996), and Candidate B has a sum of 210 on 99 ballots (average 2.121, and thus a score of 21.106), then if the next voter gives B a score of 5, A needs to be scored at least 4.707 to overtake B.
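
Those figures check out:

```python
from math import sqrt

a_sum, a_n = 115, 30
b_sum, b_n = 210, 99
print(a_sum / sqrt(a_n))   # ~20.996
print(b_sum / sqrt(b_n))   # ~21.106

# Next voter scores B a 5; what must they give A for A to overtake B?
b_new = (b_sum + 5) / sqrt(b_n + 1)     # 21.5
needed = b_new * sqrt(a_n + 1) - a_sum  # ~4.707
print(needed)
```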


#36

How is that weird? You changed Bob’s support such that no fewer than 1/3 of his voters thought he was the best possible, and the rest scored him no lower than 2.5 on average.

Unless you meant that Bob only had 20 voters, in which case everyone who expressed an opinion on him did so at maximum support.

On the other hand, no less than 63% must have scored Alice at less than Bob’s new average. Or, put another way, Alice has more votes below Bob’s adjusted average than Bob has votes total.

I suppose it feels weird, but I suspect that if we were looking at collections of ballots, rather than averages thereof, it would be clearer why I think that’s perfectly reasonable.

[Edit: Also, if we’re adding votes, rather than changing them, then the denominator would increase to a minimum of 55, and the results would be Bob 1.(81) and Alice 2.0]


#37

You add the same amount of support to both candidates and it causes them to change order.

http://scorevoting.net/UtilFoundns.html

But as has been pointed out, that happens with average-based methods regardless. The point I was getting at was that using a variable denominator (50% of the electorate or the actual number of votes, whichever is greater) creates a discontinuity: once you get a certain number of votes, each additional vote helps you less. That is certainly wrong in a social-choice-theory sense. It could perhaps have political benefits that make up for that.
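
A sketch of that variable-denominator rule (the electorate size and tallies are made up), showing the kink at the 50% mark:

```python
def min_denom_avg(total_score, num_scores, electorate):
    """Divide the sum by max(actual ballots, half the electorate)."""
    return total_score / max(num_scores, electorate / 2)

# Electorate of 100; a candidate averaging 5.0 as their ballot count grows.
# Below 50 ballots each extra max-score vote raises the result by 5/50;
# above 50, the denominator grows too and the gain per vote shrinks.
for n in (30, 50, 70):
    print(n, min_denom_avg(5 * n, n, 100))  # 3.0, 5.0, 5.0
```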


#38

I am not so certain of that, actually.

Yes, there is a discontinuity in the effective output, but consider where that discontinuity occurs: at the point where the expressed opinions constitute the opinion of the majority.

I mean, sure, I would prefer a statistically valid formula, one that basically gave the lower bound of the 95% confidence interval created by the “sample” of voters scoring them (using “registered voters” as the “total population” number) or some such, but good luck explaining that to legislators and/or voters, let alone why it’s valid.
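
For what it’s worth, a hedged sketch of that kind of rule, using a plain normal-approximation lower bound rather than anything more careful (a real proposal might prefer something like a Wilson-style interval):

```python
from math import sqrt
from statistics import mean, stdev

def lower_bound(scores, z=1.96):
    """Approximate lower end of the 95% confidence interval for the mean score."""
    return mean(scores) - z * stdev(scores) / sqrt(len(scores))

print(lower_bound([5] * 9 + [3]))        # small sample: ~4.41, a wide penalty
print(lower_bound([5] * 90 + [3] * 10))  # same 4.8 mean, tighter bound: ~4.68
```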

No, the reason I like the “Minimum Denominator” method of smoothing is that it speaks to the same thing that IRV’s meaningless “final support” numbers do: the general belief in the will of the Majority, and it does so simply and directly.

Because the discontinuity is between “a majority expressed an opinion” and “less than a majority expressed an opinion,” with the latter penalized (sometimes quite severely), people who don’t understand statistics (or Laplace smoothing, or square roots, or any of the other offered solutions) are still likely to accept the idea that such a rule ensures the will of the majority.


#39

Yeah, “majority” is politically practical. No disagreement there.