At this link you can find a voting system for single winner; do you know if this system has already been proposed by someone else?

# Is this a new voting system?

If this is a Condorcet method, then it seems like Copeland, but using Score as a tiebreaker.

For now I will call the method "TM", for convenience.

1: A[5] B[2] C[1] D[0]

2: A[4] B[5] C[1] D[0]

Under TM, A wins. A also wins all head-to-head matches, so A is the "Condorcet winner".

But if I convert those votes into rank, I get:

1: A[1st] B[2nd] C[3rd] D[4th]

2: A[2nd] B[1st] C[3rd] D[4th]

and here the Condorcet Winner is B.

If you don't redefine the concept of head-to-head, then TM is not a Condorcet method.

Anyway, thanks for the information; you are very knowledgeable about the various types of methods!

Is there a mistake in the example? The Condorcet pairwise matrix here would be: A and B each beat C and D by 2 votes to 0, and A ties B 1 to 1. So A and B are tied, and both are weak Condorcet winners.

The second line is 2 votes, so the extended example is:

A[1st] B[2nd] C[3rd] D[4th]

A[2nd] B[1st] C[3rd] D[4th]

A[2nd] B[1st] C[3rd] D[4th]

Oh, thanks.

It's interesting how "2" can mean either "2 votes of this type" or "this is the 2nd voter's vote".

It's always best to explain methods in something approximating English. But anyway, it seems that for a pair of candidates A and B, for each voter you divide their score for A by their score for B. Then you multiply all these together; if the product is greater than 1, A wins the head-to-head, and if it's less than 1, B wins.

It looks terrible, to be honest. He's had to come up with these extra caveats for when 0 is involved in the division, but it looks like he's got it wrong anyway, because he's got zero divided by something as MAX and something divided by zero as 1/MAX, unless I've misunderstood something. Also, where he's talked about 9 or 5 options, he's actually got 10 and 6, although he's separated the zero box.

But anyway, I don't see anything good coming from this method.

Edit - Oh, is he you? My post looks a bit rude now. But anyway, I'd need to see some pros of the system.

But aside from anything else, it uses the ratios rather than the difference between scores, which I think is also the case in your Distributed Voting. And as covered in this thread, it doesn't really make sense to do that.

If I evaluate the pair [A,B], then in each vote I compute A / B, and then multiply all these values (fractions) together.

If the product is > 1, A wins; if < 1, B wins; if = 1, they both win.
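The rule above can be sketched in a few lines. This is my reading of the thread, not a reference implementation; `MAX` is the top of the score range, and the zero caveat described later in the thread is applied: a 0 against a nonzero score contributes the fixed ratio 1/MAX (or MAX in the other direction), and 0 against 0 counts as equal.

```python
MAX = 5  # assumed top of the score range

def pair_ratio(a: int, b: int) -> float:
    """Ratio one ballot contributes to the head-to-head A vs. B."""
    if a == b:                # includes 0 vs. 0: treated as equal
        return 1.0
    if a == 0:
        return 1.0 / MAX      # fixed lowest value, never literally 0
    if b == 0:
        return MAX
    return a / b

def head_to_head(ballots, a, b):
    """Multiply per-ballot ratios; >1 means A wins, <1 B wins, =1 a tie."""
    product = 1.0
    for ballot in ballots:
        product *= pair_ratio(ballot[a], ballot[b])
    return product

# The two ballots from the example at the top of the thread:
ballots = [{"A": 5, "B": 2}, {"A": 4, "B": 5}]
print(head_to_head(ballots, "A", "B"))  # (5/2) * (4/5) = 2.0 -> A wins
```

Note that this matches the opening example: A beats B head-to-head under TM even though the ranked ballots would make them pairwise tied.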

If I vote like this:

A [0] B [1] C [2] D [3] E [4] F [5] G [5]

Evaluating the following couples I obtain the following proportions:

[F, G] -> 5/5

[E, G] -> 4/5

[D, G] -> 3/5

[C, G] -> 2/5

[B, G] -> 1/5

[A, G] -> 0/5 -> 0 must be the lowest possible value among all the comparisons (but it cannot be literally 0, otherwise it would make the product across the votes 0), so its value is always fixed at 1/5 (1/MAX).

Difference between 0 and 1:

A [0] B [1] C [2] G [5]

[A, G] -> 1/5

[A, B] -> 1/5

[A, C] -> 1/5

[B, G] -> 1/5

[B, C] -> 1/2

0 always yields 1/5 when compared to another candidate (except when two candidates are both at 0, in which case they are considered equal).

1 yields 1/5 only with respect to the candidates with 5 points, but not with respect to the others (1/2 in the case [B,C]).

Youâ€™re right about the options, they would be 9 + 1 and 5 + 1.

But anyway, I don't see anything good coming from this method.

Show me a case where it causes problems, because I can't find one.

It also meets several criteria. If you then define head-to-head in the way indicated by the method, then it also meets all the majority criteria based on head-to-head comparisons.

But aside from anything else, it uses the ratios rather than the difference between scores, which I think is also the case in your Distributed Voting. And as covered in this thread, it doesn't really make sense to do that.

I read it quickly, but at some point it seemed to me that he was starting to make too many assumptions.

In general, if a candidate is given 0 in DV because the voter disapproves of them, it is very likely that this would also happen under another voting method. As a matter of fact, I still haven't found a case where DV actually fails (excluding those rare cases with loops).

This is one reason I advocate for people using their real names in the forums.

This looks like the Cardinal version of Copeland. Pretty much all versions of ranked voting can be turned into a cardinal system. I propose that, as a rule, if it is roughly the same as a ranked system we just throw a "Cardinal" in front of the name. That would make it much easier for people who find the cardinal system to find the literature on the ranked system.

As a side note, it seems we are quickly gathering cardinal sequential-elimination systems. These all have a full sequential version and a top-two runoff version. The former is non-monotonic (always?) and the latter is monotonic but less accurate. Interestingly, the top-two runoff versions of different systems can coincide. For example, the top-two runoff for both Cardinal Baldwin and Cardinal Copeland is STAR (unless I am confused about something). Perhaps somebody keen can make a page of cardinal sequential-elimination methods on Electowiki. Might be interesting to compare and contrast. The list which I am aware of is as follows, and is largely characterised by normalization.

- @Brian_Olson 's IRNR: Remove the candidate with the lowest score sum, then normalize by dividing each score by the sum of the scores that voter gave to the remaining candidates
- Cardinal Baldwin: Remove the candidate with the lowest score sum, then normalize by applying f(x) to each score, where f(x) = max_score * (x - smallest_score_given) / (highest_score_given - smallest_score_given)
- My Leveling method in STLR: Remove the candidate with the lowest score sum, then normalize by multiplying each score by max_score / highest_score_given
- This one @parker_friedland just mentioned: Normalize by dividing all the voter's scores by the standard deviation of the scores they gave to the remaining candidates, times sqrt(2) (the square-root-of-2 part is just so that in the final round the score they give to their preferred candidate is one more than the score they gave to the other).
- Cardinal Copeland: Remove the candidate with the lowest pairwise wins minus pairwise losses. You need the pairwise losses to help with ties. Further ties can be solved by removing the candidate with the lowest score sum.

I am sure there are more.
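To make the list concrete, here is a minimal sketch of the Cardinal Baldwin normalization step from the list above (the f(x) formula), applied to one ballot after an elimination. The function name is illustrative, not from any reference implementation.

```python
# Cardinal Baldwin per-ballot normalization: the lowest remaining score
# maps to 0 and the highest to max_score, per
# f(x) = max_score * (x - lo) / (hi - lo).
def baldwin_normalize(ballot, max_score=10):
    lo, hi = min(ballot), max(ballot)
    if hi == lo:                 # flat ballot: nothing to stretch
        return list(ballot)
    return [max_score * (x - lo) / (hi - lo) for x in ballot]

# E.g. a ballot [0, 1, 3, 5] whose 0-scored candidate was eliminated:
print(baldwin_normalize([1, 3, 5]))  # [0.0, 5.0, 10.0]
```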

Cardinal Copeland

- is Smith-efficient; TM isn't. Calling TM "Cardinal Copeland" would give false expectations regarding the criteria.
- specifically, the P table (used in TM) isn't to be confused with Copeland's pairwise comparison table; they contain information with different meaning and value (for this reason TM doesn't meet the Copeland criterion).
- if you say that "Cardinal Copeland" is a sequential elimination method, then it's certainly not TM, since in TM eliminating any candidate doesn't change the results.

At this point, you could invent a "Cardinal Copeland" as a sequential elimination method.

Other sequential elimination methods are:

- Cardinal Baldwin with my rearrangement of the IRNR normalization, in which the vote changes only if the maximum value is eliminated; if instead I eliminate all the minimums, it remains unchanged.

[0,1,3,5] -> delete the candidate with 0 and normalize to the range [0,10] -> [2,6,10]

- Cardinal Baldwin using my normalization, which sets only the min value of the vote to MIN and only the max value of the vote to MAX, leaving the others unchanged.

[0,1,2,3,5] -> cancel the 5 -> [0,1,2,5] -> cancel the 0 -> [0,2,5].

Hooray! I accidentally invented a system

I am confused. Cardinal Baldwin does not use the IRNR normalization. Could you write this out more mathematically?

So only the highest and lowest get pushed to MAX and MIN? What if more than one is at the highest/lowest value?

To start with, all the caveats you need when zero is involved in the division mean it loses its "purity" as a method, so it comes across as a bit of a fudge. And also how it looks at ratios of scores rather than the difference between them, which I'll address along with DV:

The long and short of it is that scores are a proxy for the utility someone expects to get from a candidate. And it is generally accepted that it's the differences between utility scores, not the ratios, that are important. See this that @RobBrown linked to previously. That is to say, if you score candidates A, B and C as 1, 2 and 3 respectively, then the difference between A and B is the same as the difference between B and C. Your normalisation approach would consider 1, 2 and 4 to have this equidistant relationship.

As for where DV fails - well it depends on what you mean by fails, but imagine these ballots:

1 voter: A=99, B=1

1 voter: A=0, B=1

There may be other candidates as well that these voters have given scores to, but I'm just considering A and B.

Your normalisation philosophy says that the preference for B over A is stronger than the preference for A over B. This philosophy is wrongheaded, and would clearly give "failure" results. Using utility (and therefore scores) properly, 99 vs. 98 is the same difference as 1 vs. 0.

If I understand this correctly, it's Copeland where the runoffs are done as in STLR, and Score is the tiebreaker. That means the following example:

would be a potentially useful example of a cycle for your method, and one where all candidates would be tied on pairwise wins but not on points.

Also, have you considered doing Smith//Score but using the STLR runoff reweighting?

I don't remember it well now; I did it some time ago in Distributed Score Voting (point 3 of the counting), which I then put aside because it was too complex. It should be like this:

Rating: [3,5,7] -> to the range [0,10]

W = 10 points (if I want to normalize to MAX = 10; 100 points if I want to normalize in DSV to 100).

max = maximum score in the vote = 7.

Vold = old value of a candidate

Vnew = new value of the candidate.

Vnew = (Vold / max) * W

The vote becomes: [4.29, 7.14, 10]
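The formula above (Vnew = (Vold / max) * W) can be sketched in one line. The function name is mine; every score is multiplied by W / max, so only the top score is guaranteed to land on W.

```python
# Rescale a ballot so its highest score maps to W; other scores keep
# their ratios to the maximum (Vnew = Vold / max * W).
def rescale(ballot, W=10):
    m = max(ballot)
    return [round(v / m * W, 2) for v in ballot]

print(rescale([3, 5, 7]))  # [4.29, 7.14, 10.0]
```

With W = 100 this would be the DSV variant mentioned above.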

So only the highest and lowest get pushed to MAX and MIN? What if more than one is at the highest/lowest value?

The precedence is this:

- I set all the values equal to the min (of the vote) to MIN.
- I set all the values equal to the max (of the vote) to MAX.

If the initial values were all the same, I will have all MAX at the end; if it was half and half, I will have only MIN and MAX at the end. The difference from other normalizations can be seen when there are many candidates.
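The two-step precedence above can be sketched as follows (my reading of the description, with an illustrative name): scores equal to the ballot's minimum are pushed to MIN, scores equal to its maximum are pushed to MAX, and everything in between is left unchanged. Because the max step runs second, a flat ballot ends up all at MAX, as stated.

```python
# Push only the ballot's min scores to MIN and its max scores to MAX,
# leaving intermediate scores untouched. Max-pushing runs second, so a
# flat ballot ends up entirely at MAX.
def push_min_max(ballot, MIN=0, MAX=10):
    lo, hi = min(ballot), max(ballot)
    out = [MIN if v == lo else v for v in ballot]
    return [MAX if ballot[i] == hi else v for i, v in enumerate(out)]

print(push_min_max([2, 3, 7]))  # [0, 3, 10]
print(push_min_max([5, 5, 5]))  # [10, 10, 10]  (all equal -> all MAX)
```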

They are not caveats; simply, if the range were e.g. [1,7], then the range of proportions would be [1/7, 7] (i.e., a candidate could be at most 7 times better than another, or 7 times worse).

Handling 0 simply uses those same bounds to manage the range caused by the 0, which is [0, +inf] -> [1/7, 7].

And it is generally accepted that it's the differences between utility scores, not the ratios, that are important.

In SV, if I vote like this: A[10] B[5]... then how many more votes does it take for B to reach A? One more vote with B[5]; so for one A[10] you need two votes of B[5] to make A and B equal.

The sum of the points also hides the concept of proportion.

So that is to say that if you score candidates A, B and C as 1, 2 and 3 respectively, then the difference between A and B is the same as the difference between B and C.

In SV, no, because in SV if I have 1: C[3], then 3: A[1] are needed to make C equal to A.

If what really matters to you is only the difference (distance), then you could create a "Cardinal Copeland" voting system in which the head-to-head only considers the distance between two candidates.

In this case with A vs B, a vote like A[3] B[2] would earn A 1 point (and lose B 1 point); in the end you either immediately find the best, or find the worst to eliminate, normalize, etc... @Keith_Edmonds also made a type of "Cardinal Copeland".
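The distance-based head-to-head suggested above reduces to summing signed score differences per ballot; the sign of the total decides the winner. A minimal sketch, with illustrative names:

```python
# Distance-based head-to-head: each ballot contributes its signed score
# difference for A vs. B; a positive total means A wins, negative B.
def margin(ballots, a, b):
    return sum(ballot[a] - ballot[b] for ballot in ballots)

ballots = [{"A": 3, "B": 2}, {"A": 1, "B": 4}]
print(margin(ballots, "A", "B"))  # (3-2) + (1-4) = -2 -> B wins
```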

Using utility (and therefore scores) properly, 99, 98 is the same difference as 1, 0.

This is what you don't understand. In my normalization, only the scores in [1, MAX] can be compared in a range with sums.

The 0 is equivalent to the absence of an AV cross.

See it like this: DV and TM are as if they were AV, but the candidates to whom you give the X can be evaluated more precisely (with a range).

Take your example:

1 voter: A = 99, B = 1

1 voter: A = 0, B = 1

equivalent to:

1 voter: A = X, B = X

1 voter: A = 0, B = X

The candidates who receive points are as if they had the X of the AV, and those who do not receive the X are the 0.

This is said to the voters so that they take it into consideration: "choose which to approve (as in AV), and then evaluate those precisely (not caring about those without an X, who you know will not be favored in any way, regardless of how you rate your favorites)".

All this makes the voter more honest about their favorite candidates.

P.S.

In your example, B would not have received an X in TM (A wins), and in DV, A would have won because B is the worst.

well it depends on what you mean by fails

Any problem DV has that other cardinal methods don't.

@Keith_Edmonds If you are interested in methods with sequential elimination and normalization, then you will also be interested in the Honesty Criterion.

I invented it for DV, but it should also work on methods like Cardinal Baldwin.

@Keith_Edmonds there is another normalization to consider, which I would call "Min Norm-Max":

- if the MIN (0) in the vote is eliminated, then only the candidates with the min value in the vote are set to MIN (0); this is the Min-Normalization.
- if instead the MAX in the vote is eliminated, normalization is applied to all values, making the final range [0,MAX] (the usual normalization).

Why exactly this form?

- if from a vote like this: [0,2,4,10] I remove the 10, I would like it to become: [0,5,10] (and not [0,2,10]).
- if from a vote like this: [0,8,9,10] I remove the 0, I would like it to become: [0,9,10] (and not [0,5,10]). I think this norm better respects the wishes of the voter.
- a voter is less encouraged to vote like this: [0,0,1,1,10,10,10]; rather, they would vote like this: [0,0,1,1,8,9,10] because of point 2; the 9 can't go down.

Summary: if I assign a score to a candidate, that score can only increase during normalization. It is zeroed (0) only if the candidate becomes the worst of all.

This tactic [0,0,1,1,8,9,10] (bullet voting or min-max) remains a problem, but it is reduced.
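The two cases of "Min Norm-Max" above can be sketched as follows. This is my reading of the rule; the function name is illustrative, and only the two described cases (min eliminated, max eliminated) are handled.

```python
# "Min Norm-Max": if the eliminated candidate held the ballot's minimum,
# only the new minimum is pushed down to 0; if it held the maximum, the
# whole ballot is stretched to the range [0, MAX].
def min_norm_max(ballot, eliminated_score, MAX=10):
    rest = list(ballot)
    rest.remove(eliminated_score)
    lo, hi = min(rest), max(rest)
    if eliminated_score <= lo:                 # min was removed
        return [0 if v == lo else v for v in rest]
    if eliminated_score >= hi and hi != lo:    # max was removed: full rescale
        return [MAX * (v - lo) / (hi - lo) for v in rest]
    return rest

# The two examples given above:
print(min_norm_max([0, 2, 4, 10], 10))  # max removed -> [0.0, 5.0, 10.0]
print(min_norm_max([0, 8, 9, 10], 0))   # min removed -> [0, 9, 10]
```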

I am only interested in sequential elimination if it can be made monotonic. For now I am going to stick to just a top-two runoff instead of eliminating at each round, since I know that preserves monotonicity.

My understanding was that it was the ratio between the scores that was important, so if you allow 0 as a score, it would break the system, so you need a workaround, or caveat.

SV being score voting? I don't see anything that contradicts what I said. If my ballot is:

A=1, B=2, C=3

Then this ballot will cancel it out:

A=4, B=3, C=2

This ballot won't:

A=6, B=4, C=2

The differences, not the ratios, matter.
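A quick check of the cancellation claim above, as a sketch: under plain score summation, the ballot with equal-and-opposite differences produces an exact three-way tie, while the one with the same preference order but different differences does not.

```python
# Sum scores across ballots (plain Score voting totals).
def totals(*ballots):
    return {c: sum(b[c] for b in ballots) for c in ballots[0]}

b1 = {"A": 1, "B": 2, "C": 3}
b2 = {"A": 4, "B": 3, "C": 2}   # opposite differences to b1
b3 = {"A": 6, "B": 4, "C": 2}   # opposite order, different differences

print(totals(b1, b2))  # {'A': 5, 'B': 5, 'C': 5} -> exact tie
print(totals(b1, b3))  # {'A': 7, 'B': 6, 'C': 5} -> no tie
```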

It seems strange to allow people a range of scores but then tell them that they should only distinguish between candidates they like. If there is a chance of the election being between two candidates I don't like, why wouldn't I indicate that I prefer one over the other, so that the lesser of two evils gets in? Still, wherever you set the zero point is still arbitrary, and it doesn't make utility ratios any more reasonable as a concept (see previous link).

OK, I've just had a quick look at the wiki to check I have the system right. The following are the ballots:

10 voters: A=9, B=1, C=0

10 voters: A=0, B=1, C=9

15 voters: A=0, B=8, C=9

The raw score totals are A=90, B=140, C=225. C also beats both A and B head to head by a 2.5:1 ratio. So under score voting or Condorcet, C wins easily.

However, under DV this normalises to:

10 voters: A=90, B=10, C=0

10 voters: A=0, B=10, C=90

15 voters: A=0, B=47, C=53

The totals are: A=900, B=906, C=1694. A is eliminated. Normalisation happens again:

10 voters: B=100, C=0

10 voters: B=10, C=90

15 voters: B=47, C=53

The totals are: B=1806, C=1694.

B wins. This is a bad result under DV.
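The worked example above can be reproduced with a short script. This is a sketch of DV as described in the thread (each ballot normalized to sum 100, the candidate with the lowest total eliminated, then renormalize and repeat), not a reference implementation.

```python
# Normalize one ballot so its scores sum to `total` (skip all-zero ballots).
def normalize(ballot, total=100):
    s = sum(ballot.values())
    return {c: v / s * total for c, v in ballot.items()} if s else ballot

# DV elimination loop over (weight, ballot) pairs; returns the winner.
def dv_winner(weighted_ballots):
    candidates = set(weighted_ballots[0][1])
    while len(candidates) > 1:
        rounds = [(w, normalize({c: b[c] for c in candidates}))
                  for w, b in weighted_ballots]
        totals = {c: sum(w * b[c] for w, b in rounds) for c in candidates}
        candidates.remove(min(totals, key=totals.get))
        weighted_ballots = rounds
    return candidates.pop()

# The three ballot groups from the example above:
ballots = [(10, {"A": 9, "B": 1, "C": 0}),
           (10, {"A": 0, "B": 1, "C": 9}),
           (15, {"A": 0, "B": 8, "C": 9})]
print(dv_winner(ballots))  # B
```

Running this confirms the hand calculation: A is eliminated first, and after renormalization B overtakes C, even though C is both the Score and Condorcet winner.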