Does this sequential variation of STAR exist?

I considered that. In my implementation of the method (before hearing it already had a name), my goal was to extend STAR while making as few changes as possible. STAR maximizes the strength of each ballot after eliminating candidates, so I do the same; I think doing so is the core selling point of STAR. That is: “don’t worry if your favorite candidate isn’t going to be a front runner; if that happens, we’ll do the right thing for you.”

I just do it in a series of steps rather than one big step, but of course if there are only three candidates, my implementation behaves identically to STAR.
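If it helps make the steps concrete, here’s a rough JavaScript sketch of the tabulation I have in mind (the ballot format, the helper names, and the use of a min-max stretch in later rounds are just my shorthand, not a spec):

function sequentialStar(ballots, candidates, MAX = 5) {
  let remaining = [...candidates];
  let rescale = false; // first round counts each ballot exactly as cast
  while (remaining.length > 1) {
    const totals = Object.fromEntries(remaining.map(c => [c, 0]));
    for (const ballot of ballots) {
      const ratings = remaining.map(c => ballot[c]);
      const hi = Math.max(...ratings), lo = Math.min(...ratings);
      for (const c of remaining) {
        // after the first elimination, stretch the ballot to full strength
        totals[c] += (rescale && hi > lo)
          ? ((ballot[c] - lo) / (hi - lo)) * MAX
          : ballot[c];
      }
    }
    remaining.sort((a, b) => totals[a] - totals[b]).shift(); // drop the lowest scorer
    rescale = true;
  }
  return remaining[0];
}

With three candidates this does one raw-score elimination followed by a normalized two-way round, which is exactly STAR’s runoff.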

I don’t know why someone would give the scores as 3,2,1; that’s kind of a psychology question. Personally, I’d consider it an error, since I assume that someone who spent the effort to go to the polls would want their vote to count for as much as allowed. Maybe they’re OCD and really want candidate B right in the middle between A and C, but in that case they could at least do 5,3,1 to get a bit more mileage out of their ballot.

I understand that not everyone thinks as strategically as I do, but my general philosophy is to design the system as if everyone did, because my observation – in politics and everywhere else – is that things tend to converge on “maximum strategy” over time. (People often “play nice” initially, but when they get tired of feeling like a sucker, they get more wily.) This is the whole concept that game theory is built on, where you talk about equilibria and such. A scenario where voters submit ballots that don’t use the full range is not an equilibrium.

My solution is a compromise between extremes. One extreme is to normalize even the first round. The other extreme is to infer that if a voter’s initial ballot is diluted, they wish for the tabulation method to use a similarly diluted ballot on subsequent rounds. Honestly, the only reason I care about the choice between the three options is in terms of making the method easiest to explain, easiest to compare to STAR, and therefore easiest to sell.
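In code terms, the three options differ only in how a ballot’s remaining ratings get rescaled between rounds. A sketch, under my own naming (nothing here is official terminology):

// Min-max stretch: lowest remaining rating -> 0, highest -> MAX.
const stretch = (r, MAX = 5) => {
  const hi = Math.max(...r), lo = Math.min(...r);
  return hi === lo ? [...r] : r.map(x => ((x - lo) / (hi - lo)) * MAX);
};

// Extreme 1: apply stretch() in every round, including the first.
// Extreme 2: never let a ballot exceed its original dilution, e.g. rescale a
//            later round's ratings only up to the ballot's original maximum.
// My compromise: count round one exactly as cast, stretch() every round after.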


Is there a distinction between this and the one you proposed back in May?

Ha, I guess not. Technically the difference is that the previous one normalizes before the first round while this doesn’t.

It seemed new when I actually implemented it in the Codepen and saw its benefits. I’ll go back and reread the previous thread. For some reason I thought the one I had asked about was… well, never mind, I’m not sure what I thought it was. Different. Obviously Cardinal Baldwin is something that keeps coming back to me; it is as close to ideal, to me, as any method I’ve encountered.

I like Cardinal Baldwin too. I actually came into it thinking about only doing the initial normalization. When I looked at real ballots, I saw that many people did not use the MAX or MIN values. There is a post here about that. Cardinal Baldwin can solve that issue if the normalization is applied beforehand. It also solves what I think is a bigger issue, where the existence of unviable candidates can distort the scores for voters who favour them. I have two issues with it:

  1. It’s not monotonic unless you do it only on the last round, at which point it becomes STAR.
  2. It does not preserve the amount of relative preference; it maximizes preference. This loses a fair amount of utilitarian accuracy.

This led to STLR. It gets around 1 by using only the last-round normalization. It gets around 2 by only “stretching the scale” in one direction, as sketched below. I know you do not agree with my justification for doing that, but I have no better options, and it seems better than just accepting 2. Actually, if you read the initial post, I was going to use the IRNR normalization, but this one felt more fair.
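Roughly, the two normalizations compare like this (a sketch; the helper names and ballot format are mine):

// Cardinal Baldwin style: stretch in both directions, so the ballot's
// lowest remaining rating becomes 0 and its highest becomes MAX.
// Relative preferences are maximized, not preserved.
const bothWays = (r, MAX = 5) => {
  const hi = Math.max(...r), lo = Math.min(...r);
  return hi === lo ? [...r] : r.map(x => ((x - lo) / (hi - lo)) * MAX);
};

// STLR style: stretch upward only, so the highest rating becomes MAX
// but the ratios between ratings (relative preference) are preserved.
const upOnly = (r, MAX = 5) => {
  const hi = Math.max(...r);
  return hi === 0 ? [...r] : r.map(x => x * (MAX / hi));
};

For example, bothWays([4, 2, 1]) gives [5, 1.67, 0], while upOnly([4, 2, 1]) gives [5, 2.5, 1.25]; only the second keeps the 4:2:1 ratios.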


But this also depends on how you look at Condorcet methods. Some of them explicitly say something like “If there is a Condorcet winner, elect them. Otherwise do this…” That’s what I see as having to resolve cycles, because they have to switch into a tie-resolution mode. Other methods just have a single procedure, and that procedure happens to elect the Condorcet winner. So they don’t have to explicitly resolve cycles; it just happens as part of the method.

I suppose one of the reservations I have about Cardinal Baldwin is the IRV-ness of it. With ranked methods, instead of doing IRV and eliminating one candidate at a time, you can look at each pair of candidates separately, and you end up with Condorcet methods, which most people would probably say are better than IRV. But if you do the same with score voting plus normalisation, every pair of candidates ends up as a 0 and a max anyway, so it ends up being the same as a ranked Condorcet method. So I wonder: if pairwise comparisons are better than sequential elimination for ranked methods, might they be for score as well, and is this the best use of a cardinal ballot?
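To see that equivalence for a single pair (a toy illustration, with my own helper name):

// Normalising a ballot over just two candidates turns any strict
// preference into 0 vs max, i.e. an ordinary pairwise Condorcet vote.
const pairNormalise = ([a, b], max = 5) =>
  a === b ? [a, b] : a > b ? [max, 0] : [0, max];

pairNormalise([3, 1]); // -> [5, 0]
pairNormalise([1, 4]); // -> [0, 5]
pairNormalise([2, 2]); // -> [2, 2] (no preference either way)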

However, eliminating the candidate with the lowest score (Cardinal Baldwin) seems a lot less risky, in terms of eliminating the wrong candidate, than eliminating the candidate with the fewest first-place rankings (IRV).

And generally I think it would probably give good results, and it definitely normalises in the right way if you’re going to normalise!

I think all of Jameson’s code and results are here.

Yeah, that’s exactly what I was hoping for here. I mean, I don’t care all that much if it is 100% Condorcet… I could be wrong, but I’m guessing there could be situations where there is a Condorcet winner but Cardinal Baldwin picks another candidate. I would expect those to be very near ties anyway, and rare (or carefully contrived) situations at that.

I also think of Cardinal Baldwin as being very simple and elegant, but part of my feeling on that is that I think I could explain it with a little animated graphic that I’d bet money my six-year-old would understand in half a minute. (She’s a sucker for the colorful animated graphics with glowy effects that are my specialty. :) ) Even describing it with words could be very simple: “rate things with 0 to 5 stars like on Amazon, but don’t worry about who you think is in the lead; we’ll make sure to do the right thing with your vote.” STAR sort of promises the same thing, but it’s just not as true (as in my example, where there are three front runners and voters get their “zero” because they honestly rated someone a 1 instead of something higher).

Right, I mean… kind of? The actual scores count for something beyond just establishing a ranking, since they decide who gets eliminated in the early rounds. The very first round is pure scores, so it makes a difference whether you vote A[5] B[1] C[0] or A[5] B[4] C[0]: B might get eliminated because you rated them a 1 instead of a 4. But yes, by the last round that is exactly what happens; your rating will be a 0 or 5, like STAR. (Well, STAR doesn’t word it that way, but it’s the same thing.)

The only difference between this and STAR is that it smooths out the transition from the first round to the last, to minimize the chance of a block of voters screwing up by rating someone too low, resulting in that candidate getting eliminated when they could have won.

Yeah Cardinal Baldwin only differs from STAR in how it determines which candidates are in the final round, while STLR only differs from STAR in how it scores the final round. (at least that’s how I implemented STLR in my Codepen)

My concern is with that situation where a block of voters wants to give someone a higher score than they otherwise would, out of legitimate fear that if they don’t, even worse candidates will make it to the final round. That seems like a significant problem to me.

Ultimately, I would like for STAR to be just a bit more Condorcet-like, and a little less affected by irrelevant alternatives. The other method which seems (to me) to do almost exactly the same thing is the one where it picks the Condorcet winner if one exists, and if there is none, it eliminates all candidates that aren’t in the Smith set, normalizes the ballots, and picks the one with the highest score. My gut tells me that this is almost always going to have the same result as Cardinal Baldwin, even though it kind of approaches it backwards.
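For reference, here’s a hedged sketch of that Smith-then-normalize idea (the helper names, ballot format, and the simple grow-a-prefix Smith computation are my own, and it glosses over pairwise ties):

function smithScore(ballots, candidates, MAX = 5) {
  // beats[a][b] is true when more ballots rate a above b than below.
  const beats = {};
  for (const a of candidates) {
    beats[a] = {};
    for (const b of candidates) {
      if (a === b) continue;
      let up = 0, down = 0;
      for (const v of ballots) {
        if (v[a] > v[b]) up++;
        else if (v[a] < v[b]) down++;
      }
      beats[a][b] = up > down;
    }
  }
  // Smith set: sort by pairwise win count, then take the smallest prefix
  // whose members all beat every candidate outside it.
  const wins = c => candidates.filter(d => d !== c && beats[c][d]).length;
  const order = [...candidates].sort((a, b) => wins(b) - wins(a));
  let k = 1;
  while (k < order.length &&
         !order.slice(0, k).every(a => order.slice(k).every(b => beats[a][b]))) {
    k++;
  }
  const finalists = order.slice(0, k); // a single finalist is the Condorcet winner
  // Normalize each ballot over the finalists; highest total wins.
  const totals = Object.fromEntries(finalists.map(c => [c, 0]));
  for (const v of ballots) {
    const rs = finalists.map(c => v[c]);
    const hi = Math.max(...rs), lo = Math.min(...rs);
    if (hi === lo) continue; // no preference among finalists
    for (const c of finalists) totals[c] += ((v[c] - lo) / (hi - lo)) * MAX;
  }
  return finalists.reduce((a, b) => (totals[b] > totals[a] ? b : a));
}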

Voters often have strong positive and negative preferences (the min-max tactic). This may cause votes in Cardinal Baldwin to tactically take a form where only the scores 5, 1, and 0 are used, which would make the count similar to IRV (or an Approval-IRV).
Aside from the shortcomings of IRV, the problem with such tactical ratings is that they:

  • push voters away from taking advantage of the expressiveness offered by the range (they do not use the values 2, 3, 4);
  • make the count more or less unpredictable, given that once all the candidates a voter gave 5 points are eliminated, it is not known which of the remaining candidates (those given 1 point) his vote will then support at full strength.
    First you have a vote like this: [5,1,1,1,1,0,0,0,0]; after eliminating the candidate with 5 points it becomes: [5,5,5,5,0,0,0,0].
    Four candidates suddenly receive maximum support when in the previous step they had practically none (see the snippet below).
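That jump is just the renormalization applied to the surviving ratings (a toy illustration; the helper name is mine):

// Min-max stretch over the eight remaining candidates after the 5 is gone.
const stretch = (r, MAX = 5) => {
  const hi = Math.max(...r), lo = Math.min(...r);
  return hi === lo ? [...r] : r.map(x => ((x - lo) / (hi - lo)) * MAX);
};
stretch([1, 1, 1, 1, 0, 0, 0, 0]); // -> [5, 5, 5, 5, 0, 0, 0, 0]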

Honestly, I would 100% use the min-max tactic in that voting method.

What? I never said it was wrong; if I modified it, it was only to remove the NaN and to adapt it to my other code, but I never said that yours was wrong.

I’d need to see a scenario where doing that would benefit you.

I demonstrated a scenario where, with STAR, there is an incentive to do just that, but with Cardinal Baldwin, there isn’t. Under STAR, because a block of voters gave someone a 1, that candidate didn’t make it to the final round and the candidates they rated zero did. Had some of them given the candidate a 5, that wouldn’t have happened. Or, if it was Cardinal Baldwin, it wouldn’t have happened. That is demonstrated at the Codepen linked and explained in the first post in this thread.

Can you come up with a scenario where a block of voters would benefit from dishonestly exaggerating (i.e. “min-max tactic”) in Cardinal Baldwin? (and by that I mean, beyond giving their favorite a 5 and their least favorite a 0)

Oh sorry, I just saw you said “fixed”. I edited the post above. It looked pretty different from mine, but I didn’t look into it further. Yeah, I think I’ve since addressed the occasional NaN thing. This is what I have:

// Skip ballots that rate both finalists zero, so the scaler can't
// divide by zero and produce NaN.
if (rating1 > 0 || rating2 > 0) {
  // Scale this ballot so the higher of its two ratings becomes maxVal.
  var scaler = maxVal / Math.max(rating1, rating2);
  total1 += rating1 * scaler;
  total2 += rating2 * scaler;
}

The tactic is not based on an individual case but on statistics, using only the information you have before voting.
A voter who wants to favor his favorite candidate as much as possible, even to the detriment of his other interests, will be encouraged to vote like this:
A[5] B[0] C[0] D[0] E[0]
or at most like this:
A[5] B[1] C[1] D[0] E[0]
because of how the counting works:

  • Positive: it’s a fact that this vote disadvantages the other candidates as much as possible relative to A, who has 5 points (making them statistically more likely to lose first).
  • Positive: it’s a fact that in the worst case, where A loses, my vote doesn’t become null; instead it becomes:
    B[5] C[5] D[0] E[0], and that’s a good thing for me.
  • Negative: it’s a fact that my true preferences regarding B and C are falsified (they become equal even if in reality they could have different values), but:
  • Positive: it’s also true that they still remain favored to the maximum over my hated candidates (D and E).

Summary: if I am willing to give up the difference in preference between B and C to get all the other positives in return, then I will use this tactic.
As hypothesized, a voter who has an interest in supporting his favorite candidate A as much as possible, even to the detriment of the others if necessary (which is not uncommon), will use it.
All of this applies if the voter has no information about the order in which candidates will be eliminated (otherwise this tactic might be reduced, but only by giving way to worse tactics).

To deny this you have to deny the positive and negative points listed above.
Asking me for a single example of failure doesn’t make much sense when the tactic is based on statistics.

I’m not convinced of that. I’d think that if there is a strategic incentive, you should always be able to show an example where a block of voters gets a better result by voting in a way that differs from their true preferences. (As I showed in my example, where Cardinal Baldwin did not have the strategic-incentive problem, but Score, STAR and STLR did.)

If no such case can exist, it doesn’t meet my definition of a strategic incentive.

I’m a voter (an ignorant one) and I ask you: “Why shouldn’t I vote using that tactic, which statistically seems sensible to me?”
And you answer: “If no such case can exist, it doesn’t meet my definition of a strategic incentive.”
Okay, but then you have to prove that no such case can exist; otherwise I will still use the tactic.
You must prove it to me (the “voter”), not the other way around.

Having said that, I did your work for you and found this case:
5: A[5] B[0] C[5]
3: A[4] B[5] C[0]
3: A[0] B[5] C[4]
A wins.
Both (or just one) of the two groups of 3 voters apply the tactic; e.g. the third group votes:
3: A[0] B[5] C[1]
B wins.

@Keith_Edmonds, this may interest you.
This is a case where monotonicity fails in Cardinal Baldwin; it’s not the best example, since it’s the first one I found, but it’s sufficient as proof:

The red value (in the image) is the one that is increased, causing the defeat of D.

“Seeming sensible” is a psychology issue that I am not interested in. It is fundamentally different from there being a true strategic vulnerability.

A strategic vulnerability is something where someone [1] can actually get a better outcome by voting differently than their true preferences. Generally they do so because they have some knowledge of how others are likely to vote.

That’s the definition I’m using, and if you are expanding it to include the idea of someone wrongly thinking they can gain such an advantage by voting differently from their real preferences, then math can’t touch it. It’s just psychology.

And anyone who wants to complain about a method can just make up some group of voters that has some incorrect belief that causes them to use it in a way that harms their interests. Sorry, but I’m not going there.

You said you would use a min-max tactic under this system, even though you didn’t even qualify that choice with something like “if I knew who the front runners were likely to be.” Really?

We can’t prevent you or anyone else from voting badly. I don’t think there should be an IQ test for voting, but if someone wants to put their least favorite candidate first because they are dumb enough to think that’s a good plan, I’m not losing sleep over that and I’m certainly not going to agree that there is a strategic incentive to do so.

First of all, you aren’t a voter; you are someone in a forum about election theory. (If you are a voter and want to use a bad strategy… okay. Let us know how that works out for you.)

If you are saying a voting method has a vulnerability, you need to demonstrate that better than just saying “I’d foolishly use this system in a way that harms my interests, so this system is flawed.”

I never made a claim that Cardinal Baldwin doesn’t have any vulnerabilities, and I would guess that there are some, although I think they would probably be hard to exploit in the real world.

But if you can’t show me a sample of ballots that demonstrates this (especially now that I’ve gone to the effort to make it extremely easy to test such ballots), that’s a pretty significant fact in how seriously we should take your concerns.

[1] usually, in a voting context, we say a “block of voters”, simply because a single vote rarely changes the result, or if it does, it often only changes it from a win to a tie or a tie to a win. But it’s fair to say that the hypothetical block of voters in question all have exactly the same preferences.

How does one do this?

I don’t really know what I’m looking at to be honest.

If you don’t normalise in the first round but do subsequently, I think you would be building in a failure of Independence of Irrelevant Alternatives. If you introduced a new candidate with very little support (with all other scores the same), they would be eliminated immediately, and the second round would be the same as the original first round (before the new candidate was introduced) but normalised.
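Toy numbers to make that concrete (the scores and the helper are mine): say 2 voters cast A[5] B[0] and 3 voters cast A[3] B[4]. On raw scores A wins, 19 to 12. Now add a no-hoper X whom everyone rates 0. X is eliminated first, and the second round normalises each ballot over {A, B}:

const stretch = (r, MAX = 5) => {
  const hi = Math.max(...r), lo = Math.min(...r);
  return hi === lo ? [...r] : r.map(x => ((x - lo) / (hi - lo)) * MAX);
};
stretch([5, 0]); // -> [5, 0]  (the 2 A-first ballots are unchanged)
stretch([3, 4]); // -> [0, 5]  (the 3 B-leaning ballots go to full strength)

The normalised totals are A = 10, B = 15, so B now wins, even though nobody changed a single A or B score.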

It would be, if it weren’t for those facts I told you about before, which you have not denied, so for now they still stand.

[1] can actually get a better outcome by voting differently than their true preferences.

I have given you a clear example where it is very good to use that strategy, while you continue to complain without showing anything.

If you are saying a voting method has a vulnerability…

You’re the one who’s saying it doesn’t have one!
There is the min-max tactical vote, and I ask you why I shouldn’t use it. That’s all. You have to show me that your system works, not the other way around.

especially now that I’ve gone to the effort to make it extremely easy to test such ballots

Your simulation is good, and it shows some aspects of the voting systems, but if you really think that simulation is enough to declare a voting method valid, you are very wrong.

I made additions to that simulator and used different graphics; maybe that’s more understandable:

The Codepen allows you to easily paste in score ballot samples (in a format commonly used to describe ballot sets in this forum), and test them under various methods. Since it is a Codepen, you can tweak the ballot samples as well as the JavaScript code itself as you wish.

Here is a quick video I made explaining both the general stuff which that Codepen does, as well as this particular sample set of ballots which demonstrates a vulnerability to strategic voting under Score, STAR and STLR. Cardinal Baldwin does not show the same vulnerability, and instead picks the Condorcet winner.