A new(?) STAR variant

Well, I don’t think the issue is that I don’t “get it,” in the same sense that I don’t get quantum mechanics.

The problem is that I understand you are asking for half of something when you haven’t defined what zero is, beyond glossing over it. Random people might not think about it that hard, and will probably think of zero as “the worst candidate that is likely to run” or something.

But if they are thinking of it that way, of course, zero is relative. If a candidate runs who is way worse than the one they rated as zero in the last election, they are going to move zero downward.

I try to avoid saying anything about my political preferences here, but sometimes it is hard to avoid using real world examples.

The last person I voted for for president was actually quite “meh” for me, at least compared to the president that had served before her, but she was way, way better than the other person running, so I heartily endorsed her.

In the 2008 election, the major party candidate that I did not prefer was someone I actually had a great deal of respect and admiration for and thought was a decent, competent person. But I liked the other major party candidate way, way more.

Sure, but if we agree that zero represents being tortured to death, then I will just rate all candidates as 5, because the idea of anyone who is running occupying the office is way better than being tortured to death.

Just like lots of people know what half of room temperature is. If you had asked me when I was 8 years old, I could have confidently told you that it was right around 35 degrees.

Only when I learned that there are equally valid temperature scales with completely different zeros did I realize that my assumption was wrong and the question is not so easily answered.

If you define zero as “meh”, well, you’ve just kicked the can down the road. I don’t know what you mean by “meh.” “Meh” is just as relative.

My view is that zero is always going to be relative. If you simply say “zero is your least favorite candidate” – just as you have said “5 is your favorite candidate” – it makes things much more straightforward.

The other nice thing about saying that zero should be your least favorite candidate is that it removes the obvious way for me to “cheat” by exaggerating and giving that candidate a zero anyway, regardless of whether that is what you are telling me I “should” do.

Well, ya found someone right here! Interview away. :slight_smile:

I’d be interested in what she has to say, but since she advocates STAR, she might well be in agreement with me that zero should just represent your least favorite of the candidates that are currently in the running. According to STAR’s wikipedia page:

each voter scores every candidate with a number from 0 to 5, with 0 representing “worst” and 5 representing “best.”

I would interpret “worst” to mean “the worst of the choices presented.”

The data does not show this. Most people can distinguish between what they like and what they do not like. Hell, babies have a pretty solid record on this.

This is exactly the point of this whole system and 100% of the reason I invented it. It solves a great deal of that problem which does exist in Score.

Yes, relative to your preference. I just do not understand what you are missing here.

No, that causes huge problems. It would put everybody on a different scale, and not only that, on a scale which we can’t rescale to something roughly comparable. You are missing the whole point. If you feel that way, then just endorse Score.

It is not about your worst. It is about the second worst. You are advocating that people give candidates they only hate a little a score greater than 0. This is hard to get people to do. The solution is just to tell everybody to do that anyway.

OK so if I gave you the list:

  • An apple
  • An ice cream cone
  • A steak
  • Water
  • Dirt
  • Dog poop

q1: Do you know which of these things you would enjoy the taste of and which you would not enjoy the taste of?

q2: Of the things you do like the taste of, do you have a preference?

q3: Do you have a favorite?

q4: Given that your favorite costs you 5 dollars to taste, what would you think is fair to pay to taste the others you like?

To speed up precinct-summable vote-counting for STLR (i.e. where you would tabulate the result of every possible runoff and then transmit that, rather than first finding out who entered the runoff and only tabulating the result of that specific runoff), it may be useful for vote-counters to consider that every candidate a voter scores as their 1st choice is guaranteed to get the MAX score in every possible runoff. And similarly, the voter’s 2nd choice will get the MAX score in every runoff except against the voter’s 1st choice(s), the 3rd choice will get the MAX score in every runoff except against the voter’s 1st and 2nd choice(s), etc.
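To make that concrete, here is a minimal sketch (Python, with names of my own choosing) of how a precinct could build a summable table of leveled runoff totals for every pair of candidates, assuming “leveling” means stretching each ballot from [0, max-of-the-two-finalist-scores] to [0, MAX] so that the finalist the voter rates higher always contributes MAX:

```python
from itertools import combinations

MAX = 5  # top of the score range

def leveled_runoff_tallies(ballots, candidates):
    """For every pair of candidates, sum the leveled scores each would get
    in a head-to-head runoff.  Each ballot is stretched so the higher-rated
    of the two finalists contributes MAX (as noted above for 1st choices)."""
    tallies = {pair: [0.0, 0.0] for pair in combinations(candidates, 2)}
    for ballot in ballots:                 # ballot: dict of candidate -> score
        for (a, b), totals in tallies.items():
            top = max(ballot[a], ballot[b])
            if top == 0:                   # both finalists scored 0: no support either way
                continue
            factor = MAX / top             # stretch so the higher of the two hits MAX
            totals[0] += ballot[a] * factor
            totals[1] += ballot[b] * factor
    return tallies
```

Each precinct’s table can then be added entry by entry to every other precinct’s, so only these pairwise totals need to be transmitted.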


I found that the normalization I used in DSV is the same as the one in STLR.
I want to clarify that I didn’t invent the normalization formula in DSV; it’s a very well-known formula in the field of digital images (for example, to map colors from [0,255] to [-128,+128]). Specifically, it’s used in “contrast stretching.” In that context there are many range normalizations.
If you are looking for a name for that normalization, you could use the word “stretch,” saying something like “remove the loser and stretch the range…”

Your formula for increasing contrast in an image throws away data: everything below middle gray in the original image would become black, because negative values aren’t allowed.

The formula that most matches STLR, though, is to take the brightest pixel in the image and make it white (255), brightening all other pixels by the same factor (through simple multiplication). It doesn’t throw away data, aside from rounding of floating-point values to integers. If the darkest pixel in the image is a dark gray, it will become a lighter gray.

The formula that most matches the normalization in my implementation of Cardinal Baldwin is to take the brightest pixel in the image and make it white (255), taking the darkest pixel and making it black (0), and adjusting all other pixels accordingly, making some darker and some lighter. It subtracts, then multiplies. It doesn’t throw away data either.

(I’m assuming this is a grayscale image, i.e. black and white)

Your formula for increasing contrast in an image throws away data: everything below middle gray in the original image would become black, because negative values aren’t allowed.

I found the formula I was talking about in the area of digital images (contrast stretching), and it is this:
v_out = (v_in − min) × (MAX − MIN) / (max − min) + MIN
where [min, max] is the range of the input values and [MIN, MAX] is the target range.
In the field of images it is used, among other things, to obtain negative output ranges (mine was just an example).
From this formula many of the formulas used to normalize the votes are obtained, including those of STLR and Cardinal Baldwin.

Given a vote like this:
[1,2.5]
STLR normalizes it from the range [0,2.5] to [0,5], obtaining [2,5].
Cardinal Baldwin normalizes it from [1,2.5] to [0,5], obtaining [0,5].
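For anyone who wants to check those numbers, here is a small sketch of the two normalizations as I understand them from this thread (function names are mine; ballots where every score is equal would need special handling to avoid dividing by zero):

```python
MAX = 5  # top of the score range

def stlr_level(scores):
    """STLR-style levelling: stretch [0, max(scores)] to [0, MAX].
    Scores can only go up; a 0 stays a 0."""
    top = max(scores)
    return [s * MAX / top for s in scores]

def baldwin_normalize(scores):
    """Min-max stretch as in Cardinal Baldwin: map [min, max] to [0, MAX]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) * MAX / (hi - lo) for s in scores]

print(stlr_level([1, 2.5]))         # [2.0, 5.0]
print(baldwin_normalize([1, 2.5]))  # [0.0, 5.0]
```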

This formula is just a linear mapping; it appears in many, many fields. This is the one for Baldwin, but the levelling in STLR is different. It can maybe be thought of as a subset of this, since it only stretches in one direction: scores in the levelling can only go up.

STLR stretches from [0,max] to [0,MAX] with max = maximum vote value and MAX = maximum range value, but the formula is always the same with min = 0.
To say that it is a subset just because the minimum is fixed at 0 doesn’t seem enough to me.
If a method (like STAR) has the concept of “elimination of candidates” then it is obvious that “any” type of normalization among the known ones can be applied to this method, to obtain different results.
This also applies to Sequential loser-elimination (SLE) methods.
The discovery is that one normalization works better than another, and I agree that STLR is better than STAR.

I haven’t ever seen a clear explanation for why the “single direction” normalization of STLR is better. I’m not saying it isn’t, I’d just like to hear an explanation for why it is.

What if the direction were the opposite, i.e. it only lowered scores rather than raising them? What would be the difference?


As for me, in such a case:
55%: A[5] B[4] C[0]
45%: A[0] B[4] C[5]
I don’t want A to win.
STAR elects A; STLR elects B.
I don’t care if that case isn’t realistic; I don’t like the philosophy behind it.
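For the record, the numbers in that example do come out as stated if STLR’s runoff is assumed to level each ballot so that the higher-rated finalist gets the maximum score; a quick check:

```python
MAX = 5
ballots = [{'A': 5, 'B': 4, 'C': 0}] * 55 + [{'A': 0, 'B': 4, 'C': 5}] * 45

totals = {c: sum(b[c] for b in ballots) for c in 'ABC'}   # A: 275, B: 400, C: 225
x, y = sorted(totals, key=totals.get, reverse=True)[:2]   # finalists: B and A

# STAR runoff: plain preference count between the two finalists.
star = {x: sum(b[x] > b[y] for b in ballots),
        y: sum(b[y] > b[x] for b in ballots)}

# STLR runoff: sum of leveled scores between the two finalists.
stlr = {x: 0.0, y: 0.0}
for b in ballots:
    top = max(b[x], b[y])
    if top:
        stlr[x] += b[x] * MAX / top
        stlr[y] += b[y] * MAX / top

print(max(star, key=star.get))   # A  (STAR winner)
print(max(stlr, key=stlr.get))   # B  (STLR winner)
```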

I was hoping for something beyond “it works better for this one example.” What is the reasoning behind single direction normalization? And what if it was the other direction?

I was hoping for something beyond “it works better for this one example.”

That example shows a STAR mechanism that I don’t like: STAR can elect a candidate whose sum of points is very (too) far from the highest sum. STLR mitigates this difference while keeping the positive sides of STAR.

What is the reasoning behind single direction normalization? And what if it was the other direction?

I have talked about STAR vs STLR; I have never considered the direction of normalization.

In the simulator enter this as strategy: [5,4,3,2,1,0]
This strategy is equivalent to reversing the votes, and what you would expect is that the more such inverted votes there are, the more those voters lose by using this strategy (that is, the more votes are inverted, the more the results should change for the worse).
If you observe the behavior of STAR and STLR, you will find that the results of STLR are more sensible (although DV is still better).

Similarly, if someone scores the two frontrunners 0 and 1, I don’t think that should outweigh a voter who has scored them 5 and 1. I don’t see why one end is more “important” than the other.

Given [A,B], then:

  • if I say [0,1] it means that I like B “infinitely” more than A.
  • if I say [1,5] it means that I like B 5 times more than A.

It doesn’t seem too foolish, although I would prefer to impose a maximum proportionality of 5 (given that the range has MAX = 5), so for me the best thing would be:
[0,1] --> [1,5]
[0,2] --> [1,5]
[1,5] --> [1,5]
[2,5] --> [2,5]
but STAR, STLR, etc. don’t work with proportions, so I can also accept:
[0,1] --> [0,5]

Opposite direction:
[2,5] --> [1,2.5]
or
[2,5] --> [0,?]
here the answer seems less obvious to me, if you want the minimum to map to 0.

I think that’s the wrong interpretation.

It depends on how the voter thinks. I would give 0 to many candidates below a certain threshold of appreciation, so for me the 0 would be extremely “distant” from the 1.

How a voter thinks is less important than “what is a voter rewarded for.”

Let’s say all those candidates that are below whatever threshold you have, turn out to be the only ones that are front runners. And let’s say you dislike one of them a lot more than the others.

Your failure to differentiate between them means that your vote is essentially ignored. If you and a lot of other people with similar preferences do that, you could end up with a worse option than you needed to.

That’s your option, but you aren’t voting smartly and I don’t think the designers of voting systems should put too much effort into getting into your head to determine why you made that choice.

Let’s say all those candidates that are below whatever threshold you have, turn out to be the only ones that are front runners. And let’s say you dislike one of them a lot more than the others.
Your failure to differentiate between them means that your vote is essentially ignored.

It’s a fact that if one ballot gives A 4 points, then B will need:

  • 1 vote with 4 points.
  • or 2 votes with 2 points.
  • or 4 votes with 1 point.

to tie with A.
These are, in all respects, proportions.
If I assign 5 points to my favorite candidate A, it means that all the candidates I consider at least 6 times worse than A will get 0 points, to respect the proportions and my interests.
If my real proportions are A[10] B[5] C[1], then narrowing them to A[5] B[3] C[1] (to make them fall within the range [0,5]) in fact falsifies my true interests.
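A tiny numeric check of both points above (the tie arithmetic, and how squeezing the ballot into [0,5] distorts the proportions):

```python
# One ballot gives A 4 points; each combination below gives B the same total.
assert 1 * 4 == 2 * 2 == 4 * 1 == 4

# Real proportions vs. what fits on a 0-5 ballot.
real    = {'A': 10, 'B': 5, 'C': 1}
clamped = {'A': 5, 'B': 3, 'C': 1}

print({c: real[c] / real['C'] for c in real})        # A is 10x C, B is 5x C
print({c: clamped[c] / clamped['C'] for c in real})  # A is only 5x C, B only 3x C
```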

If you and a lot of other people with similar preferences do that, you could end up with a worse option than you needed to.

It’s actually the exact opposite.
If the candidates that I hate are actually the winning ones, then it means there is a majority who supported them, and if that majority reasoned the way I do, they would have supported them in a way that could not be beaten.
This is the classic min-max tactic, which is in fact effective (in the worst case it does nothing, but when it works, your favorite candidate wins when they otherwise would have lost).

True designers of voting systems take these things into consideration. If a voting method encourages the use of a certain tactical vote, it is the fault of the designers and not of the voter who uses this strategy to his advantage.

Example:
you have to choose a job among five, which give you the following salaries (which represent the utility for you):
A: $ 20
B: $ 100
C: $ 200
D: $ 1000
E: $ 1080
Between A and B there is the same dollar difference as between D and E, but between A and B I would choose B over A by a lot (5 times more), while between D and E there is little difference.
If you force me to represent these real interests of mine with a range of [0,5], my vote will inevitably be this: A[0] B[0] C[1] D[5] E[5]. But in your opinion I should also represent the difference between the worst candidates, so I should vote like this: A[0] B[1] C[2] D[4] E[5], or not?
How would you (intelligently) vote in this context?
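For what it’s worth, the “inevitable” ballot above is exactly what a simple linear mapping of those salaries onto [0,5], rounded to whole scores, produces (a sketch, assuming plain min-max normalization of the utilities):

```python
utilities = {'A': 20, 'B': 100, 'C': 200, 'D': 1000, 'E': 1080}

lo, hi = min(utilities.values()), max(utilities.values())
ballot = {c: round((u - lo) * 5 / (hi - lo)) for c, u in utilities.items()}

print(ballot)  # {'A': 0, 'B': 0, 'C': 1, 'D': 5, 'E': 5}
```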