# A new(?) STAR variant

The data does not show this. Most people can distinguish between what they like and what they do not like. Hell, even babies have a pretty solid record on this.

This is exactly the point of this whole system and 100% of the reason I invented it. It solves a great deal of that problem which does exist in Score.

Yes, relative to your preference. I just do not understand what you are missing here.

No, that causes huge problems. This would put everybody on a different scale, and not only that, they are on a scale which we can't rescale to something roughly equal. You are missing the whole point. If you feel that way then just endorse Score.

It is not about your worst. It is about the second worst. You are advocating that people give candidates they only hate a little a score greater than 0. This is hard to get people to do. The solution is to just tell everybody to do that anyway.

OK so if I gave you the list:

• an apple
• an ice cream cone
• a steak
• water
• dirt
• dog poop

q1: Do you know which of these things you would enjoy the taste of and which you would not enjoy the taste of?

q2: Of the things you do like the taste of, do you have a preference?

q3: Do you have a favorite?

q4: Given that your favorite costs you 5 dollars to taste, what would you think is fair to pay to taste the others you like?

To speed up precinct-summable vote-counting for STLR (i.e. where you would tabulate the result of every possible runoff and then transmit that, rather than first finding out who entered the runoff and only tabulating the result of that specific runoff), it may be useful for vote-counters to consider that every candidate a voter scores as their 1st choice is guaranteed to get the MAX score in every possible runoff. And similarly, the voter's 2nd choice will get the MAX score in every runoff except against the voter's 1st choice(s), the 3rd choice will get the MAX score in every runoff except against the voter's 1st and 2nd choice(s), etc.
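A minimal sketch of this precinct-summable tabulation, assuming STLR's runoff leveling multiplies a ballot's two runoff scores by MAX divided by the higher of the two (so the voter's preferred runoff candidate always gets MAX, as described above). The candidate names, ballot format, and function name here are illustrative, not from any official implementation:

```python
from itertools import combinations

MAX = 5  # top of the score range

def leveled_tallies(ballots):
    """For every candidate pair, tally the leveled (stretched) scores
    each candidate would receive in that possible runoff."""
    candidates = sorted(ballots[0].keys())
    tallies = {}
    for x, y in combinations(candidates, 2):
        tx = ty = 0.0
        for b in ballots:
            top = max(b[x], b[y])  # the voter's preferred of the two...
            if top == 0:
                continue           # ...unless both are scored 0
            scale = MAX / top      # ...always gets MAX after leveling
            tx += b[x] * scale
            ty += b[y] * scale
        tallies[(x, y)] = (tx, ty)
    return tallies

# 55 voters: A[5] B[4] C[0]; 45 voters: A[0] B[4] C[5]
ballots = 55 * [{"A": 5, "B": 4, "C": 0}] + 45 * [{"A": 0, "B": 4, "C": 5}]
print(leveled_tallies(ballots)[("A", "B")])  # (275.0, 445.0)
```

Transmitting all of these pairwise tallies from each precinct lets the central count pick the runoff pair later without a second pass over the ballots.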


I found that the normalization I used in DSV is the same as in STLR.
I want to clarify that I didn't invent the normalization formula in DSV; it's a very famous formula in the field of digital images (to pass from [0,255] to [-128,+128] of colors). Specifically, it's used in "contrast stretching". In that context there are many range normalizations.
If you are looking for a name for that normalization, you could use the word "stretch", saying something like "remove the loser and stretch the range…"

Your formula for increasing contrast in an image throws away data, since everything below middle gray in the original image would become black, since negative values aren't allowed.

The formula that most matches STLR, though, is to take the brightest pixel in the image, and make it white (255), brightening all other pixels by the same amount (through simple multiplication). It doesn't throw away data, aside from rounding of floating point values to integers. If the darkest pixel in the image is a dark gray, it will become a lighter gray.

The formula that most matches the normalization in my implementation of Cardinal Baldwin is to take the brightest pixel in the image and make it white (255), taking the darkest pixel and making it black (0), and adjusting all other pixels accordingly, making some darker and some lighter. It subtracts, then multiplies. It doesn't throw away data either.

(Iām assuming this is a grayscale image, i.e. black and white)

Your formula for increasing contrast in an image throws away data, since everything below middle gray in the original image would become black, since negative values aren't allowed.

I found the formula I'm talking about, in the area of digital images (about contrast stretching), and it is this:
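The formula itself (an image in the original post) did not survive here; presumably it is the standard contrast-stretching linear normalization from image processing, mapping an input range [min, max] to an output range [MIN, MAX], which matches the STLR and Cardinal Baldwin examples that follow:

```latex
v' = (v - \min) \cdot \frac{\mathrm{MAX} - \mathrm{MIN}}{\max - \min} + \mathrm{MIN}
```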

In the field of images it's used to obtain negative values, but not only for that (mine was just an example).
From this formula many of the formulas used to normalize votes are obtained, including those of STLR and of Cardinal Baldwin.

Given a vote like this:
[1,2.5]
In STLR it normalizes from the range [0,2.5] to [0,5], obtaining [2,5]
In Cardinal Baldwin from the range [1,2.5] to [0,5], obtaining [0,5]
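A small sketch of the two stretches applied to that vote [1, 2.5], with MAX = 5 as the top of the range (the function names are mine, not from either method's specification):

```python
MAX = 5  # top of the score range

def stlr_stretch(scores):
    """STLR-style leveling: stretch [0, max] up to [0, MAX]
    (only the top moves; 0 stays at 0)."""
    top = max(scores)
    return [s * MAX / top for s in scores]

def baldwin_stretch(scores):
    """Cardinal Baldwin-style stretch: map [min, max] onto [0, MAX]
    (subtract the minimum, then multiply)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) * MAX / (hi - lo) for s in scores]

print(stlr_stretch([1, 2.5]))     # [2.0, 5.0]
print(baldwin_stretch([1, 2.5]))  # [0.0, 5.0]
```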

This formula is just a linear mapping. It appears in many, many fields. This is the one for Baldwin, but the leveling in STLR is different. It can maybe be thought of as a subset of this, since it only stretches in one direction: scores in the leveling can only go up.

STLR stretches from [0,max] to [0,MAX] with max = maximum vote value and MAX = maximum range value, but the formula is always the same with min = 0.
To say that it is a subset just because some values are at 0 doesn't seem enough to me.
If a method (like STAR) has the concept of "elimination of candidates", then it is obvious that "any" type of normalization among the known ones can be applied to this method, to obtain different results.
This also applies to Sequential loser-elimination (SLE) methods.
The discovery is that one normalization works better than another, and I agree that STLR is better than STAR.

I havenāt ever seen a clear explanation for why the āsingle directionā normalization of STLR is better. Iām not saying it isnāt, Iād just like to hear an explanation for why it is.

What if the direction was opposite? I.e., it only lowered it, not raised it? What would be the difference?


As for me, in such a case:
55%: A[5] B[4] C[0]
45%: A[0] B[4] C[5]
I donāt want A win.
STAR wins A, STLR wins B.
I donāt care if that case isnāt realistic; I donāt like the philosophy behind it.

I was hoping for something beyond "it works better for this one example." What is the reasoning behind single-direction normalization? And what if it was the other direction?

I was hoping for something beyond "it works better for this one example."

That example shows a STAR mechanism that I don't like: STAR can elect a candidate with a sum of points very (too) far from the highest sum. STLR mitigates this difference while keeping the positive sides of STAR.

What is the reasoning behind single-direction normalization? And what if it was the other direction?

I have talked about STAR vs STLR; I have never considered the direction of normalization.

In the simulator, enter this as the strategy: [5,4,3,2,1,0]
This strategy is equivalent to reversing the votes, and what is expected is that as the number of inverted votes increases, the more those voters lose by using this strategy (that is, the more votes are inverted, the more the results must change for the worse).
If you observe the behavior of STAR and STLR, you will find that the results of STLR are more sensible (although DV is still better).

Similarly, if someone scores 0 and 1 to the two frontrunners, I don't think that should outweigh a voter who has scored 5 and 1. I don't see why one end is more "important" than the other.

Given [A,B], then:

• if I say [0,1] it means that I like B "infinite" times more than A.
• if I say [1,5] it means that I like B 5 times more than A.

It doesnāt seem too foolish, although I would prefer to impose maximum proportionality 5 (given that the range has MAX = 5), so for me the best thing would be:
[0,1] --> [1,5]
[0,2] --> [1,5]
[1,5] --> [1,5]
[2,5] --> [2,5]
but STAR, STLR, etc. don't work with the proportions, so I can also accept:
[0,1] --> [0,5]
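One possible reading of the "maximum proportionality 5" idea, reverse-engineered from the four example mappings above (the exact rule is my guess: scale the pair so the top score becomes 5, then raise the bottom to at least 5/5 = 1, capping the ratio at 5):

```python
MAX = 5  # top of the score range

def cap_proportionality(lo, hi):
    """Stretch a two-candidate vote [lo, hi] so the top becomes MAX,
    then raise the bottom to at least MAX/5, so the ratio between the
    two scores never exceeds 5."""
    new_lo = lo * MAX / hi          # scale the pair so hi -> MAX
    return [max(new_lo, MAX / 5), MAX]

print(cap_proportionality(0, 1))  # [1.0, 5]
print(cap_proportionality(0, 2))  # [1.0, 5]
print(cap_proportionality(1, 5))  # [1.0, 5]
print(cap_proportionality(2, 5))  # [2.0, 5]
```

This reproduces all four of the example mappings, but it is only a sketch of one rule consistent with them.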

Opposite direction:
[2,5] --> [1,2.5]
or
[2,5] --> [0,?]
here the answer seems less evident to me, if you want 0 at the MIN.

I think thatās the wrong interpretation.

It depends on how the voter thinks. I would give 0 to many candidates below a certain threshold of appreciation, so for me the 0 would be extremely "distant" from the 1.

How a voter thinks is less important than "what is a voter rewarded for."

Letās say all those candidates that are below whatever threshold you have, turn out to be the only ones that are front runners. And letās say you dislike one of them a lot more than the others.

Your failure to differentiate between them means that your vote is essentially ignored. If you and a lot of other people with similar preference do that, you could get a worse option than you had to get.

Thatās your option, but you arenāt voting smartly and I donāt think the designers of voting systems should put too much effort into getting into your head to determine why you made that choice.

Letās say all those candidates that are below whatever threshold you have, turn out to be the only ones that are front runners. And letās say you dislike one of them a lot more than the others.
Your failure to differentiate between them means that your vote is essentially ignored.

Itās a fact that, if in a vote A is evaluated 4 points, it means that B will need:

• 1 vote with 4 points.
• or 2 votes with 2 points.
• or 4 votes with 1 point.

to tie with A.
These are in all respects proportions.
If I assign 5 points to my favorite candidate A, it means that all the candidates I consider at least 6 times worse than A will have 0 points, to respect the proportions and my interests.
If my real proportions are A[10] B[5] C[1], narrowing them like this: A[5] B[3] C[1] (to make them fall within the range [0,5]) in fact falsifies my true interests.
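The distortion is easy to check: the true vote A[10] B[5] C[1] has ratios A:B = 2 and A:C = 10, and narrowing it to A[5] B[3] C[1] changes both (a tiny sanity check, nothing beyond the numbers above):

```python
true_vote = {"A": 10, "B": 5, "C": 1}
narrowed = {"A": 5, "B": 3, "C": 1}   # squeezed into the [0,5] range

for vote in (true_vote, narrowed):
    print(vote["A"] / vote["B"], vote["A"] / vote["C"])
# true vote: 2.0 and 10.0; narrowed: ~1.67 and 5.0 -- both proportions change
```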

If you and a lot of other people with similar preferences do that, you could end up with a worse option than you needed to.

Itās actually the exact opposite.
If the candidates that I hate are actually the winning ones, then it means that there is a majority who supported them and if that majority did my reasoning, they would have supported them in a way that would not be beaten.
This is the classic min-max problem which is in fact an effective tactic (in the worst case it doesnāt work, but if it works you can win your favorite candidate who otherwise would have lost).

True designers of voting systems take these things into consideration. If a voting method encourages the use of a certain tactical vote, it is the fault of the designers and not of the voter who uses this strategy to his advantage.

Example:
you have to choose a job among 5, which give you the following salaries (which represent the utility for you):
A: \$20
B: \$100
C: \$200
D: \$1000
E: \$1080
Between A and B there is the same difference as between D and E, but between A and B I would much prefer B over A (5 times more), while between D and E there is little difference.
If you force me to represent these real interests of mine with a range [0,5], my vote will inevitably be this: A[0] B[0] C[1] D[5] E[5]. But in your opinion I should also represent the difference between the worst candidates, so I should vote like this: A[0] B[1] C[2] D[4] E[5], or not?
How would you (intelligently) vote in this context?
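For what it's worth, a plain linear normalization of those utilities onto [0,5] reproduces exactly the vote called "inevitable" above (a sketch; the round-to-nearest step is my assumption):

```python
utilities = {"A": 20, "B": 100, "C": 200, "D": 1000, "E": 1080}

# Linear map from [min utility, max utility] onto the [0,5] score range.
lo, hi = min(utilities.values()), max(utilities.values())
ballot = {c: round((u - lo) * 5 / (hi - lo)) for c, u in utilities.items()}
print(ballot)  # {'A': 0, 'B': 0, 'C': 1, 'D': 5, 'E': 5}
```

So with only six score levels, A and B collapse together under a purely linear mapping; the B[1] vote requires deliberately overweighting the low end.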

First I should note that your example shows us running into the granularity of the 0 - 5 scale. This is a different thing from an arbitrary threshold. It is accepted that there is a tradeoff of granularity with complexity: for purity, we'd love to have 0 - 99 or even more, but that complicates ballots, etc.

We assume the differences average out, so to speak (which is the fundamental assumption of Approval voting, but at least we have 0 - 5).

I would certainly want to differentiate between \$20 and \$100 if the system had enough granularity to allow me to. (especially if I thought it was likely to come down to A and B) It seems to, so I guess I will:
A[0] B[1] C[2] D[4] E[5]

But Iād prefer I had 20x finer granularity, in which case it might be:
A[0] B[3] C[25] D[90] E[99]