# A new(?) STAR variant

I found that the normalization I used in DSV is the same as the one in STLR.
I want to clarify that I didn’t invent the normalization formula in DSV; it’s a very well-known formula in the field of digital images (used, for example, to map colors from [0,255] to [-128,+128]). Specifically, it’s used in “contrast stretching”, a context with many range normalizations.
If you are looking for a name for that normalization, you could use the word “stretch”, saying something like “remove the loser and stretch the range…”

Your formula for increasing contrast in an image throws away data, since everything below middle gray in the original image would become black, because negative values aren’t allowed.

The formula that most matches STLR, though, is to take the brightest pixel in the image and make it white (255), brightening all other pixels by the same factor (through simple multiplication). It doesn’t throw away data, aside from the rounding of floating-point values to integers. If the darkest pixel in the image is a dark gray, it will become a lighter gray.

The formula that most matches the normalization in my implementation of Cardinal Baldwin is to take the brightest pixel in the image and make it white (255), taking the darkest pixel and making it black (0), and adjusting all other pixels accordingly, making some darker and some lighter. It subtracts, then multiplies. It doesn’t throw away data either.

(I’m assuming this is a grayscale image, i.e. black and white)
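The two stretches just described can be sketched like this (a minimal sketch for a grayscale image stored as a flat list of 0–255 values; the function names are my own):

```python
def brighten_to_white(pixels):
    """One-direction stretch (the STLR-like one): scale so the brightest
    pixel becomes 255; every other pixel brightens by the same factor."""
    scale = 255 / max(pixels)
    return [round(p * scale) for p in pixels]

def stretch_full_range(pixels):
    """Min-max stretch (the Cardinal-Baldwin-like one): the darkest pixel
    becomes 0, the brightest 255; subtract first, then multiply."""
    lo, hi = min(pixels), max(pixels)
    scale = 255 / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

img = [64, 128, 192]            # a dark-to-light gray image
print(brighten_to_white(img))   # -> [85, 170, 255]
print(stretch_full_range(img))  # -> [0, 128, 255]
```

Note that in the first version the dark gray 64 only brightens to 85, while the min-max version pushes it all the way down to black.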

> Your formula for increasing contrast in an image throws away data, since everything below middle gray in the original image would become black, because negative values aren’t allowed.

I found the formula I’m talking about, in the area of digital images (under “contrast stretching”), and it is this:

new = (old - min) · (MAX - MIN) / (max - min) + MIN

where [min, max] is the original range and [MIN, MAX] is the target range.

In the field of images it’s used to obtain negative values but not only for that (mine was just an example).
From this formula many of the formulas used to normalize the votes are obtained, including that of the STLR and also of Cardinal Baldwin.

Given a vote like this:
[1,2.5]
In STLR, it is normalized from the range [0,2.5] to [0,5], obtaining [2,5].
In Cardinal Baldwin, from the range [1,2.5] to [0,5], obtaining [0,5].
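The two examples can be checked with a small sketch of the linear range mapping (the function name is mine):

```python
def normalize(scores, old_min, old_max, new_min=0, new_max=5):
    """Linear mapping from [old_min, old_max] to [new_min, new_max]."""
    scale = (new_max - new_min) / (old_max - old_min)
    return [(s - old_min) * scale + new_min for s in scores]

vote = [1, 2.5]
# STLR-style: stretch [0, max] -> [0, 5], with min pinned at 0.
print(normalize(vote, 0, max(vote)))          # -> [2.0, 5.0]
# Cardinal-Baldwin-style: stretch [min, max] -> [0, 5].
print(normalize(vote, min(vote), max(vote)))  # -> [0.0, 5.0]
```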

This formula is just a linear mapping. It appears in many, many fields. This is the one for Baldwin, but the levelling in STLR is different. It can maybe be thought of as a subset of this, since it only stretches in one direction: scores in the levelling can only go up.

STLR stretches from [0,max] to [0,MAX], with max = the maximum vote value and MAX = the maximum range value, but the formula is always the same, with min = 0.
To say that it is a subset just because some values are at 0 doesn’t seem enough to me.
If a method (like STAR) has the concept of “elimination of candidates”, then it is obvious that “any” of the known normalizations can be applied to it, obtaining different results.
This also applies to Sequential loser-elimination (SLE) methods.
The discovery is that one normalization works better than another, and I agree that STLR is better than STAR.

I haven’t ever seen a clear explanation for why the “single direction” normalization of STLR is better. I’m not saying it isn’t, I’d just like to hear an explanation for why it is.

What if the direction were the opposite? I.e., it only lowered scores, not raised them? What would be the difference?


As for me, in such a case:
55%: A[5] B[4] C[0]
45%: A[0] B[4] C[5]
I don’t want A to win.
STAR elects A, STLR elects B.
I don’t care if that case isn’t realistic; I don’t like the philosophy behind it.
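For reference, the STAR count on this profile can be sketched as follows (assuming 100 voters; STAR sums the scores, then holds an automatic runoff between the two highest-scoring candidates; the STLR count isn’t attempted here):

```python
# Each entry: (ballot, number of voters casting it); data layout is mine.
ballots = [({'A': 5, 'B': 4, 'C': 0}, 55),
           ({'A': 0, 'B': 4, 'C': 5}, 45)]

# Scoring round: sum the scores for each candidate.
totals = {c: sum(b[c] * n for b, n in ballots) for c in 'ABC'}
# The top two by score total advance to the automatic runoff.
top2 = sorted(totals, key=totals.get, reverse=True)[:2]

# Runoff: whichever of the two is scored higher on more ballots wins.
x, y = top2
x_pref = sum(n for b, n in ballots if b[x] > b[y])
y_pref = sum(n for b, n in ballots if b[y] > b[x])
winner = x if x_pref > y_pref else y
print(totals, "runoff:", top2, "winner:", winner)
```

B has by far the highest score total (400 vs 275 for A), yet A wins the runoff 55–45, which is exactly the behavior being objected to.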

I was hoping for something beyond “it works better for this one example.” What is the reasoning behind single-direction normalization? And what if it was the other direction?

> I was hoping for something beyond “it works better for this one example.”

That example shows a STAR mechanism that I don’t like: STAR can elect a candidate whose sum of points is very (too) far from the highest sum. STLR mitigates this difference while keeping the positive sides of STAR.

> What is the reasoning behind single-direction normalization? And what if it was the other direction?

I have talked about STAR vs STLR; I have never considered the direction of normalization.

In the simulator, enter this as the strategy: [5,4,3,2,1,0]
This strategy is equivalent to reversing the votes, and what is expected is that the more the inverted votes increase, the more those voters lose by using this strategy (that is, the more votes are inverted, the more the results must change for the worse).
If you observe the behavior of STAR and STLR, you will find that the results of STLR are more sensible (although DV is still better).
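The strategy table above can be read as a per-score substitution; a minimal sketch (the function name is mine):

```python
def apply_strategy(ballot, strategy=(5, 4, 3, 2, 1, 0)):
    """Replace each sincere score s with strategy[s]; the table
    [5,4,3,2,1,0] fully reverses a ballot (s becomes 5 - s)."""
    return [strategy[s] for s in ballot]

print(apply_strategy([5, 4, 0]))  # -> [0, 1, 5]
```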

Similarly, if someone scores the two frontrunners 0 and 1, I don’t think that should outweigh a voter who has scored them 5 and 1. I don’t see why one end is more “important” than the other.

Given [A,B], then:

• if I say [0,1], it means that I like B “infinitely” more than A.
• if I say [1,5], it means that I like B 5 times more than A.

It doesn’t seem too foolish, although I would prefer to impose a maximum proportionality of 5 (given that the range has MAX = 5), so for me the best thing would be:
[0,1] --> [1,5]
[0,2] --> [1,5]
[1,5] --> [1,5]
[2,5] --> [2,5]
but STAR, STLR, etc. don’t work with proportions, so I can also accept:
[0,1] --> [0,5]
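One way to read the preferred mappings above (my interpretation of the examples, not a formula stated here): scale so the top score reaches 5, then lift any 0 to 1 so that no pair of scores exceeds the 5:1 proportion. A sketch:

```python
def cap_proportion(ballot, MAX=5):
    """My reading of the [0,1] -> [1,5] style examples: scale so the
    top score reaches MAX, then lift any 0 to 1 so no pair of scores
    exceeds the MAX:1 proportion. (Function name and rule are mine.)"""
    scale = MAX / max(ballot)
    return [max(round(s * scale), 1) for s in ballot]

for b in ([0, 1], [0, 2], [1, 5], [2, 5]):
    print(b, "->", cap_proportion(b))
```

This reproduces all four listed mappings: the first two become [1,5] and the last two are left unchanged.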

Opposite direction:
[2,5] --> [1,2.5]
or
[2,5] --> [0,?]
here the answer seems less evident to me, if you want 0 at the MIN.

I think that’s the wrong interpretation.

It depends on how the voter thinks. I would give 0 to many candidates below a certain threshold of appreciation, so for me the 0 would be extremely “distant” from the 1.

How a voter thinks is less important than what a voter is rewarded for.

Let’s say all those candidates that are below whatever threshold you have, turn out to be the only ones that are front runners. And let’s say you dislike one of them a lot more than the others.

Your failure to differentiate between them means that your vote is essentially ignored. If you and a lot of other people with similar preferences do that, you could end up with a worse option than you had to.

That’s your choice, but you aren’t voting smartly, and I don’t think the designers of voting systems should put too much effort into getting into your head to determine why you made that choice.

> Let’s say all those candidates that are below whatever threshold you have, turn out to be the only ones that are front runners. And let’s say you dislike one of them a lot more than the others.
> Your failure to differentiate between them means that your vote is essentially ignored.

It’s a fact that if, in a vote, A is given 4 points, then B will need:

• 1 vote with 4 points.
• or 2 votes with 2 points.
• or 4 votes with 1 point.

to tie with A.
These are in all respects proportions.
If I assign 5 points to my favorite candidate A, it means that all the candidates I consider at least 6 times worse than A will get 0 points, to respect the proportions and my interests.
If my real proportions are A[10] B[5] C[1], narrowing them like this: A[5] B[3] C[1] (to make them fall within the range [0,5]) in fact falsifies my true interests.

> If you and a lot of other people with similar preferences do that, you could end up with a worse option than you had to.

It’s actually the exact opposite.
If the candidates that I hate are actually the winning ones, then it means that there is a majority who supported them, and if that majority had applied my reasoning, they would have supported them in a way that could not be beaten.
This is the classic min-max problem, which is in fact an effective tactic (in the worst case it doesn’t work, but if it works you can elect your favorite candidate, who otherwise would have lost).

True designers of voting systems take these things into consideration. If a voting method encourages the use of a certain tactical vote, it is the fault of the designers and not of the voter who uses this strategy to his advantage.

Example:
you have to choose a job among 5, which pay the following salaries (which represent their utility for you):
A: \$20
B: \$100
C: \$200
D: \$1000
E: \$1080
Between A and B there is the same difference as between D and E, but between A and B I would much prefer B over A (5 times more), while between D and E there is little difference.
If you force me to represent these real interests of mine with a range of [0,5], my vote will inevitably be this: A[0] B[0] C[1] D[5] E[5]. But in your opinion I should also represent the difference between the worst candidates, so I should vote like this: A[0] B[1] C[2] D[4] E[5], no?
How would you (intelligent) vote in this context?
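For what it’s worth, a plain linear min-max mapping of these utilities onto [0,5] (a sketch; the function name is mine) reproduces the first ballot above exactly:

```python
def utilities_to_scores(utils, MAX=5):
    """Linear min-max map of raw utilities onto [0, MAX], then round."""
    lo, hi = min(utils), max(utils)
    return [round(MAX * (u - lo) / (hi - lo)) for u in utils]

salaries = [20, 100, 200, 1000, 1080]
print(utilities_to_scores(salaries))      # -> [0, 0, 1, 5, 5]
print(utilities_to_scores(salaries, 99))  # with finer granularity
```

So on a 0–5 scale even the linear map collapses A and B into 0; the disagreement only becomes visible with finer granularity.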

First, I should note that your example shows us running into the granularity limit of the 0–5 scale. This is a different issue from an arbitrary threshold. It is accepted that there is a tradeoff of granularity against complexity: for purity, we’d love to have 0–99 or even more, but that complicates ballots, etc.

We assume the differences average out, so to speak (which is the fundamental assumption of Approval voting, but at least we have 0 - 5).

I would certainly want to differentiate between \$20 and \$100 if the system had enough granularity to allow me to (especially if I thought it was likely to come down to A and B). It seems it does, so I guess I will:
A[0] B[1] C[2] D[4] E[5]

But I’d prefer to have 20x finer granularity, in which case it might be:
A[0] B[3] C[25] D[90] E[99]

Voting like this in SV: A[0] B[1] C[2] D[4] E[5], in that context, doesn’t make sense to me.
For me the threshold is proportional; in fact, \$20 and \$100 are so low compared to \$1000 that with a range of [0,5] I would not vote for them, and even if I score them more granularly, the fact remains that I give them very few points (almost useless).