A new(?) STAR variant

As for me, in such a case:
55%: A[5] B[4] C[0]
45%: A[0] B[4] C[5]
I don’t want A to win.
STAR elects A; STLR elects B.
I don’t care whether that case is realistic; I don’t like the philosophy behind it.
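
For concreteness, a minimal sketch that checks the STAR outcome on this example (my own illustration; STLR is left out because its runoff step isn’t spelled out in this thread):

```python
# Ballots from the example above: (number of voters, {candidate: score})
ballots = [
    (55, {"A": 5, "B": 4, "C": 0}),
    (45, {"A": 0, "B": 4, "C": 5}),
]

# Scoring round: total points per candidate.
totals = {c: sum(n * scores[c] for n, scores in ballots) for c in "ABC"}
# totals == {'A': 275, 'B': 400, 'C': 225}

# The two highest-scoring candidates go to the automatic runoff.
finalists = sorted(totals, key=totals.get, reverse=True)[:2]  # ['B', 'A']

# Runoff: each ballot counts for the finalist it scores higher.
runoff = dict.fromkeys(finalists, 0)
for n, scores in ballots:
    x, y = finalists
    if scores[x] != scores[y]:
        runoff[x if scores[x] > scores[y] else y] += n

print(runoff)  # {'B': 45, 'A': 55} -> A wins under STAR despite B's far higher sum
```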

I was hoping for something beyond “it works better for this one example.” What is the reasoning behind single-direction normalization? And what if it went in the other direction?

I was hoping for something beyond “it works better for this one example.”

That example shows a STAR mechanism that I don’t like: STAR can elect a candidate whose sum of points is very (too) far from the highest sum. STLR mitigates this while keeping the positive sides of STAR.

What is the reasoning behind single-direction normalization? And what if it went in the other direction?

I have talked about STAR vs STLR; I have never considered the direction of normalization.

In the simulator, enter this as the strategy: [5,4,3,2,1,0]
This strategy is equivalent to reversing the votes, and the expectation is that as the number of inverted votes increases, the voters using this strategy lose more (that is, the more votes are inverted, the more the results should change for the worse).
If you observe the behavior of STAR and STLR, you will find that the results of STLR are more sensible (although DV is still better).
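
For clarity, here is my reading of how the simulator applies that strategy list (a sketch; the actual internals may differ): the list is indexed by the honest score, so [5,4,3,2,1,0] sends 0→5, 1→4, …, 5→0, a full reversal.

```python
STRATEGY = [5, 4, 3, 2, 1, 0]  # position i holds the score cast for honest score i

def apply_strategy(honest_ballot: list[int]) -> list[int]:
    """Replace each honest score with the strategy entry at that index.
    With [5,4,3,2,1,0] this is a full reversal: score s becomes 5 - s."""
    return [STRATEGY[s] for s in honest_ballot]

print(apply_strategy([5, 4, 0]))  # [0, 1, 5]: the ballot is inverted
```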

Similarly, if someone scores the two frontrunners 0 and 1, I don’t think that should outweigh a voter who has scored them 5 and 1. I don’t see why one end is more “important” than the other.

Given [A,B], then:

  • if I say [0,1], it means that I like B “infinitely” more than A.
  • if I say [1,5], it means that I like B 5 times more than A.

It doesn’t seem too foolish, although I would prefer to impose a maximum proportion of 5 (given that the range has MAX = 5), so for me the best thing would be the following (see the code sketch after these mappings):
[0,1] --> [1,5]
[0,2] --> [1,5]
[1,5] --> [1,5]
[2,5] --> [2,5]
but STAR, STLR, etc. don’t work with proportions, so I can also accept:
[0,1] --> [0,5]

Opposite direction:
[2,5] --> [1,2.5]
or
[2,5] --> [0,?]
Here the answer seems less evident to me, if you want the MIN to be 0.
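
Here is a minimal sketch of the capped-ratio rule as I read the mappings above (the function name and the two steps, rescale then raise the minimum, are my own reconstruction):

```python
MAX = 5  # top of the score range

def cap_ratio(lo: float, hi: float) -> tuple[float, float]:
    """Rescale a two-candidate ballot so the higher score is MAX, then
    raise the lower score until the ratio hi/lo is at most MAX."""
    scale = MAX / hi            # assumes hi > 0
    lo, hi = lo * scale, hi * scale
    return max(lo, hi / MAX), hi

for pair in [(0, 1), (0, 2), (1, 5), (2, 5)]:
    print(pair, "-->", cap_ratio(*pair))
# (0, 1) --> (1.0, 5.0)
# (0, 2) --> (1.0, 5.0)
# (1, 5) --> (1.0, 5.0)
# (2, 5) --> (2.0, 5.0)
```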

I think that’s the wrong interpretation.

It depends on how the voter thinks. I would give 0 to many candidates below a certain threshold of appreciation, so for me the 0 would be extremely “distant” from the 1.

How a voter thinks is less important than “what is a voter rewarded for.”

Let’s say all those candidates that are below whatever threshold you have turn out to be the only frontrunners. And let’s say you dislike one of them a lot more than the others.

Your failure to differentiate between them means that your vote is essentially ignored. If you and a lot of other people with similar preferences do that, you could end up with a worse option than you needed to.

That’s your choice, but you aren’t voting smartly, and I don’t think the designers of voting systems should put too much effort into getting inside your head to determine why you made that choice.

Let’s say all those candidates that are below whatever threshold you have turn out to be the only frontrunners. And let’s say you dislike one of them a lot more than the others.
Your failure to differentiate between them means that your vote is essentially ignored.

It’s a fact that, if one ballot gives A 4 points, then to tie with A, B will need:

  • 1 ballot with 4 points,
  • or 2 ballots with 2 points,
  • or 4 ballots with 1 point.

These are in all respects proportions.
If I assign 5 points to my favorite candidate A, it means that all the candidates I consider at least 6 times worse than A must get 0 points, to respect the proportions and my interests.
If my real proportions are A[10] B[5] C[1], narrowing them like this: A[5] B[3] C[1] (to make them fall within the range [0,5]) in fact means falsifying my true interests.
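
A small illustration of that distortion (my own sketch; I assume plain rescaling to [0,5] with round-half-up):

```python
honest = {"A": 10, "B": 5, "C": 1}
MAX = 5

scale = MAX / max(honest.values())                                # 0.5
narrowed = {c: int(u * scale + 0.5) for c, u in honest.items()}   # round half up
print(narrowed)  # {'A': 5, 'B': 3, 'C': 1}

# The proportions are falsified: honest A:C is 10:1 but narrowed A:C is 5:1,
# and honest A:B is 2:1 but narrowed A:B is 5:3.
```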

If you and a lot of other people with similar preferences do that, you could end up with a worse option than you needed to.

It’s actually the exact opposite.
If the candidates that I hate are actually the winning ones, it means there is a majority who supported them, and if that majority reasoned as I do, they would have supported them in a way that could not be beaten.
This is the classic min-max tactic, which is in fact effective (in the worst case it does nothing, but when it works it can elect your favorite candidate, who would otherwise have lost).

True designers of voting systems take these things into consideration. If a voting method encourages a certain tactical vote, it is the fault of the designers, not of the voter who uses that strategy to his advantage.

Example:
You have to choose a job among five, which give you the following salaries (which represent their utility for you):
A: $20
B: $100
C: $200
D: $1000
E: $1080
The difference between A and B is the same as between D and E ($80), but between A and B I would strongly prefer B (5 times more), while between D and E there is little difference.
If you force me to represent these real interests of mine with the range [0,5], my vote will inevitably be this: A[0] B[0] C[1] D[5] E[5]. But in your opinion I should also represent the difference between the worst candidates, so I should vote like this: A[0] B[1] C[2] D[4] E[5], no?
How would you (intelligently) vote in this context?
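
For the record, that first ballot follows from plain proportional scaling (a sketch; round-half-up assumed):

```python
salaries = {"A": 20, "B": 100, "C": 200, "D": 1000, "E": 1080}
MAX = 5

# Proportional score: MAX * utility / best_utility, rounded half up.
best = max(salaries.values())
ballot = {c: int(MAX * u / best + 0.5) for c, u in salaries.items()}
print(ballot)  # {'A': 0, 'B': 0, 'C': 1, 'D': 5, 'E': 5}
```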

First I should note that your example runs into the granularity of the 0 - 5 scale. This is a different thing from an arbitrary threshold. It is accepted that there is a tradeoff of granularity against complexity: for purity, we’d love to have 0 - 99 or even more, but that complicates ballots, etc.

We assume the differences average out, so to speak (which is the fundamental assumption of Approval voting, but at least we have 0 - 5).

I would certainly want to differentiate between $20 and $100 if the system had enough granularity to allow me to (especially if I thought it was likely to come down to A and B). It seems to, so I guess I will:
A[0] B[1] C[2] D[4] E[5]

But I’d prefer to have 20x finer granularity, in which case it might be:
A[0] B[3] C[25] D[90] E[99]

Voting like this in SV: A[0] B[1] C[2] D[4] E[5] doesn’t make sense to me in that context.
For me the threshold is proportional: $20 and $100 are so low compared to $1000 that with a range of [0,5] I would not vote for them, and even if I score them more granularly, the fact remains that I give them very few points (almost useless).

Using the range [0,100], however, I don’t understand your vote. I would vote roughly this: A[0] B[10] C[20] D[98] E[100]
Looking at your vote, are you aware that giving 3 out of 100 points to candidate B is practically useless?
In general, all of this seems very (too) ambiguous to me. Our ways of reasoning with the range [0,100] are both sensible in their own way, but they are also very different, yet we (theoretically) have the same interests.
From this point of view, rankings are far better (they would not have any of these ambiguities).

The point is that, if there is an ambiguity problem, saying it is the fault of the voters who (in your opinion) vote unintelligently does not solve the problem (which, however, is solved by other kinds of ballots, such as rankings).
Anyone who designs a voting system must think about these things.

When we are in game-theory land, this isn’t opinion, it is math. Normally we use terms such as “rational self-interest” rather than “intelligence,” but still. All we are talking about is whether voters’ actions result in a better or worse outcome according to their preferences. This isn’t a matter of opinion.

Ok, and in that area it seems to me that using a threshold below which 0 is assigned to all candidates, and one above which very high scores are assigned, works very well on average as a strategy.
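
To make that concrete, one common formalization of such a threshold (my assumption; the exact rule isn’t specified above) is to max-score every candidate you like more than the average expected frontrunner and zero the rest:

```python
def threshold_ballot(utils: dict[str, float],
                     frontrunners: list[str],
                     max_score: int = 5) -> dict[str, int]:
    """Approval-style threshold strategy: give max_score to candidates
    above the average utility of the expected frontrunners, 0 to the rest."""
    pivot = sum(utils[c] for c in frontrunners) / len(frontrunners)
    return {c: max_score if u > pivot else 0 for c, u in utils.items()}

utils = {"A": 20, "B": 100, "C": 200, "D": 1000, "E": 1080}
print(threshold_ballot(utils, ["C", "D"]))
# {'A': 0, 'B': 0, 'C': 0, 'D': 5, 'E': 5}
```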

The ambiguity I’m talking about consists in the fact that it is not easy to work out the best way to vote, given certain honest interests (while in other methods, such as those that use rankings, the best way to vote is clear and the same for everyone, given certain interests).
I don’t want to advocate rankings; I just want to point out that there is a problem to be solved.

This is based on an incorrect assumption from the start. Wealth is not proportional to utility. If I have nothing and someone gives me a million dollars, then I have a much bigger gain in utility than someone with a million dollars doubling their wealth. The scores would logically reflect that.

In your example, it makes perfect sense not to give the same score difference between A and B as between D and E.
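
As a toy illustration of that point (my own sketch; logarithmic utility of money is a standard, though by no means unique, model of diminishing returns):

```python
import math

salaries = {"A": 20, "B": 100, "C": 200, "D": 1000, "E": 1080}

# Log utility: doubling your money adds a constant amount of utility,
# no matter where you start, so the top of the scale gets compressed.
u = {c: math.log(s) for c, s in salaries.items()}
lo, hi = min(u.values()), max(u.values())
ballot = {c: int(5 * (v - lo) / (hi - lo) + 0.5) for c, v in u.items()}
print(ballot)  # {'A': 0, 'B': 2, 'C': 3, 'D': 5, 'E': 5}

# Note the D-E gap collapses while the A-B gap widens, matching the point
# that equal dollar differences need not mean equal score differences.
```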

I also still wonder whether you have read this short excerpt from Wikipedia.

Wikipedia:

One cannot conclude, however, that the cup of tea is two thirds of the goodness of the cup of juice, because this conclusion would depend not only on magnitudes of utility differences, but also on the “zero” of utility.

Okay, and a voter can have a subjective “zero utility” to use to derive the proportions.

In politics, however, I think the “zero utility” can be taken to be zero. That is:
a voter wants a candidate to invest money in reducing pollution; this voter can use proportions by putting the “zero utility” at $0 (which is the minimum investment).
He could also include the concept of “disapproval” for candidates who want to invest in things that increase pollution.

[0, 5] would be treated the same as [0, 1].

Given a vote like this:
A[0] B[1] C[2] D[4] E[5]
B is as far from C, for me, as C is from D (both pairs are in the 1:2 ratio).
The 0 in the proportions is always problematic; it would be better to start the ratings at 1, or create an ad hoc system like TM.

A lot of people think like you about this stuff. It is based on intuition, and it is a very common intuition.

But there is a reason that economics and game theory have completely, 100% discarded this way of thinking. The more you analyze it, the more you try to work it into the math, the more you approach it with formality and rigor, the less it makes sense.

Your pollution example should make it obvious. So a candidate “invests in things that increase pollution,” and that makes them a zero to you. Now a candidate comes along who not only invests in things that increase pollution, but is a convicted and unremorseful baby rapist. While that is an obviously over-the-top example, hopefully you can imagine someone who is a polluter but also has some other negative (to you) quality the other polluter does not. Presumably you care about more issues than pollution, right?

You have defined away your ability to express a difference in preference between two candidates that are below some arbitrary threshold. That doesn’t make sense, and is not rational, under any formal analysis.

I’ll admit I’m at a loss as to why you think that the juice example makes sense but politics is an exception. You can come up with hypothetical extremes for both (such as comparing orange juice to radioactive waste), etc. “Yes, but pollution!” is not helping your argument.

And I should also note that you are confusing psychology and rationality. Math can’t really touch the former. There are people who might wish to score their candidates based on numerology or based on rolling dice. No one is stopping them from doing so. Voting based on an arbitrary threshold is similarly not rational. You can explain it by psychology, maybe, but fields such as game theory very carefully distance themselves from that kind of psychology, for good reason.

I really think you should spend some time studying game theory. This goes into some detail about these issues: https://plato.stanford.edu/entries/game-theory/

Look for " receptive to the efforts of the economist Paul Samuelson ([1938]) to redefine utility in such a way that it becomes a purely technical concept rather than one rooted in speculative psychology."
It goes on with more stuff along the same lines:

Economists and others who interpret game theory in terms of RPT should not think of game theory as in any way an empirical account of the motivations of some flesh-and-blood actors (such as actual people). Rather, they should regard game theory as part of the body of mathematics that is used to model those entities (which might or might not literally exist) who consistently select elements from mutually exclusive action sets, resulting in patterns of choices, which, allowing for some stochasticity and noise, can be statistically modeled as maximization of utility functions.

Most of your arguments seem to indicate you aren’t overly familiar with this entire way of thinking.
