Allocated KP vs SSS

I am trying to understand the difference between allocation and score spending. The relevant allocation system is standard allocation after the KP transform. For score spending, the system is Sequentially Spent Score (SSS).

Let's run some scenarios and ignore surplus handling so we can look at a single ballot. A useful ballot for a max-score-5 system is:
A:1 B:3 C:4 D:0

The KP-Transform turns this into 5 approval ballots.

A B C D
1 1 1 1 0
2 0 1 1 0
3 0 1 1 0
4 0 0 1 0
5 0 0 0 0
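The transform above can be sketched in a few lines. This is a minimal illustration, not any official implementation; the function name and data layout are my own.

```python
# The KP transform: a score ballot on a 0..MAX_SCORE scale becomes
# MAX_SCORE approval ballots, where approval ballot k approves a
# candidate iff the voter scored that candidate at least k.

MAX_SCORE = 5

def kp_transform(ballot):
    """Split one score ballot into MAX_SCORE approval ballots."""
    return [
        {cand: 1 if score >= level else 0 for cand, score in ballot.items()}
        for level in range(1, MAX_SCORE + 1)
    ]

ballot = {"A": 1, "B": 3, "C": 4, "D": 0}
for level, approvals in enumerate(kp_transform(ballot), start=1):
    print(level, approvals)
```

Running this reproduces the five-row table above.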

If D is elected
The ballot stays the same under both Allocated KP and SSS

If C is elected
In SSS, 4 is subtracted from the 5 points to spend, and the ballot goes to A:1 B:1 C:N/A D:0
In Allocated KP, ballots 1-4 are exhausted and the 5th does nothing, so the voter is done

If B is elected
In SSS, 3 is subtracted from the 5 points to spend, and the ballot goes to A:1 B:N/A C:2 D:0
In Allocated KP, ballots 1-3 are exhausted, leaving

A B C D
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
4 0 0 1 0
5 0 0 0 0
which is equivalent to A:0 B:N/A C:1 D:0, which differs from SSS

If A is elected
In SSS, 1 is subtracted from the 5 points to spend, and the ballot goes to A:N/A B:3 C:4 D:0
In Allocated KP, ballot 1 is exhausted, leaving

A B C D
1 0 0 0 0
2 0 1 1 0
3 0 1 1 0
4 0 0 1 0
5 0 0 0 0
which is equivalent to A:N/A B:2 C:3 D:0, which differs from SSS

There is clearly a difference between spending and allocation.

The best way to summarize the difference: for each remaining score B on a ballot, Allocated KP subtracts the score given to the winner, S, giving max(B − S, 0). SSS instead takes the max score, M, into account and follows the formula min(M − S, B).
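The four scenarios above can be checked against these two formulas directly. This is a sketch that assumes the summary formulas are accurate; it ignores surplus handling just as the scenarios do, and the function names are my own.

```python
# Compare the two reweighting rules on one ballot, assuming the
# summary formulas: after electing a winner to whom the ballot gave
# score S (max score M), each remaining score B becomes
#   Allocated KP: max(B - S, 0)      SSS: min(M - S, B)

M = 5  # max score

def allocated_kp(ballot, winner):
    s = ballot[winner]
    return {c: max(b - s, 0) for c, b in ballot.items() if c != winner}

def sss(ballot, winner):
    s = ballot[winner]
    return {c: min(M - s, b) for c, b in ballot.items() if c != winner}

ballot = {"A": 1, "B": 3, "C": 4, "D": 0}
for w in "ABCD":
    print(w, "elected ->", "AKP:", allocated_kp(ballot, w), "SSS:", sss(ballot, w))
```

For D (score 0) the two rules agree; for A, B, and C they diverge exactly as in the scenarios above.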

The question is which is better. SSS was designed to follow the concept of Vote Unitarity, which as its inventor I clearly favour. What are other people's opinions?

If the formulas above are accurate, we could define a new system that works like Allocated KP but applies the B − S rule at the original score level, with no need for the KP transform. This would likely be simpler to explain to the public.

Is there a real difference in surplus handling? The example above ignored it so that I could explain more clearly. I think the ballots are scaled differently there too, since in SSS the whole ballot is handled as one unit while in Allocated KP each score point is handled independently.


I prefer allocated KP because it feels less arbitrary for two reasons:

  1. SSS doesn’t actually use weight, and its capping power feels arbitrary to me.

  2. Even if we used the scaling version of SSS, where each ballot’s weight works the way you’d expect, the rule that the voter weight allocated to a candidate each round only equals a quota of weight when the sum of score given to that candidate exceeds a quota also feels arbitrary to me. If you’re going to base your method on this spending concept, then shouldn’t each unit of score spent on a candidate always cost an amount of ballot weight such that the total weight spent by all voters is always a quota? I don’t think it makes sense for each unit of score to cost 1/max_score weight since 1. when reducing weight linearly it makes more sense to make supporting more popular candidates cost more, and 2. see footnote.

When you get rid of these two (what I view as) flaws, what you get is the proportionally allocated weight method that I suggested.

Probably not because of surplus handling

I think there is.

Footnote for the 2nd thing I’m not a huge fan of about SSS: using this metric gives the method bipolar disorder where at some point it discretely switches from one behavior to another. I dislike discrete switch-points from one behavior to another because you can visualize how arbitrary they look on diagrams like this. Methods that have discrete switch-points (when which candidate is eliminated in a given round switches from one candidate to another, when which candidates make a runoff switches, or in the case of Bucklin, whether or not everyone’s next ranking has to be counted) end up with arbitrary sharp corners between the regions where one candidate wins and where another candidate wins. Proportional sequential methods will always have at least some switch-points, since which candidate wins a given round is a switch-point; however, additional switch-points are unnecessary, and in my view they make the imperfections in a voting method stand out more, so I feel it’s better to avoid them when there isn’t a strong reason for having them. This is one of the interesting things about proportionally allocated weight: its only switch-points are when the only voters supporting a candidate don’t have a quota of weight, plus the switch-points forced on the method by being sequential.

I forgot you’re colorblind (I assume red-green because that’s the most common), so perhaps download them and use Microsoft Paint’s paint-bucket tool to recolor the different regions to colors you can distinguish if you want to see what I mean (though if you’re red-green colorblind you should still be able to see the blue regions). Or you could just take my word that these switch-points don’t look right.

It is not. The capping power is set to align with Perfect Representation. I would think you would like this since it is a very Monroe way of thinking about it.

You only think of it that way because you are thinking like Thiele, not as if you are spending. Say you want a burger, hotdog, fries, ice cream and a coke. They cost $4, $3, $2, $2 and $1, but you only have $5. You think those prices are reasonable. If you buy the hotdog for $3, you have $2 left. You would be willing to spend your $2 on the burger if you could get it for $2, but you likely can’t. This does not mean that you would now only be willing to pay $1 for the fries or ice cream: if you thought the original price was reasonable, then you would still be willing to pay it. You should not scale down the amount of utility you would gain from them. This is intended to act like spending. The amount you are willing to spend on one alternative should not depend on who has already been elected. This is sort of an extension of the Morgenbesser way of explaining IIA. See this for details.

If no group of voters is willing to buy a candidate for a quota, then the cost lowers until some group can afford a candidate. I think of this like an auction. The opening bid is not always met. In this case the auctioneer lowers the price until somebody bids. There is no arbitrary part, and it seems to fit the idea of spending.

I understand the idea of taking a whole quota of ballots and charging the most to the people who “bid” the most. I actually quite like Sequential Monroe for this idea. I still get stuck on somebody giving a 2 to a candidate and it costing them their whole ballot. They clearly stated they were only willing to spend 2/5 of their ballot on this person.

Maybe there is a system that fixes these issues and is an all-around better system. We all know there is no perfect system. I think SSS is internally self-consistent and encourages voters to vote in a way consistent with how it handles the votes. There are other options, as I have pointed out before:

  1. Sequentially Shrinking Quota
  2. Sequential Phragmen
  3. Sequential Ebert

Personally I think Sequentially Shrinking Quota looks the best, but it does not really solve the issue of people not having to spend the same amount on each winner. It’s worth noting that RRV does not have equal cost per winner either. This idea is really a Phragmen idea.

You mean by this that the quality metric is not smooth. Yes, Warren and I had long discussions about this. He thought that he had discovered the largest class of good optimal systems, but he was wrong: he had assumed the function was smooth. He rather liked the discovery.

You know I am a big fan of qualitative visualization; I thought you were going to do some multimember visualizations, and I would like to see them. I can distinguish between colours pretty much as well as you; I would just group them into different named categories, which makes communication hard. If the colours are distinct I think I would be OK. These switching points would be hard colour-change edges.

So you prefer Allocated KP to SSS then? hmmmm Do you think Sequential Monroe Voting is superior to both?

I’m not a huge fan of the Monroe way of thinking. I think it has a positive in that it’s a lot more intuitive to voters (it’s easier to explain how methods like RRV work but not why they work), but that doesn’t mean I think it’s right. I think there probably exist some instances where violating ‘perfect’ representation is actually the more correct behavior.

I like it a lot too.

I don’t think this is an issue.

RRV doesn’t decrease weight linearly either, so this spending analogy doesn’t apply to it. When you decrease weight linearly, at some point you have to make giving score to unpopular candidates cost more than giving score to popular candidates. If you don’t decrease weight linearly then this no longer has to be the case.

I’m working on it, though I’ve been very busy these past few weeks. I’ll put it on GitHub and send you a link if you would like to help. The approval cases of most of the optimal methods are done, though, so I could show those separately.

I still prefer SMV, though perhaps I am a bit biased because I made it. Though I also prefer RRV to all three.

I am too busy to help at the moment.

Webster RRV with KP is starting to win me over. I still prefer SSS for a few reasons though, not the least of which is that it is likely more viable with the public.


I presume this should be “If B is elected”.

This is interesting. I’ve always used KP to make methods scale invariant, so it makes sense to me that SSS would take into account the max score, whereas Allocated KP wouldn’t. Which makes more sense for this type of method is another question though. But my hunch has always been that KP is a good catch-all way to turn an approval method into a score one, and is generally better than using raw scores. Not that that is really an argument though.

To me the only allocation system which makes sense is Sequential Monroe. Its selection rule is closely related to its reweighting. Using that reweighting with a different selection seems unfair, since those who give partial scores may lose their full ballot without full endorsement. While the KP transform does fix that, it seems to just turn it into an imperfect version of SSS.

Basically there are two different underpinnings to what we ask the voters to think about when we ask them to vote. In Sequential Monroe you are asking “how much would you like your whole ballot to be grouped with other voters for each candidate?”. This is how Monroe thought of PR. It is a very different question from “how much ballot weight are you willing to spend on getting each candidate elected?”, which is the underpinning of Unitary methods like SSS. I have always thought of Unitary as sort of an extra requirement on the Monroe philosophy, but the question we are posing to voters is quite different even if the math is sort of similar. I would think it is best to make this distinction firm in the theory, because it makes Sequential Monroe’s full allocation of full ballots for partial support totally fine, and SSS’s not spending as much ballot as possible totally fine too. The other theories are Thiele and Phragmen. Thiele is asking the voters to rate the candidates, which is again a very different way to think about it. If you apply the thinking of one PR philosophy to a system from another PR philosophy it will not really make sense. Systems need to be viewed under the correct lens. I think this means that the eventual proposal would have one system per philosophy, each tuned for that philosophy. This would take the philosophy out of it. Does this make sense @parker_friedland ?

Yup.

Also
this
20
character
rule
is
very
annoying
.

:roll_eyes:

Makes sense.


I tried my best to get this expressed here. You wrote the original version, so would you be able to do a quick pass? I think the comparison section needs a fair bit of a rewrite.

Does optimal SSS reduce to score in the single-winner case? That could affect my view of it as a system. (Yes = good.)

Just looking at that, I’m not sure I agree with the Thiele bit:

Under the Thiele interpretation, every voter has an honest utility for each candidate, and even if you completely resent a candidate, it is statistically impossible for your honest utility of any individual candidate to equal 0 exactly. Under this interpretation, the more an outcome maximizes the sum among all voters of ln(the sum of utilities that voter gave to each winner), the more proportional it is. While voters can’t choose their honest utilities, they can choose the scores they give to candidates, which means it is much more likely that a voter will give a set of candidates all zero scores, which would blow up the natural log function (see footnote). To counteract this, most Thiele voting methods instead use the partial sums of the harmonic series, which are closely related to the natural log (the natural log is the integral of 1/t from t=1 to t=x, and the partial sums of the harmonic series are the summation of 1/n from n=1 to n=x).

As I look at it, Thiele is just a method that uses the harmonic function. The fact that it is close to logarithms doesn’t mean that it would somehow “prefer” logarithms but just can’t use them because they would blow up to infinity. Log utility and harmonic sums are different things. Thiele is harmonic sums.

Of course it does.

The Optimal version of SSS is

When W = 1, the sum in the denominator is at most V, so the max is always 1. This means the whole denominator can be removed. The sum inside the min() is always less than 1, so the min reduces to S_vw for each candidate. The quality function then reduces to the sum over the voters for each candidate, i.e. the score winner.

SSS was not designed to work as an optimal system, so this is not really one I would propose. This system can produce multiple winner sets which maximize the utility. Warren and I have talked about how to resolve ties in this case.

@parker_friedland wrote it. While I agree with your criticism, I do not think that is the major problem. Just following a function is not a philosophy. How is the voter intended to think about the score they give each candidate? It seems to lack a deep explanation of the motivation. The other 3 have intuitive non-mathematical explanations. Without that, how else can we explain how it is motivated, or even that it is something like PR?

I would have thought the thinking for the voter would be the same in every case. Rate the candidates as you see fit (and presumably in terms of utility if you want to get any deeper), and the voting system does the rest.

Sure, you can look at a system in terms of how much of your vote do you want to spend on this candidate etc., but I’m not sure that’s really how the voter would or should be thinking. It’s getting more into the mechanics of the system.

I don’t remember writing it. I think you got that from Warren.

Edit: I thought you were referring to the unitary quality function

Even if there is no difference in the score they give the philosophy should be written in a clear way.

You wrote it here. Thiele vs. Phragmen/Monroe: Two very different interpretations of proportionality

I just put it on the wiki. Maybe you got it from warren.

Oh sorry, I thought you were talking about your unitary quality function (which I now remember you wrote; in that email thread Warren explained that it wasn’t monotonic). I did write the Thiele vs Monroe/Phrag post.

@Toby_Pereira what is it that you don’t agree with about the Thiele bit of this

He said this

I think he is correct. However, it is not harmonic sums for Webster. There should be an intuitive explanation for this type of reweighting. Just saying “you reweight by some formula and it works” is not a compelling argument.

It’s partial sums of 1/(Δ + n) for both. Another way to express it is with the digamma function: in Jefferson it’s Ψ(1 + n) + γ and in Webster it’s 0.5*Ψ(0.5 + n) + γ (or disregard the γ’s and the 0.5 and instead start at the much weirder value of −γ). What I meant by it being logarithmic is that the only functions that work become the logarithmic function as the number of seats approaches infinity. In the approval-ballot version of any Thiele method, when you have infinite seats (and each candidate is cloned infinitely many times), the quality function for each voter becomes ln(p), where p is the percentage of the candidates in the legislature that voter approved of. So in a way, all Thiele-based approval quality functions are logarithmic.
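The claimed closeness of the harmonic partial sums to the natural log is easy to check numerically: H(n) − ln(n) converges to the Euler–Mascheroni constant γ. A quick sketch (my own code, just illustrating the standard identity):

```python
# Numerical check: partial sums of the harmonic series track ln(n) + γ,
# which is why a Jefferson/Thiele quality function behaves
# logarithmically as the number of seats grows.

import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic(n):
    """Partial sum 1/1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 10000):
    # The difference approaches γ ≈ 0.5772 (error is about 1/(2n)).
    print(n, harmonic(n) - math.log(n))
```

The error shrinks like 1/(2n), so for large legislatures the harmonic quality function and ln differ only by a constant offset.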

OK, that’s all well and good, but how do we motivate ln(x) philosophically? “It works” is not going to cut it with the average person.

Watch Aaron get exactly that question at minute 32 of this. https://youtu.be/tULyNuBzIjA?t=1924

We need to be able to do better. How do we answer “Why not 1/(Δ + n^2)?”? If you go on about it being ln(x) in some limit, that only shifts the question to “Why ln(x) and not sqrt(x)?”. There is a real question of viability here. STV has a very simple interpretation since it is Monroe. STV supporters have been pushing the narrative that RRV is not PR for a while.