Codepens as a tool for this forum

So I’ve been thinking about ways we can use Codepens (or JSFiddles, which are basically the same thing) to enhance our discussions on this forum (and the new forum to come).

For those who don’t know, Codepen is a website / web app where you can create combinations of JavaScript, HTML, and CSS, and make little sharable widgets and the like. You can very easily share them with others, who can then modify them and reshare them, etc. You don’t need to be a programmer for this to be useful, especially for the kind of stuff we talk about in this forum.

I made three Codepens as starting points. The second one uses code from the first one (they can reference each other, which is super cool), and the third one uses code from the first two.

These are made to take Score/STAR “ballot descriptions” in this general format:
25: A[5] B[4] C[0]
12: A[0] B[4] C[5]
and read them into JavaScript and do useful stuff with them. So when people post example elections to show a strength or a flaw or what have you, anyone can paste them in and start experimenting without having to do all the math and such in their head.
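
To give a sense of how little code this takes, here is a rough sketch of parsing that format into JavaScript. It is not the code from the pens themselves; the function name and the ballot object shape are made up for illustration.

```js
// Sketch of parsing ballot descriptions like "25: A[5] B[4] C[0]".
function parseBallots(text) {
  const ballots = [];
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed) continue;
    // Optional leading count, e.g. "25:" (defaults to 1)
    const countMatch = trimmed.match(/^(\d+)\s*:/);
    const count = countMatch ? parseInt(countMatch[1], 10) : 1;
    // Candidate scores, e.g. "A[5]"
    const scores = {};
    for (const m of trimmed.matchAll(/([A-Za-z]\w*)\[(\d+(?:\.\d+)?)\]/g)) {
      scores[m[1]] = parseFloat(m[2]);
    }
    ballots.push({ count, scores });
  }
  return ballots;
}

// parseBallots("25: A[5] B[4] C[0]\n12: A[0] B[4] C[5]")
// => [ { count: 25, scores: { A: 5, B: 4, C: 0 } },
//      { count: 12, scores: { A: 0, B: 4, C: 5 } } ]
```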

This video explains these particular pens, and hopefully is enough for people to get an idea of the potential. I expect to do a lot more with these as time goes on, and I’m sure hoping that others do so as well. I expect to create a page where they are all listed (not just mine of course).

Here are the actual codepens shown in the video:

https://codepen.io/karmatics/pen/MWKzXBE

https://codepen.io/karmatics/pen/MWKzXPE

https://codepen.io/karmatics/pen/KKVrejX

For the output of the 2nd and 3rd pens, I recommend you format it like this:
A,B,C
5,4,0
5,4,0
5,4,0

instead of:
A[5] B[4] C[0]
A[5] B[4] C[0]
A[5] B[4] C[0]

It’s more concise and can be easily copied to spreadsheets.

Just for the results?

I can easily support that as well (or you know, anyone can), but for starters I figured I’d support what people are already using in the forum, and do it in a way that allows round-tripping it. So if I did it that way I’d make sure it supports showing the count, and also make sure I do it for parsing (i.e. input).

Have you been using spreadsheets with this stuff? The more you can tell me your use case, the more I can adapt stuff to make it flexible.

This is more about hinting at the potential of the tools than making something for a very specific use case, though. The initial use case I was thinking about is when someone says “consider the problem highlighted by this set of ballots” and I want to experiment with or demonstrate various perturbations, so we can better see if it is a real problem or just a very unnaturally contrived set of ballots. That’s something I have seen over and over again, and I am frustrated with the tedium of analyzing the issues brought forward in these sorts of posts.

Ultimately, though, regular uses of Codepens, with some shared libraries of code, could be a huge benefit to this forum in a ton of ways.

Do you know any JavaScript?

Here’s one that does spreadsheet-compatible output. It’s a tiny change. I did have to split out a separate function, getCandidateList(), which looks at a list of ballots and gets you a nice array of candidate names in alphabetical order; it’s now used in several places.

https://codepen.io/karmatics/pen/jOWQgBJ
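
For anyone curious, the shape of that change is roughly this. It’s only a sketch, not the pen’s exact code, and the ballot object shape it assumes ({ count, scores }) is just for illustration.

```js
// Collect every candidate name that appears on any ballot, alphabetized.
function getCandidateList(ballots) {
  const names = new Set();
  for (const ballot of ballots) {
    for (const name of Object.keys(ballot.scores)) names.add(name);
  }
  return [...names].sort();
}

// Spreadsheet-compatible output: a header row of candidate names, then one
// comma-separated row of scores per voter (each ballot repeated `count` times).
function toCsv(ballots) {
  const candidates = getCandidateList(ballots);
  const rows = [candidates.join(',')];
  for (const ballot of ballots) {
    const row = candidates.map(c => ballot.scores[c] ?? 0).join(',');
    for (let i = 0; i < ballot.count; i++) rows.push(row);
  }
  return rows.join('\n');
}
```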

The format I indicated to you is the classic format of .csv files for representing tables or matrices (easily imported into spreadsheets but also into other applications), specifically:

  • the first row contains the names of the columns
  • the first column contains the names of the rows (if they have names)
  • all values use a separator, which in my example is a “comma” but can also be a “space” or a “tab”.
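
Reading that format back into JavaScript could look like the sketch below (a hypothetical helper, not code from any of the pens; it accepts comma, space, or tab separators and ignores row labels in the first column):

```js
// Sketch of parsing CSV-style ballots: first row is candidate names,
// each later row is one ballot's scores.
function parseCsvBallots(text) {
  const lines = text.trim().split('\n').map(l => l.trim()).filter(Boolean);
  const sep = /[,\t ]+/;
  const candidates = lines[0].split(sep);
  return lines.slice(1).map(line => {
    const values = line.split(sep).map(Number);
    const scores = {};
    candidates.forEach((name, i) => { scores[name] = values[i]; });
    return { count: 1, scores };
  });
}

// parseCsvBallots("A,B,C\n5,4,0\n5,4,0")
// => [ { count: 1, scores: { A: 5, B: 4, C: 0 } },
//      { count: 1, scores: { A: 5, B: 4, C: 0 } } ]
```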

Do you know any JavaScript?

I did this (note that it’s completely dynamic; in fact, it’s always on the “home” page).

In your code, I don’t understand the meaning of “blurred”.
Given a vote like this: [10,8,6,4,2,0]
examples of blurred would be these in my opinion:
[10,9,5,3,1,0]
[10,8,6,5,4,0]
[10,9,3,2,1,0]
that is, the values change but the order of the candidates remains unchanged.
With your blurring, instead, this can also happen:
[8,10,7,1,2,0]
which makes no sense to me.

Right, I know about CSV but don’t personally work with spreadsheets much. I added output in that format. Let me know if you need help using it, if you want to do something with spreadsheets or whatever. My main thing is to make stuff that can be easily pasted to and from the forum, but I’m glad to support anyone doing spreadsheet stuff.

The idea of blurring is based on mixing more than one vote, sort of like when you blur an image: you take a bunch of pixels and average them. So you don’t blur an individual ballot, you blur a few of them. I guess I could call it blending.

So say you start with this set of 100 ballots:

Red 51: A[5] B[4] C[0] D[0]
Blue 49: A[0] B[4] C[0] D[5]

My blurring algorithm would select some ballots from that set randomly. I currently have it set to pick between 2 and 6 ballots. Then it averages their scores. Then it scales them and rounds them so there is always at least one zero and at least one five.

Here’s an example of 4 blurred ballots:

A[0] B[4] C[0] D[5]
A[4] B[5] C[0] D[2]
A[1] B[5] C[0] D[5]
A[4] B[5] C[0] D[2]

Notice how all scores for C are zero, since all the input ballots have C at zero. Some of the B’s are 5 because those ballots got “scaled up”: after averaging, no score on the ballot was above 4, so the whole ballot was rescaled so its top score becomes 5.
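
In code terms, the blurring step is roughly the following. This is a sketch of the algorithm as I described it, not the exact source of the pen, and the ballot shape it assumes ({ scores: { A: 5, ... } }, one object per voter, every candidate scored) is just for illustration.

```js
// Sketch of producing one "blurred" ballot from a set of ballots.
function blurOne(ballots, minPick = 2, maxPick = 6, maxScore = 5) {
  // Pick between minPick and maxPick ballots at random (with replacement).
  const howMany = minPick + Math.floor(Math.random() * (maxPick - minPick + 1));
  const picked = [];
  for (let i = 0; i < howMany; i++) {
    picked.push(ballots[Math.floor(Math.random() * ballots.length)]);
  }
  // Average the picked scores per candidate.
  const candidates = Object.keys(picked[0].scores);
  const avg = {};
  for (const c of candidates) {
    avg[c] = picked.reduce((sum, b) => sum + b.scores[c], 0) / picked.length;
  }
  // Rescale so there is at least one 0 and at least one maxScore, then round.
  const lo = Math.min(...Object.values(avg));
  const hi = Math.max(...Object.values(avg));
  const scores = {};
  for (const c of candidates) {
    scores[c] = hi > lo ? Math.round(((avg[c] - lo) / (hi - lo)) * maxScore) : 0;
  }
  return { scores };
}
```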

Okay, but this process of yours changes the initial example.
I could tell you:
55%: A[5] B[4] C[1]
45%: A[1] B[4] C[5]
where the values indicated are the “averages of the votes, separated into 2 groups”; that is, for example, I group these 3 votes:
[5,5,0]
[5,3,0]
[5,4,3]
into an average ballot like this:
[5,4,1]
and then adjust the % based on how many votes I have grouped.

The example in its initial form serves to show an extreme case; blurring it only serves to make it less extreme (it changes the starting hypothesis).

P.S.
Note that 55%: A[5] B[4] C[1] would more realistically be:
55%: A[4.8] B[4.1] C[1.3], but the meaning of the example doesn’t change.

Well that’s kind of the point.

Look at it this way. In regular old majority/plurality voting, you can have ties. If you’ve got 4 people voting on something, ties are quite likely and are potentially a significant problem.

But the more people voting, the less likely you’ll have a tie. So we don’t worry too much about them in government elections beyond having some means of breaking a tie. It’s not a big problem because when thousands of people are voting, ties are extremely unlikely.

When I see example ballots like the one you posted, I want to know, is this a likely thing? Or is this just a weird chance thing roughly equivalent to a tie, where the likelihood of it happening goes down the more voters there are? In a different thread @Keith_Edmonds said:

In real elections you will rarely fall on these knife-edge unstable points. If an example is broken under an infinitesimal perturbation I do not think it is an issue.

The blurring stuff gives an interactive but automated way of exploring this question. It allows you to quickly experiment with perturbing the ballots to see what degree of perturbation makes the problem tend to go away. And I consider it realistic because all the new ballots are just mixes of existing ballots.

I don’t see how that is more realistic. (Anyway, aren’t we dealing with STAR ballots that only accept whole numbers of stars?)

Regardless, it’s not perturbing the thing that needs perturbing. The example had B being extremely popular across a dramatically polarized electorate, yet not a single person liked B the best.

Look at it this way. In regular old majority / plurality voting, you can have ties.

Ties are a different problem.
Given 4 voters, if you use AV (range [0,1]) you will easily have ties; if you use range [0,5] you will have fewer ties; if you use range [0,100000] and the voters are accurate, the ties will almost disappear, even if there are only 4 or even 2 voters.

Or is this just a weird chance thing roughly equivalent to a tie, where the likelihood of it happening goes down the more voters there are?

The probability of this happening decreases as the number of voters increases if the voters have “random” or “almost random” interests, but in reality it seems to me that groups of people with similar interests form easily, with consequent extreme contexts. The reason I don’t like Yee diagrams is that the voters are distributed too uniformly (in “one group”).

More generally, the criticism made of STAR is that candidates who do not have the highest sum (greatest utility) can win. The example shown above is an extreme case in which the difference between the highest sum and the sum of the candidate who then wins is very large.
The problem described occurs every time STAR returns a different result from SV, and that is not an unrealistic thing (at most it’s unrealistic only in the very extreme case shown).

You could say “the STAR votes are more honest than the SV votes”, and I agree, but this makes it even more foolish when the candidate who doesn’t have the highest sum wins.

Anyway, aren’t we dealing with STAR ballots that only accept whole numbers of stars?

I think you didn’t understand the grouping; assume that you have these exact blurred votes (A,B,C):
[5,4,0]
[5,5,1]
[5,3,3]
[5,4,0]
[0,3,5]
[2,5,5]
I could group these votes into 2 groups like this:
4: [5,4,1]
2: [1,4,5]
and using the % it becomes:
66% [5,4,1]
33% [1,4,5]
The grouping can also generate votes with decimal values like 4.3.
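
The averaging part is easy to express in code. The sketch below is a hypothetical helper (not taken from the pens) that assumes the ballots have already been assigned to groups somehow and only computes the averages and the percentages:

```js
// Summarize already-grouped ballots as "percentage: average ballot".
// Ballots are arrays of scores in a fixed candidate order, e.g. [5,4,0].
function summarizeGroups(groups) {
  const total = groups.reduce((n, g) => n + g.length, 0);
  return groups.map(group => {
    const avg = group[0].map((_, i) =>
      group.reduce((sum, ballot) => sum + ballot[i], 0) / group.length
    );
    const percent = Math.round((group.length / total) * 100);
    return { percent, avg };
  });
}

// Using the six votes above, split into the two groups:
// summarizeGroups([
//   [[5,4,0], [5,5,1], [5,3,3], [5,4,0]],
//   [[0,3,5], [2,5,5]]
// ])
// => [ { percent: 67, avg: [5, 4, 1] },
//      { percent: 33, avg: [1, 4, 5] } ]
```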

You say you will tend to have fewer ties with range [0,5], and yet you have an example with range [0,5] where everyone simply rates things in exact lockstep with one another.

The blurring simply perturbs things so it uses a bit of that range.

Ties are different, sure, but they are also similar in that the chance of things like this happening gets rarer as you get more voters. In this case it is because you expect more voters to have more variance in their voting, especially when they have a whole range to choose from.

Not like the example you are using. It’s one thing to be polarized, but entirely another when you have a candidate that is universally seen as a 4 by both groups. Regardless, I’m sorry you don’t see the blurring tool as useful or meaningful, but in my view it is very good for calling attention to things that are highly sensitive to the tiniest perturbations.

I’m sorry you don’t see the blurring tool as useful or meaningful

To clarify: saying that a case is extreme or rare does not eliminate it.
If we want to compare two “strong” voting methods, then we must also consider extreme cases.

  • STAR vs FPTP -> the extreme case is a minor problem when compared to the problems of FPTP.
  • STAR vs SV -> we must also compare the extreme cases because it is not trivial which of the two methods is better.

Just as I admit the monotonicity problem in DV, even if rare, STAR should admit its problems, even if rare.

That said, “denying” the example with blurring does not deny the problem that candidates who do not have the greatest utility can win in STAR.
Most of the honesty of the STAR votes, it seems to me, is lost when the candidate who does not have the highest sum wins, so you might as well use SV (which at least is simpler).

It is not denying the problem, it is putting it in perspective.

I would also say that the limitations of a forum like this tend to distort this sort of supposed problem. It is easier to post an example where you just say “49 ballots like this, and 51 like this” than it is to post 100 separate ballots that are more realistically distributed. Even if you do post all 100, it is hard for others to look at them, in text form, and make a mental model of them. (that is one of the reasons I am pushing for more use of tools like Codepen, various visualizers and simulators, etc)

But when you do that, you potentially highlight problems that don’t happen in more natural scenarios.

When you say “candidates who do not have the greatest utility can win in STAR” you are wording it in a very black and white way. You are telling us that something is possible, with no indication of how likely it is.

If you were designing a building, you could state the obvious and tell us “an earthquake can cause the building to collapse.” But you could also say something like “an earthquake large enough to cause the building to collapse is predicted to happen about once in 6000 years.”

The first is black and white, resulting in rather useless, but seemingly alarming, information. The second puts it in perspective and allows us to better judge whether it is acceptable.

You are telling us that something is possible, with no indication of how likely it is.

I say that if it were very unlikely, to the point of being negligible, then you might as well use SV.

Try this example in the Codepen with blur 50 (a high value, I think), and compare STAR and SV:
33%: A[5] B[4] C[0] D[0]
33%: A[0] B[5] C[0] D[1]
33%: A[1] B[0] C[5] D[0]

Well first I should acknowledge how cool it is that you’ve forked my Codepen and added DV to it. 🙂

That is a more interesting example. It takes more blurring to “correct” for the issue you are concerned about.

Notice that if you remove C and D (irrelevant alternatives), those in the first group would presumably now rank B as zero. Then score would pick A as well. (I don’t know about DV)

The reason STAR and Condorcet methods are like they are is to protect against irrelevant alternatives altering the outcome so much. Condorcet takes that to the extreme. Some would argue that party politics is highly related to the irrelevant alternative issue, in that strategic nominations are used to defend against spoilers, and that causes polarization.

Anyway, thanks for getting on board Codepen and I’m sure there is a ton more analysis that can be done using it.

It takes more blurring to “correct” for the issue you are concerned about

More than 50? OK… in the meantime, the other methods all work without problems.

Notice that if you remove C and D (irrelevant alternatives), those in the first group would presumably now rank B as zero. Then score would pick A as well.

By reducing the starting candidates to only 2, many problems are solved.
Anyway, thank you for pointing out that D wasn’t necessary in the example; 3 candidates are enough.

I don’t know about DV

If you don’t know it, in the Codepen remove candidates C and D and put B at 0, and you will see the result with DV (I have corrected the NaN in DV).

The reason STAR and Condorcet methods are like they are is to protect against irrelevant alternatives altering the outcome so much. Condorcet takes that to the extreme.

For this reason, rather than STAR, STAR-RRV is better (it also avoids the problem of clones, which is not trivial).

Anyway, thanks for getting on board Codepen and I’m sure there is a ton more analysis that can be done using it.

Thanks to you. I had been using Codepen for a while, but I never wanted to sign up, even though I should have.

No, I just meant more than in the other example.

There are other ways of analyzing some of these issues beyond blurring. Another visualizer, which I hope to get into a nice Codepen, iteratively alters the ballots after seeing who the front runners seem to be. Initially the voters vote more “honestly”, but then they alter their votes to be more strategic. In your examples, this would tend to get a better outcome under many methods.

On the one hand that would be considered a weakness of a method (it is my biggest complaint about Approval), but on the other hand it can indicate that the ballots were unrealistic in the first place. For instance, in a previous example:

Red 51%: A[5] B[4] C[0] D[0]
Blue 49%: A[0] B[4] C[0] D[5]

Imagine how, under STAR, the Blue voters would behave if they were able to predict that A and B would be the front runners. They would give B 5 stars, right? (So they wouldn’t be stuck with A.) That’s another indication of a problematic example: so many people (49%) were so wrong about how it would go that they undervalued a candidate they liked. All it would take is a very small number of people being savvier in their votes to change the outcome.
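
In code, that kind of one-step adjustment could look like the sketch below. To be clear, this is hypothetical, not something my simulator currently does for STAR; the rule it uses (max out the preferred front runner, give the other the minimum) is just one plausible way to express the behavior described above.

```js
// Hypothetical one-step strategic adjustment for a Score/STAR ballot.
// Given the two perceived front runners, the voter maxes out whichever
// front runner they already score higher and gives the other the minimum.
function adjustForFrontRunners(ballot, frontRunners, maxScore = 5) {
  const [x, y] = frontRunners;
  const scores = { ...ballot.scores };
  const preferred = scores[x] >= scores[y] ? x : y;
  const other = preferred === x ? y : x;
  scores[preferred] = maxScore;
  scores[other] = 0;
  return { ...ballot, scores };
}

// A Blue ballot A[0] B[4] C[0] D[5], with A and B as the front runners,
// becomes A[0] B[5] C[0] D[5].
```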

iteratively alters the ballots after seeing who seems to be front runners

Realistically, we start from honest votes to find honest front runners, and then tactical votes are applied (once).
Using so many iterations doesn’t seem realistic to me.
Also consider that in methods such as DV, to have an effective tactic you need to know not only the front runners but also how the voters have distributed their points in their votes (to understand how the elimination of the worst candidate works).
Average voters generally do not have this information, and knowing only the front runners in DV is of little use.
I would still be curious to iteratively see what happens, so keep me updated.

Red 51%: A[5] B[4] C[0]
Blue 49%: A[0] B[4] C[5]

If you start with honest votes (you have no other way of starting), the front runners appear as:

  • B in first position (very distant from the others)
  • A, C in second position, almost on par

In a context like this, rather than give 0 points to B and risk that A would lose against C, Red would vote exactly as in the honest vote (A[5] B[4]). Likewise for Blue.
Why risk making C win when I can play it safe with B?
Remember that knowing the front runners does not mean knowing the way the points are distributed in the votes (and it does not even mean knowing the exact % of support, so the Reds are not sure of being the majority).

If you use STAR instead of SV to find the front runners, then the winner would look like A (against B), and therefore the Red voters might decide to put B a little further behind so that A wins (but B is the one who should win).

My simulator that iterates has various techniques to try to make it as realistic as possible.

The reality is that it is really hard to simulate methods that involve strategy, such as Approval or Plurality. But yeah, the best you can do is run it with the voters voting “blindly” initially, then eliminate candidates and have them concentrate on the candidates that are still left. “Blindly” is probably a better word than “honestly.” For instance, in Approval they are just setting the approval threshold. I don’t consider that dishonest. (since I consider “approve” and “disapprove” to be relative terms anyway)

I randomly make some voters more savvy than others. The savviest voters will keep adjusting their votes until there are only two front runners; the less savvy will stop adjusting earlier. Although it currently gives each voter a savviness quotient that ranges from 0 to 100%, I should probably have a bit more control over that to allow easy experimenting.

I don’t currently try to do this for Score, STAR or IRV. I could, but it would be trickier to do. One interesting one is “For and Against”, a rarely discussed method that simply allows you to vote for one candidate and against one candidate. It meets the “equal” criterion in that for every vote, there is an opposite vote that would cancel it out. Interestingly, it is probably the simplest way to cancel out vote splitting. It actually works very well, comparably to Approval, except that it seems to need more iterations to stabilize. (expected, since each voter gives less data per vote, at least when there are more than 3 candidates)
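
To make the “For and Against” idea concrete, here is roughly what tallying it looks like. This sketch is just my description above turned into code, not the simulator’s actual implementation; each candidate ends up with (votes for) minus (votes against), and the highest net total wins.

```js
// Sketch of tallying "For and Against": each ballot names one candidate
// to vote for and one to vote against.
function tallyForAndAgainst(ballots) {
  const totals = {};
  for (const { forCandidate, againstCandidate } of ballots) {
    totals[forCandidate] = (totals[forCandidate] || 0) + 1;
    totals[againstCandidate] = (totals[againstCandidate] || 0) - 1;
  }
  // Winner is the candidate with the highest net total.
  return Object.entries(totals).sort((a, b) => b[1] - a[1])[0][0];
}

// tallyForAndAgainst([
//   { forCandidate: 'A', againstCandidate: 'C' },
//   { forCandidate: 'B', againstCandidate: 'A' },
//   { forCandidate: 'A', againstCandidate: 'B' },
// ]) // => 'A' (net: A = 1, B = 0, C = -1)
```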

Anyway, I’ll show it soon enough. It’s a lot more visual and interactive than the current Codepen stuff…

since I consider “approve” and “disapprove” to be relative terms

I don’t, but it’s another matter.

The savviest voters will keep adjusting their votes until there are only two front runners, the less savvy will stop adjusting earlier.

If I broadly know, e.g., that “A and B are front runners”, and I consider A slightly better than B, then I could maximize the points given to A and minimize those given to B, but I have no way of knowing what the results would be like if everyone used this tactic, so how do I iterate?
In my opinion, you take it for granted that the voters know (in addition to the front runners) how the other voters have distributed their points in their votes (which is practically impossible to know with sufficient accuracy in the real world).
I repeat that it would be interesting, but I think it’s unrealistic.

“Equal” criterion in that for every vote, there is an opposite vote that would cancel it out.

I’ll point out that DV does not meet this criterion, because DV wants voters to focus on supporting their favorite candidates rather than canceling out others.

For example: if you hate A and care little about the others, in SV you could have votes like this:
A[0] B[10] C[10] D[10] … (with B, C, D always maximally favored against A).
In DV, instead, a similar vote would divide the 100 points into many small parts from the start, so it would make more sense to choose, among the candidates opposed to A, those who seem better and give the points to them (therefore, a voter who only hates some candidates is encouraged to inform himself and find a candidate he likes at least a little).

I have sometimes tested SV in online surveys and came across contexts with “all 10s and one 0” (in practice, the opposite of bullet voting); all this because of the “equal” criterion.

You don’t “iterate”; that is what a voting simulator does, because it can’t follow the news or polls like a human can.

In American presidential elections, for example, people have a good idea who the front runners are, and most people avoid voting for a “spoiler.” That is voting strategically. If you are denying that a large number of people will vote strategically under methods such as Approval (or Plurality), well, I don’t know what to tell you. Much of the theory around voting concentrates on strategy; pretending this doesn’t exist and that everyone is voting blindly seems to miss the point of all this stuff.

This is all the iteration is trying to emulate… the real world situation where voters pay attention to who is likely to win or be a front runner. While you are right that it can get difficult with lots of candidates that all have a lot of support (I have described it as a “hall of mirrors”), still, you are going to have an idea that certain candidates don’t have a chance.

I would consider it an error for a voting method to “want” its voters to vote a particular way. Voters will vote in ways that they see as in their interest.

We have an upcoming election in the US where a LOT of people are thinking “anyone but him.” While we are unfortunately stuck with plurality voting (and most contenders were eliminated in primaries), to declare that voters must concentrate only on who they like, and not who they dislike, is both unrealistic and seems to miss the point of all this stuff we are doing.