The use of cardinal methods in single member elections is fairly lucid, despite a fair diversity of detailed approaches. It is the attempt to come up with an intelligible multimember approach that seems fraught with conundrums and difficulties.

One important aspect of evaluating cardinal choice methods applied to electing multimember bodies is keeping track of subdivisions of the net set of ballots supporting a given candidate or party. When we move to cardinal methods, away from a single choice for a single candidate or party, we take on the task of balancing the significance of moderate levels of support for many candidates against strong support for few or just one, cast by so-called "bullet voters." So far, most of the attempts at evaluating the proportionality of cardinal methods that I see here devolve to pretending everyone bullet votes under the system and then checking whether the outcomes match a standard proportionality method in a single-choice system. But this gives us little insight into the prospects or meaning of votes cast as cardinal systems intend, for many different choices. When we take on that possibility, we have to figure out how to fairly distribute the total seats to be won among the parties, when some of their support comes from bullet voters and the rest is more diffuse. This generally involves identifying each possible combination of Approval votes or scores a voter might choose, noting how many ballots actually cast tally in each of these numerous combinations, and then applying some rule iteratively: first choose the first winner, then discount in an appropriate manner the various blocs supporting that party. Do we divide them all by a common divisor, as in Jefferson's or Webster's method? Or do we, say, multiply the ballot tally for each combination by the number of seats to be won divided by the total number of ballots cast, then subtract one seat's worth somehow from the total set of all supporting combinations, and by what rule? Proportionally spread across the entire set?
Or do we attack the bullet votes first, moving on to subtract from the combinations where the winning party is combined with support for another only once those are exhausted, and with those gone move on to any where three parties or candidates are given some positive score? When the choices are multilevel score choices, how shall we proceed in discounting: subtract first the ballots with the strongest scores for this candidate or party, discarding as well all their choices for other candidates, whether strongly or weakly rated? Note that if we have sorted each ballot into a properly pigeonholed exact combination set, the ballots form a homogeneous set within that bin, so it at least makes sense simply to strike them wholesale or, if we have reached the edge of the quota to be subtracted, cut them down to a remnant portion; we don't need to differentiate among them. But should we totally strike the alternate rated candidates, or leave those ratings untouched and thereby transfer any trimmed ballots to other pigeonholes, ones whose reduced combination no longer includes the party being deducted here? If we don't discount those choices somehow, a voter can multiply their power simply by rating more candidates, and perhaps with cardinal ballots this is as it should be. But should we apply a different discount than simply striking them all, throwing away the first winner's voters' support for others? Or what?
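To make one of these alternatives concrete, here is a minimal Python sketch of a single winner-and-discount round over the combination bins. It assumes, purely for illustration, one particular rule among those mooted above: deduct a Hare quota of ballots from the winner's support, spread proportionally across every bin that approves the winner. The function name `one_round` and the choice of quota are my own hypothetical conventions, not a settled method.

```python
def one_round(bins, seats, total_ballots, candidates):
    """One iteration of a proportional-discount rule (an assumption, not
    a standard).  `bins` maps a frozenset of approved candidates to a
    (possibly fractional) ballot count.  Returns the round's winner and
    the reduced bins."""
    # Tally the current (possibly already discounted) support per candidate.
    tally = {c: 0.0 for c in candidates}
    for combo, count in bins.items():
        for c in combo:
            tally[c] += count
    winner = max(tally, key=tally.get)

    quota = total_ballots / seats           # one seat's worth of ballots
    factor = max(0.0, 1 - quota / tally[winner])
    # Proportional spread: every bin approving the winner shrinks by the
    # same factor; bins not approving the winner are untouched.
    return winner, {combo: (count * factor if winner in combo else count)
                    for combo, count in bins.items()}
```

For example, with 60 bullet votes for A, 20 ballots approving both A and B, and 20 bullet votes for B, in a 2-seat race of 100 ballots, A wins the round and every A-approving bin is scaled by 1 − 50/80 = 0.375, leaving the B-only bin intact.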

Now then, if we limit our cardinal scoring system to binary alternatives, support or no support, we have Approval voting. One feature of AV is that we can readily set up a mathematically lucid and symmetrical framework on which to hang all these different combinations: the hypercube!

I cut my teeth on hypercube geometry during a summer internship working on a multiprocessor parallel computing strategy based on generalizing the hypercube layout. The concept of an N-cube might seem strange at first glance, but it meshes well with binary math concepts fairly widespread among people accustomed to modern data processing methods. With one dimension, one has a single binary line: at one end zero for that value, at the other, one. Going up to two dimensions we get a square. One dimension is that line, with another parallel to it, and connecting the two lines is a jump from zero to one of the second binary digit, allowing us to code 4 combinations: (0,0) being a null vote for neither of two candidates, (0,1) a vote for one and not the other, (1,0) for the other one and not the first, and (1,1) a saturation approval vote on the far corner from the zero-zero vertex. In a two-candidate approval vote, then, we have some voters n(0,0) who chose to leave their ballots blank, others n(0,1) and n(1,0) who chose one and not the other, and n(1,1) who chose both. Drawing them as a square on a piece of paper, the choice to vote for one is a step to the right on the conventional X axis, and a choice to vote for the other is a step up on the Y axis. Off on the fourth corner are those voters who took both steps and sit diagonally catty-corner to the null voters. Clearly, then, if we add a third candidate, we can take it up a third dimension on the Z axis and place there a square that mirrors the lower square in the other two dimensions, but with a third digit added that is 1 instead of zero. (Note I am adding new binary places from right to left, as is conventional with digital number systems; thus we have relabeled the original no-third-candidate square as n(0,0,0), n(0,0,1), n(0,1,0) and n(0,1,1).)
I can in fact drop the commas, since each binary digit stands on its own, and write n(000), n(001), n(010) and n(011); above this square is another numbered n(100), n(101), n(110) and n(111). I might even get away with dropping the parentheses too, reading n101 with the binary digits as a subscript, clarifying that we have a list of different n from n000 to n111, that is to say, in conventional decimal numerals, n0, n1, n2, n3…n7. Geometrically we have a cube formed by the leading-digit-1 square mirroring, in the other two dimensions, the leading-digit-0 square below it; clearly vertex number 6 is above vertex number 2, and each top index is the corresponding bottom index plus 4. And we have a vertical slab with indices 0, 1, 4, 5 facing another with 2, 3, 6, 7, and another pair of slabs, 0, 2, 4, 6 facing 1, 3, 5, 7.
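These index relations are easy to verify in a few lines of Python. The helper `index_of` is my own illustrative name; it just packs the three per-candidate bits, Z most significant, into a single vertex index:

```python
def index_of(votes):
    """Pack (z, y, x) 0/1 approval flags into a 3-cube vertex index,
    with Z as the most significant binary place."""
    z, y, x = votes
    return (z << 2) | (y << 1) | x

# Vertex 6 sits directly above vertex 2: same X and Y bits, Z flipped on,
# so the index differs by exactly 4.
assert index_of((1, 1, 0)) == 6
assert index_of((0, 1, 0)) == 2

# One pair of facing slabs: the vertices whose X bit is 0 versus 1.
slab_x0 = [i for i in range(8) if not i & 1]   # [0, 2, 4, 6]
slab_x1 = [i for i in range(8) if i & 1]       # [1, 3, 5, 7]
```

The other slab pairs fall out the same way by testing the Y bit (`i & 2`) or the Z bit (`i & 4`).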

And we can do this trick: balance the 3-cube on its zero vertex with the 7 vertex directly above. Now we have vertices 1, 2, 4 forming a triangle at some height above the plane the zero vertex is in, and antisymmetrically the vertices 3, 5, 6 above those three, in a plane between that one and the plane the "omega" vertex 7 is in. In binary notation we see the first level has one 1 digit and two zeros, the second has two 1 digits and one remaining zero, and the apex at vertex 7 has all three digits flipped to one. So, ignoring the zero level of blank ballots, we have bullet votes on level 1, anti-bullet votes (all but one choice) on level 2, and all choices saturated at the single peak vertex. With each vertex we associate another integer n, indexed the same way, measuring how many votes were cast with that combination.
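The levels of the balanced cube are just the vertices grouped by their binary digit sum, which a short Python sketch can confirm:

```python
# Group the 3-cube's vertices by how many approvals each combination
# contains (the count of 1 digits in the index).
levels = {}
for index in range(8):
    levels.setdefault(bin(index).count("1"), []).append(index)

assert levels[0] == [1 - 1]       # the blank ballots at vertex 0
assert levels[1] == [1, 2, 4]     # bullet votes
assert levels[2] == [3, 5, 6]     # anti-bullet votes, all but one choice
assert levels[3] == [7]           # the saturated "omega" vertex
```

The same grouping works unchanged for any N, since the digit sum of an N-bit index always runs from 0 to N.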

We can, though it might not be obvious, keep going and once again double the vertices for a 16-vertex tesseract in 4 dimensions, keeping track of the vertices by writing n0000, n0001, n0010, n0011, and so on all the way up to 15, or 1111. Now we can keep track of an election where the voters can cast approval votes for 4 candidates. We can keep going and clone all the vertices of that tesseract for a 5-cube of 32 vertices numbered 0 to 31 with 5 binary digits, and in fact for any number of candidates whatsoever it is meaningful to have an N-cube with a little-n count parked on each indexed vertex.
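Tallying an N-candidate approval election into its 2^N vertex counts is a one-pass operation. A minimal sketch, where `tally_ballots` is my own hypothetical name and each ballot is given as the set of approved candidate positions:

```python
def tally_ballots(ballots, n_candidates):
    """Count ballots into 2**N bins, one per approval combination.
    Each ballot is an iterable of approved candidate positions (0-based),
    encoded as set bits of the bin index."""
    bins = [0] * (2 ** n_candidates)
    for approved in ballots:
        index = 0
        for pos in approved:
            index |= 1 << pos
        bins[index] += 1
    return bins

# Four ballots in a 4-candidate (tesseract) election: a bullet vote for
# candidate 0, two ballots approving 0 and 1, and a bullet vote for 3.
counts = tally_ballots([{0}, {0, 1}, {0, 1}, {3}], 4)
assert counts[0b0001] == 1 and counts[0b0011] == 2 and counts[0b1000] == 1
```

The bins list is exactly the n0…n(2^N − 1) labeling above, in conventional index order.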

In my hypercube computer development internship I learned a general way to draw N-cubes on a 2-dimensional piece of paper and to visualize the cloning process at each added dimension: map the old (N-1)-cube of, say, 5 dimensions with 32 vertices onto half a regular polygon, mirror it with another 32-vertex copy whose sixth dimension is recorded by just adding 2^5, or 32, to each of the old 5-cube's indices, and draw lines to each mirrored vertex differing by the added 32, giving a complete polygon of 64 vertices. It was helpful to use different colored lines to keep track, because the morphing process that squeezes a given polygon into its half of the next dimension would, in the 2-D projection, turn some of the lines through an angle, and the same dimension involved opposite, mirrored angles! This mapping also illustrated a feature of the hypercube relevant to the computational strategies they meant to use: a subset of the lines connecting vertices in any N-cube forms a ring taking one through all the vertices, though not in conventional index-number order! But if you think about it, an ordinary cube, a die for instance, has a bent path around it that visits each vertex exactly once before returning to the starting point. Many such paths, in fact, because thanks to hypercube symmetry we don't need to make the zero vertex the base.
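One standard construction of such a ring is the binary-reflected Gray code: listing the indices in Gray-code order visits every vertex exactly once, with each successive pair (including the wrap back to the start) differing in a single bit, i.e. joined by one cube edge. A short sketch:

```python
def gray_ring(n):
    """Return the 2**n vertex indices in binary-reflected Gray-code
    order, a closed one-edge-per-step tour of the n-cube."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

ring = gray_ring(3)
assert sorted(ring) == list(range(8))          # visits every vertex once
for a, b in zip(ring, ring[1:] + ring[:1]):
    assert bin(a ^ b).count("1") == 1          # each step flips one bit
```

For the ordinary 3-cube this gives the order 0, 1, 3, 2, 6, 7, 5, 4, visibly not conventional index order.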

Thus in terms of connectivity, hypercubes are real objects, projected into concrete 3-D space via cables connecting processors in a certain pattern. And if we draw one as a polygon, so that a vote involving 10 candidates produces a polygon with cross connections across the middle of the diagram in the correct pattern, we have a polygon with 1024 vertices representing every possible approval vote combination, and we can just write down how many voters cast ballots for each and use this as our database.

In fact, knowing the index is an integer between 0 and 2^N - 1, we can just write a linear list of all those numbers in conventional order, write in the number of ballots cast for each binary combination that index corresponds to, and handle it that way, provided we have set the candidates who could be approved or not in some conventional order, each one arbitrarily given its own dimension. We can then reconstruct the various hypercube relationships from these number pairs, index and count.
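Reconstructing the cube's geometry from the flat list really does take nothing more than bit arithmetic: each vertex's N neighbours are the indices reachable by flipping a single bit, i.e. XOR with one power of two. A sketch, with `neighbours` as my own illustrative name:

```python
def neighbours(index, n_candidates):
    """The N vertex indices one cube edge away from `index`: flip each
    candidate's bit in turn via XOR."""
    return [index ^ (1 << d) for d in range(n_candidates)]

# Vertex 2 of the 3-cube (binary 010) connects to 011, 000 and 110,
# i.e. indices 3, 0 and 6 -- exactly its three cube edges.
assert neighbours(0b010, 3) == [0b011, 0b000, 0b110]
```

So the linear list of (index, count) pairs loses nothing; the hypercube structure is implicit in the binary indices.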

It is not at all clear to me that we can generalize this for score votes. Say we have 2 levels of approval above zero; shall we record these, for a 2-candidate race, as either 1 or 0.5 (written 0.1 in binary, of course!) on the candidate's dimension? Then the full combination of choices grows from 4 possibilities (including null and saturation at the far 11 vertex, of course) to 9. A 3-candidate ballot with 3 total levels, counting zero, for each candidate gives us 27, which is what would happen if we cloned the 2-D version twice, once for adding 1/2 in the new dimension and once for adding 1. The indices get a lot more complicated, and it is not clear to me what other neat tricks we might do. Clearly, if we allow 10 levels of approval score, from 0 to 9, we wind up with 100 points within the four vertices spanned by full approval for either candidate or zero. Generally it is (levels of choice, counting zero)^(number of candidates). Each defined point is unique and represents a unique possible combination of scores for all N candidates, including of course leaving some blank and thus marking them zero.
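The indexing at least does generalize mechanically: a score ballot is a digit string in base L (the number of levels counting zero), so the binary vertex index becomes a mixed-radix, here fixed-radix, index into L^N bins. A sketch, with `score_index` as a hypothetical name:

```python
def score_index(scores, levels):
    """Map a tuple of per-candidate scores (0..levels-1, most significant
    candidate first) to a unique index in range(levels ** len(scores)).
    With levels == 2 this reduces to the binary vertex index."""
    index = 0
    for s in scores:
        index = index * levels + s
    return index

assert 3 ** 3 == 27              # three candidates, three levels each
assert 10 ** 2 == 100            # two candidates, scores 0 through 9
assert score_index((1, 0, 2), levels=3) == 11   # 1*9 + 0*3 + 2
```

What is lost is the clean vertex-and-edge geometry; the points become a lattice inside the cube rather than its corners, which is presumably why the "neat tricks" are harder to see.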

I am not nearly enough of a mathematician to see clearly how to use such an N-dimensional matrix, with or without scoring levels beyond binary yes-or-no, as an object equivalent to a simple list of single-choice votes for various parties or candidates. I can say a few things:

For instance, suppose we decide to cut our electorate into X bailiwicks, but assume the framework of parties is the same across each: if there is some unique local party spanning just a few districts, we reserve a dimension for it in the records of all districts and simply leave it zero elsewhere, so that every party has the same dimension in every district. Then, after recording the approval vote in a given district, we can add up the choice totals for each candidate (by the expedient of totalling all the ballots in the half-set of vertices where that party's dimension is at the 1 level; in the real world we just add up the digits in each separate binary place, of course, ignoring which combination each vote was fully categorized into) and transmit the string of integers, of any size, delimited to define their digital place in order, to be collated by simply adding up each district's vote for each party. The result is a grand global N-cube-organized matrix of the systemwide partisan vote. The matrices add linearly, in other words, and I suppose that means the filled-in score-vote matrices would add the same way.
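Both operations are one-liners over the flat bin lists. A minimal sketch, assuming the same candidate-to-dimension ordering in every district (function names are my own):

```python
def candidate_total(bins, dimension):
    """Sum the ballots in the half-set of vertices where the given
    candidate's bit is 1, i.e. that candidate's total approvals."""
    return sum(count for index, count in enumerate(bins)
               if index & (1 << dimension))

def combine_districts(*district_bins):
    """Collate districts by plain elementwise addition of their bin
    lists -- the matrices add linearly."""
    return [sum(counts) for counts in zip(*district_bins)]

d1 = [0, 5, 3, 2, 0, 0, 0, 1]      # two 3-candidate district tallies
d2 = [1, 0, 4, 0, 2, 0, 0, 3]
national = combine_districts(d1, d2)
# Candidate 0's national total: bins 1, 3, 5, 7 of the combined list.
assert candidate_total(national, 0) == 11
```

Note the same total comes out whether we sum each district first and add, or combine first and sum, which is the linearity claim in miniature.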

Now I think that is as far as my highly limited math can take me without venturing into terra incognita, but I hope someone here can take this N-dimensional symmetrical (except for the lopsidedness the various ballot counts associated with the vertices give it!) shiny object and work math wizardry with it. Some well-known procedure or theorem, one I might know if I had ever got to tensor math, might tell us exactly how to get something meaningfully proportional out of approval votes this way, applying some generalization of the single-choice procedure of multiplying the votes cast for a party by the total seats and dividing by the total ballots cast, to get raw shares to which some integer proportionality mapping is then applied for a proportional legislature. We want some procedure, or at least I do, other than iterating seat by seat to crank out seat shares one by one.
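For reference, here is the single-choice baseline that any such generalization would have to reduce to: Jefferson's method (equivalently D'Hondt), which does iterate seat by seat, handing each seat to the party with the largest votes/(seats won + 1) quotient. A minimal sketch:

```python
def jefferson(votes, seats):
    """Jefferson/D'Hondt divisor apportionment for single-choice totals:
    award seats one at a time to the party with the highest quotient
    votes / (seats_already_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# 100 single-choice ballots, 5 seats: quotients 50, 30, 25, 20, 16.7...
assert jefferson({"A": 50, "B": 30, "C": 20}, seats=5) == {"A": 3, "B": 1, "C": 1}
```

The open question in the post is precisely whether some closed-form operation on the full N-cube of combination counts could replace this seat-by-seat crank while still honoring the non-bullet ballots.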