The Missing Link in Gerrymandering Jurisprudence

Edward B. Foley

- Moritz College of Law
Charles W. Ebersold and Florence Whitcomb Ebersold Chair in Constitutional Law; Director, Election Law @ Moritz
Posted on September 12, 2017, 4:35 pm

The key advance is the ability to identify not merely whether a redistricting map is biased against a political party, but whether it is an extreme outlier in the degree of its partisan bias relative to other maps that might have been drawn to achieve the mapmaker’s permissible redistricting objectives.


The difficulty up to now, in framing a constitutional challenge to partisan gerrymandering, has been one of linking together two necessary components of a complete claim.  One component is the metric for identifying when a redistricting map deviates from impartial fairness to the competing political parties.  The other component is the standard for determining when a partisan motive for drawing the particular district lines runs afoul of a federal constitutional requirement.


As a policy matter, it is easy to establish a metric for identifying redistricting maps that deviate from neutrality between the parties.  Indeed, as political scientists and statisticians frequently explain, as they do in multiple amicus briefs submitted to the Supreme Court in Gill v. Whitford, the pending case from Wisconsin, there is no shortage of such metrics.  One such metric is the so-called “mean-median difference” (or, as some prefer, “average-median difference”). This metric measures a party’s share of the vote in each district in the map and, listing the districts in order of the party’s vote share from largest to smallest, then compares the party’s vote in the median district—the district that is the midpoint of the list—with the party’s share of the vote across the entire map (which is the same as the party’s share of the vote in an “average” district, controlling for different turnout rates across districts).  To the extent that the party’s share in the median district is smaller than the party’s overall (or average) share, the map is structurally biased against the party.


To consider an extremely simple example: suppose there are five districts, each with 20 voters, for a total of 100 voters.  Suppose these 100 voters split 60%-40% between Party A and Party B, but the district-specific splits are:

District    A     B
1          20     0
2          20     0
3           8    12
4           8    12
5           4    16


District 3 is the median district, and Party A’s share of the vote in that district is only 40% (8 of 20 votes cast), whereas Party A’s overall vote share is 60% (60 out of 100, or an average of 15 votes across the five 20-voter districts).  This difference between 40% and 60% measures the map’s structural bias against Party A.  Thus, measuring a map’s deviation from neutrality is just straightforward arithmetic—as Princeton mathematician Sam Wang is eager to emphasize.
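The arithmetic really is this simple, and it can be confirmed in a few lines of code.  The following is a minimal illustrative sketch (not drawn from any of the briefs) that computes the mean-median difference for the five-district example above:

```python
# Mean-median difference for the five-district example above.
# Each tuple is (Party A votes, Party B votes) in a 20-voter district.
districts = [(20, 0), (20, 0), (8, 12), (8, 12), (4, 16)]

# Party A's vote share in each district.
shares = [a / (a + b) for a, b in districts]

# Median district share: the middle value of the sorted shares
# (five districts, so the third-largest share).
median_share = sorted(shares)[len(shares) // 2]

# Overall (mean) share: Party A's votes across the whole map.
total_a = sum(a for a, _ in districts)
total_votes = sum(a + b for a, b in districts)
mean_share = total_a / total_votes

print(median_share)                       # 0.4  (40% in the median district)
print(mean_share)                         # 0.6  (60% overall)
print(round(mean_share - median_share, 3))  # 0.2  (20-point bias against A)
```

A positive gap means the map is structurally biased against Party A: the party’s median-district share falls below its overall share.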


But what does this arithmetical observation have to do with federal constitutional law?  It is easy to argue, as a policy matter, that a redistricting map is undesirable insofar as it exhibits this kind of bias against either of the two major political parties that compete head-to-head in legislative elections in order to win governing control in the legislature.  A fair map would harbor no such bias (at least not long-term, in election after election).  But the federal Constitution contains no explicit requirement that legislative maps be neutral with respect to the competing political parties.  Indeed, the most important electoral feature of the federal Constitution—the Electoral College system for presidential elections—egregiously deviates from any such conception of partisan neutrality, as the result in 2016 most recently demonstrates.  (Hillary Clinton’s share of votes in the median state—and states are districts for Electoral College purposes—was far below her vote share overall or in an “average” state.)


Thus, measuring a map’s partisan bias is easy.  The difficulty is linking this measurement to constitutional law.


We can come at the linkage problem from the other direction.  There is no doubt that an extreme partisan gerrymander violates the Constitution.  As Justice Kennedy vividly put it in his Vieth concurrence: “If a State passed an enactment that declared ‘All future apportionment shall be drawn so as most to burden Party X’s rights to fair and effective representation, though still in accord with one-person, one-vote principles,’ we would surely conclude the Constitution had been violated.”  The problem, however, has been how to tell when a partisan gerrymander that is not so explicitly blatant contravenes constitutional law.  This problem is compounded by the Court’s previous pronouncements that some degree of partisanship in the drawing of district lines is constitutionally permissible.  When the mapmaker does not expressly announce a desire to go “too far” in a partisan direction, how is the judiciary to determine from the map itself whether it reflects an excessive degree of partisanship?


In short, the constitutional principle is clear: egregious partisan gerrymandering violates the First Amendment right of political parties to participate in politics free from government efforts to suppress that political participation.  The challenge is how to identify a partisan gerrymander that is egregious, rather than merely routine partisan tinkering with district lines.


The difficulty, again, is one of linkage.  Measuring partisan bias, independent from constitutional principle, is easy.  Articulating the constitutional principle, independent from measurement, is straightforward.  It is the marriage of principle and measurement that has proved elusive.


Until now.


As Justice Kennedy also anticipated, the increasing power of computer technology has enabled the development of new statistical techniques that can identify whether a redistricting map is an outlier compared to all possible maps that would achieve a mapmaker’s constitutionally permissible objectives, including compactness and respect for existing political subdivisions.  A computer can do this by drawing thousands, even millions, of alternative maps, all of which are constrained by the stipulated set of constitutionally permissible criteria, and then the computer can measure the degree of partisan bias for each of these alternative maps using the same voting data applicable to the actual map under consideration.  For example, the computer could calculate the mean-median difference for each of these alternative maps.  (In other words, the computer could measure for each possible map the extent to which a party’s vote share in the median district diverges from the party’s overall, or average, vote share.)


Crucially, the key metric is not the absolute value of mean-median difference for the actual map, or how much this difference deviates from the ideal of zero, the score of a perfectly neutral map.  Instead, the key metric is where the mean-median score of the actual map falls within the distribution of mean-median scores of all the alternative maps that the computer is able to draw.  If the score for the actual map falls outside the normal range of scores for all these maps—falls, in other words, along the tails of the distributional curve—then the actual map is an outlier in terms of the degree of its partisan bias. 
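The logic of this ensemble comparison can be sketched in code.  The toy simulation below is only a stand-in for the actual techniques described in the briefs (which draw maps constrained by real geography, typically via Markov-chain sampling): here, random shuffles of the example’s 100 voters into five 20-voter districts serve as the “alternative maps,” and the question is where the actual gerrymandered map falls in the resulting distribution of mean-median scores.

```python
import random

def mean_median_gap(districts):
    """Mean-median difference for Party A, given (A, B) vote pairs per district."""
    shares = sorted(a / (a + b) for a, b in districts)
    median_share = shares[len(shares) // 2]  # assumes an odd number of districts
    mean_share = sum(a for a, _ in districts) / sum(a + b for a, b in districts)
    return mean_share - median_share

# The example electorate: 100 voters splitting 60-40 between Party A and Party B.
voters = ["A"] * 60 + ["B"] * 40

def random_map(rng):
    """Shuffle the voters into five 20-voter districts -- a toy 'alternative map'."""
    shuffled = voters[:]
    rng.shuffle(shuffled)
    return [(shuffled[i:i + 20].count("A"), shuffled[i:i + 20].count("B"))
            for i in range(0, 100, 20)]

# Build the distribution of scores across many alternative maps.
rng = random.Random(0)
ensemble = [mean_median_gap(random_map(rng)) for _ in range(10_000)]

# Where does the actual (gerrymandered) map from the example fall?
actual = mean_median_gap([(20, 0), (20, 0), (8, 12), (8, 12), (4, 16)])
percentile = sum(score < actual for score in ensemble) / len(ensemble)
print(f"actual map is more biased against Party A than "
      f"{percentile:.1%} of the alternative maps")
```

Run on the five-district example, the actual map’s 20-point gap lands far out on the tail of the distribution: nearly every randomly drawn alternative is less biased, which is precisely what marks the map as an outlier.  Note that the test is relative to the ensemble, not to a score of zero.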


The distributional approach of this statistical technique, it is important to understand, does not judge—even indirectly—an actual map with respect to a standard of perfect neutrality.  In a given state, it might well be the case that the normal distribution of possible maps drawn by the computer does not center on maps with mean-median scores of zero.  Instead, geographic factors applicable to the particular state might cause the typical map drawn by the computer (in other words, the mode of the computer’s distribution of maps) to have a mean-median score disadvantageous to one political party.  This could occur, for example, if one party’s voters are geographically clustered in tight political subdivisions, while the opposing party’s voters are more advantageously dispersed throughout the state.  All the maps generated by the computer would reflect this natural geographical advantage of one political party.  Still, the process of generating these alternative maps would determine whether or not the actual map was an outlier even with respect to this natural geographical advantage, or instead fell within the normal range of partisan bias given this natural geographical advantage. 


Thus, this new computer-assisted statistical approach can be used to identify what the constitutional principle was looking for: an egregious partisan gerrymander.  Strictly defined, and precisely measured, an egregious partisan gerrymander is one that is identified as an outlier using this new computer-generated statistical technique.


Several amicus briefs in Gill invoke this new statistical technique as the method for enabling the Court to articulate a judicially manageable standard to identify unconstitutional gerrymanders.  One brief that discusses the technique in particular detail—and does so lucidly—is submitted on behalf of Eric Lander, the President of the Broad Institute of Harvard and MIT.  The ACLU’s brief, in turn, does an effective job linking the statistical technique to the First Amendment’s requirement that the government regulate political competition between parties without improperly giving one party an excessive competitive advantage.


For Justices on the Court who are historically minded in their overall constitutional jurisprudence, and who thus wish to ground the constitutional analysis of partisan gerrymandering on relevant historical considerations, the new computer-generated statistical technique also can be linked to a history-based approach.  How so? First, the relevant history demonstrates that the original Gerry-mander of 1812—along with all partisan manipulations of legislative maps that are similarly egregious—has been regularly and vigorously condemned as inconsistent with the fundamental principles of popular sovereignty established in the original Constitution and reaffirmed in the Fourteenth Amendment.  Indeed, throughout the nineteenth century, the very practitioners of these egregious partisan gerrymanders recognized that they were acting contrary to constitutional principles, but the pressure of partisan politics prevented them from adhering to the Constitution as they knew they should.  This point is made effectively in an amicus brief submitted by a group of distinguished historians, and it is also emphasized in my own recent scholarship.


Second, the unconstitutionality of the original Gerry-mander can generate a judicially manageable test for evaluating modern redistricting maps in two ways.  The first way, which I have explored in a contribution to a William & Mary Law Review redistricting symposium, is more direct.  It measures the degree to which the original Gerry-mander was a distortion of district lines, and requires a mapmaker to justify any new map that is equivalently or even more distorted.  The other way is more indirect.  It identifies the original Gerry-mander as the archetype of egregiously partisan districting and, in condemning the archetype itself as quintessentially unconstitutional, necessarily also condemns as unconstitutional the whole class of egregiously partisan gerrymanders of which the original Gerry-mander is the archetype.  The way to measure whether a redistricting map is egregiously partisan, apart from having districts as distorted as the original Gerry-mander, is to determine whether it is an outlier according to the new computer-generated statistical technique. 


Using the statistical technique in this way is consistent with what I have termed “particularistic,” rather than “universalistic” reasoning in constitutional cases.  (In my William & Mary contribution, I explain how particularistic reasoning lends itself to historically-oriented constitutional analysis, whereas universalistic reasoning lends itself to more philosophically-oriented approaches to constitutional interpretation.)  One of the best examples of particularistic reasoning in Supreme Court jurisprudence is the invocation of the Sedition Act of 1798 as the basis for holding that the First Amendment constrains a state’s use of its libel law to suppress criticism of government officials.  But this exercise of particularistic reasoning did not yield the conclusion that only state laws that are exactly congruent with the Sedition Act of 1798 are unconstitutional.  Instead, the Court appropriately identified the Sedition Act as the archetype of a larger class of laws comparably suppressive of political dissent and thus necessarily comparably unconstitutional.   Once the archetype was determined to be unconstitutional—because it had been deemed so “in the court of history”—that constitutional determination was an anchor, and it became necessary for the Court to craft a contemporary doctrine for which the archetypal determination served as a foundation but which treated the entire relevant class of politically suppressive laws in a coherent and principled way.


So too with respect to the archetype of the original Gerry-mander.  Its unconstitutionality is established in the “court of history,” but that determination simply generates the necessity of crafting the contemporary doctrine that renders unconstitutional all comparably egregious partisan gerrymanders.  The new computer-generated statistical technique can identify the outliers that form the class of egregiously partisan maps that are unconstitutional according to the principle derived from the archetype.


Thus, the new statistical technique can provide the missing link between principle and measurement that heretofore has been so elusive.  Whether grounded in historical analysis, by focusing on the archetype of the original Gerry-mander, or instead rooted in reasoning philosophically based on general First Amendment principles (as the ACLU brief does), it is possible to articulate the relevant constitutional principle as the prohibition of egregious partisan gerrymanders, not the purging of all partisanship from redistricting.  Once this principle is articulated, the new statistical technique can be employed to determine whether the map under review is an outlier relative to all possible maps that might be drawn to achieve the map’s constitutionally permissible redistricting goals.  If the map is indeed an outlier, and if the mapmaker cannot justify it as appropriate despite its outlier status, then the map should be condemned as inconsistent with the fundamental constitutional principle at stake. 


In this way, the missing link finally has been found.


Edward B. Foley is Director of the Election Law at Moritz program. His primary area of current research concerns the resolution of disputed elections. Having published several law journal articles on this topic, he is currently writing a book on the history of disputed elections in the United States. He is also serving as Reporter for the American Law Institute's new Election Law project. Professor Foley's "Free & Fair" is a collection of his writings that he has penned for Election Law at Moritz.


