In 2005, Tony Blair’s Labour Party won reelection in the United Kingdom. The result severely cut the overwhelming parliamentary majority that his “New Labour” brand had won in 1997 and 2001, but his party still retained a governing majority of 65 seats. That was larger than many majorities that both Labour and its main rival, the Conservative Party, have enjoyed since World War II. Yet there’s something odd about Blair’s 2005 victory: Labour received only 35% of the vote.
While this was a postwar low for a party winning a majority of seats, it actually wasn’t all that odd. In fact, no British party has received more than 50% of the vote during that period. Granted, Labour won more votes than any other party in 2005, lending it some “legitimacy” to govern. But given that the party won barely more than a third of the vote, can we call the system keeping it in power “democratic”?
There is no obvious answer to that question. And that’s because there is no identifiable thing called “democratic government.” It exists in some incorporeal form: there are boundaries outside of which a system clearly doesn’t qualify—think today’s China or Russia—but within them lie large, rambling, unclear mists of indeterminacy. “Democracy” is an existentially elusive substance. And so is the thing it’s often said to measure, the “Will of the People.” And democracy’s value, the reason why we should prefer it to its very different alternatives—absolute monarchy, military dictatorship, Venetian-style oligarchy, etc.—is misunderstood.
Yet most people, whether voters, journalists, academics, politicians, or judges, believe democracy and the “Will of the People” are identifiable things. Constitutional debates rage about what mechanisms best measure this “will,” like it’s a butterfly we’re trying to catch in a net. And outrage about both how we conduct our elections and how we make our laws emanates from this zoological view. Anything that deviates from the most scientific measurements fails the “democratic deficit” test.
This conception of democracy doesn’t just have implications for elections and legislatures but also for another institution: the courts. There’s a rising worry that when the courts don’t enforce “democratic values” their actions are illegitimate.
Some concerns about the fragility of democracy are absolutely valid. Stealing an election through unproven claims of fraud is anti-democratic. But to the extent those fraudsters nevertheless believe the “Will of the People” is a measurable butterfly, and not an elusive mist, they’re also wrong. Just as Tony Blair’s 2005 reelection turned out not to spell the fall of democracy, the dire prophecies of many wannabe Cassandras in today’s America probably won’t pan out either.
The idea that we can accurately measure the “Will of the People” pops up in a lot of areas, but this essay only scratches the surface of one of them: something lawyers call the “nondelegation doctrine.” It is notable because it’s a constitutional issue where both its proponents and opponents appeal to “democracy” and its ability to measure the people’s “will.” Thus, looking at this particular debate serves as a case study of how what I’ll call the “butterfly fallacy” warps how we view the United States Constitution and how judges interpret it. My hope in reporting on these contemporary debates and peeling back their assumptions is that we can better understand both why democratic government is vitally important and why we rightly place constitutional limits on it. If nothing else, I wish to convince you of one thing: Like the democratic nature of Tony Blair’s 2005 government, the “Will of the People” is fundamentally indeterminate. And perhaps I’ll further convince you that this means judges should more fully engage with the Constitution instead of worrying about what “the people” really think.
First, we’re going to sketch out the legal debate over this “nondelegation doctrine.” Then we’ll take a step back and examine what the latest social science research tells us about how democratic government relates to the “Will of the People.” Finally, we’ll put the two together and see what it means both for how the courts should treat the nondelegation doctrine, and more broadly how all of us should view democracy itself.
A long-brewing controversy in the Supreme Court and the legal academy is how much discretion Congress can give—delegate—to the President and the executive branch. The beginning of Article I of the Constitution states: “All legislative Powers herein granted shall be vested in a Congress of the United States.” The inference from this and other language is that there is a separation of powers between Congress and the President beyond just the various tasks explicitly given to each. In other words, the Constitution provides each with certain powers, but at least to some extent one branch can’t give its powers to the other. They’re “vested,” meaning you can’t just pick them up and hand them over, even if you retain the right to get those powers back later.
Just about everyone agrees there’s something to this idea. For example, if Congress passed the following law hardly anyone thinks it would be constitutional: “All legislative power is now in the hands of the President. He can do whatever he wants as though we had passed a law allowing whatever it is. Go ahead and tax, spend, regulate, anything. He doesn’t need our permission anymore.” But short of that extreme is where the controversy starts.
For being a “doctrine,” though, the nondelegation doctrine isn’t very doctrinal. The Supreme Court has only found acts of Congress to violate it twice, and both in 1935 at the height of the massive expansion of federal power in the New Deal. Since then, the Court has greenlit all kinds of vague delegations of broad powers to the executive branch. A light nondelegation doctrine, in short, allows the federal government to do a lot more, as instead of passing a lot of laws Congress can just leave that to staffers in administrative agencies. In recent years, however, various justices have said they would like to tighten the doctrine up, suggesting new standards to use beyond the current malleable one requiring only that Congress provide the executive branch an “intelligible principle.” And, unseen by most, state courts have used the doctrine to much greater effect under their own state constitutions—and, it should be said, without modern administrative government grinding to a halt.
Amid the signs of hope or doom (depending on your “side”) emanating from the Supreme Court and various state courts, law professors have waged a battle over whether the original meaning and design of the Constitution was to prevent the kinds of broad delegations we’re familiar with today. I’m not writing here on the merits of that dispute, a subject where scholars have marshaled all kinds of historical evidence and obscure pieces of parchment from the late eighteenth century (and even earlier). In full disclosure, the law firm I work for and I side with those who think the courts should strike down more laws and regulations under the nondelegation doctrine, although the nuances of that position remain open to the historical evidence. But what I’m interested in here are the reasons supporters and detractors of the doctrine give for why it’s a good or bad idea.
And that’s because both sides say “democracy” is on their side. With the butterfly fallacy fluttering in the background, supporters of the nondelegation doctrine have long clothed it in democratic garb. There are other pragmatic arguments in support of the doctrine (the one I subscribe to is that it guards against the concentration of power), but “democracy” and its ability to channel the “Will of the People” seems to come up the most. For example, in a 1980 case about Congress’s delegation of workplace safety standards, Justice William Rehnquist said that the purpose of the doctrine is to keep political decision-making in the hands of the branch “most responsive to the popular will.” Just earlier this year in a federal vaccine mandate case, Justice Neil Gorsuch stated that the nondelegation doctrine “ensures democratic accountability by preventing Congress from intentionally delegating its legislative powers to unelected officials.” And state courts make similar statements about it when applying their own versions. The Kentucky Supreme Court has emphasized it prevents legislators from escaping accountability by passing off decisions on “some executive branch bureaucrat.”
But forceful critics of the doctrine have stood up to say they are the real small-d democrats. In the past, some have argued that the President, by representing the entire nation and being separately elected, is more “democratic” than Congress. Thus, it isn’t an affront to democracy to allow agencies under his control (even if actually staffed by “unelected bureaucrats”) to make the law. If you believe in the butterfly fallacy, however, that’s a hard argument given the Electoral College and recent elections.
The argument seems to have more currency, though, at the state level. Since even before founding father Elbridge Gerry drew his famous “gerrymandered” map in Massachusetts, political actors have drawn district lines for political gain. Extreme concentrations of power in rural districts whose lines hadn’t changed in decades led in the 1960s to the Supreme Court requiring “one person, one vote” (or close to it) in Congressional and state elections. But simple equality in district populations has led to gamesmanship in all kinds of other ways. That is true with race, a delicate area where Congress has actually required its consideration in drawing lines to some degree, and the Court has enforced that mandate, but only so far amid its many dangers. And gamesmanship is even more prevalent when it comes to partisanship, which the Court recently said it will not police at all. Due to a number of factors, including partisan voters clustering themselves together and the sophistication of modern databases, the Elbridge Gerrys of today are—goes a common argument—increasingly creating “minority governments.”
Many have written about this phenomenon, but University of Wisconsin law professor Miriam Seifter has most adroitly paired it with the nondelegation doctrine. She examines what she calls “legislative minoritarianism”: states so gerrymandered that a party will have a majority in the legislature even when it doesn’t win the most votes. This might be equivalent to Tony Blair’s government winning a majority in Parliament with 35% of the vote even if, through quirks of geography, the Conservatives had won 37%. While she admits it’s hard to count exactly what is truly minoritarian (as there are seats with only one candidate and many potential voters don’t bother to vote in lopsided districts), she makes a strong case that some states fit in the “minoritarian” camp. In her state of Wisconsin, for example, Democratic candidates for the state assembly won 53% of the vote in 2018 but only 36% of the seats. Even when taking out uncontested districts (most of which were won by Democrats) the imbalance is striking. Further, she demonstrates this is something that happens to both parties. Between 1968 and 2016 there were “minoritarian” outcomes in at least one house of a state legislature well over one hundred times, the majority of which favored Democrats.
After establishing that this legislative minoritarianism seems to exist, she examines the nondelegation doctrine in state courts. And she finds its democracy arguments wanting. Most particularly, she reviews recent litigation in Michigan and Wisconsin where their supreme courts applied it and related doctrines to those states’ governors’ COVID-19 shutdown orders. Both states at the time had “minoritarian” legislatures. Yet in both states a majority of the court found the relevant orders unconstitutional or invalid.
Seifter points out that governors are unambiguously elected with a majority (or at least plurality) of the vote, making the governor, she argues, more “democratic” than the legislature. Any argument, she continues, grounded in “democracy” supporting the nondelegation doctrine or similar doctrines is therefore highly suspect. Yale law professor David Schleicher has similarly argued that governors and mayors are more identified with policy decisions and more responsive to public opinion than Presidents, making the doctrine less applicable in the states.
Seifter’s ultimate point—that there’s something wrong with a legislature where a majority party wins a minority of votes—is a strong one. If nothing else, it should raise an eyebrow. Most of us generally agree that a candidate with a majority of votes should win an individual race. If that doesn’t “scale up” across the state, things do seem a bit off. But that’s about as far as it goes. The problem with democratic arguments in support of or against the nondelegation doctrine isn’t that legislatures or governors best “capture” the democratic will. It’s that the “will” itself is extremely hard to capture in the first place—and might not exist at all. Whether it’s the legislature or the governor catching butterflies, the ultimate problem is that the butterflies aren’t there.
Democratic fairy tales
For years I’ve given a lecture to law students about Schoolhouse Rock’s “I’m Just a Bill.” For non-Gen Xers who didn’t grow up watching it on Saturday mornings, the three-minute cartoon was an effort to teach kids about how a bill becomes a law. A group of citizens call their Congressman and ask him to try and make school buses stop at railroad crossings. Drama then ensues with a bill dodging death in committees, ping-ponging between houses, fighting its way to the President’s desk, and somehow not suffering a veto. The message is it’s extremely difficult for a bill to become a law, but if the people work together it can happen.
I raise the cartoon in my lecture because it represents how many judges think they’re supposed to believe government works. And also because it’s a fairy tale. Hardly any legislation becomes law because citizens collectively direct their elected representatives to make it so. Most legislation is not driven by voters but by lobbying from interest groups who represent, at best, an extremely small slice of those voters. The vast majority of the electorate are blissfully unaware of most issues their representatives consider, let alone have an informed opinion about them. Instances like citizens desiring buses to stop at railroad crossings and calling to try and pass a law do happen. But they’re a minute part of politics.
The bigger lesson to draw from this well-known reality isn’t just that voters aren’t directly responsible for legislation, but that voters do not even have an opinion on most public policies and most decisions legislators make. And further, at a deeper level, even if individual voters did have their own various opinions, they do not scale up to any “will” of the people.
The nonexistence of a “Will of the People” has been well understood in social science and philosophical circles for years, but that very uncomfortable truth is having a hard time sinking in with the rest of us. It’s no secret: several books have brought these ideas to the non-specialist reader. The Myth of the Rational Voter, a 2007 book by George Mason University economics professor Bryan Caplan, convinces the reader of what the title claims. Another, from 2017, is Democracy for Realists: Why Elections Do Not Produce Responsive Government by political science professors Christopher Achen and Larry Bartels.
Achen and Bartels do not call the standard model of democracy a “fairy tale.” But their term—folk theory—isn’t all that much more flattering. At the core of their critique is something hard to argue with: “the great majority of citizens pay little attention to politics.” Even when they do vote, it’s generally not based on a deep review of facts or theory but rather the current economy and “political loyalties typically acquired in childhood.” Thus, “election outcomes turn out to be largely random events from the viewpoint of contemporary democratic theory.” Citizens don’t have honed political ideologies but belief systems that “are generally thin, disorganized, and ideologically incoherent.”
Public opinion polling can get respondents to give their views on issues of the day, but those views are highly dependent on how the questions are asked. To give just one example, in the mid-1980s about 64% of Americans said the federal government was not spending enough on “assistance to the poor.” However, only 20% to 25% said the same about “welfare.” That’s a roughly 40-percentage-point difference based on terminology alone. Often voters will agree or disagree with a policy stance based solely on being told that a politician they like or dislike holds a certain view. Like the observer effect in physics, measurement itself creates the underlying reality.
Further, voters not only don’t know what they believe, but are ignorant of basic facts that would be important in forming those beliefs. As law professor Ilya Somin details in his 2013 book, Democracy and Political Ignorance: Why Smaller Government Is Smarter, majorities of voters often do not know incredibly basic things like what party is in control of a house of Congress. And more nuanced matters are even worse. For example, one survey found Americans overestimated what we spend on foreign aid by a factor of 25 and that the large majority do not know that Social Security, Medicare/Medicaid, and defense are the three biggest spending items. Thus, not only do voters generally not have coherent views on what the government should do, they do not know the facts necessary to formulate those views.
None of this is to say that voters are “stupid” or “lazy.” It just reflects human nature. Voters are correct that any one individual vote doesn’t make too much of a difference. It’s therefore not worth it to spend hours dispassionately learning the relevant facts and policies. Instead, they are “rationally ignorant.”
But the ironic thing is, even if voters weren’t ideologically incoherent and rationally ignorant it might not give us a democratic “will.” That’s because the very conception of a collective “Will of the People” is shaky on a logical basis. Arrow’s impossibility theorem, which economist Kenneth Arrow announced in his 1950 PhD thesis, is largely unknown outside academia but an extremely important idea. Its bottom line is that it is often impossible to aggregate the ranking preferences of individual voters into a group’s preferences. For instance, an individual voter may rank (1) health care, (2) defense, and (3) education in order of priority, and another may have a different ranking. But Arrow demonstrated that there is no method—literally none—that can aggregate the various individual rankings into a coherent community-wide ranking when there are more than two alternatives without violating some minimal condition of fairness. The theorem rests on certain assumptions, such as that no single voter dictates the outcome and that the group’s preference between two options shouldn’t change when irrelevant options are removed, but those assumptions aren’t unrealistic.
Another, similar idea, and one that’s been known for centuries, is the Condorcet paradox, named after a liberal French aristocrat. Say you have three candidates, Amy, Beth, and Claire. A majority of voters can prefer Amy over Beth, Beth over Claire, and yet also Claire over Amy, leaving no clear preference out of all three. This is because the majorities of each group are composed of different individuals with differing preferences. These individual voters may have understandable views, but collectively they are incoherent.
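The Condorcet cycle is easy to verify by brute force. Here is a minimal sketch in Python; the three ballots are hypothetical, chosen to produce the classic cycle among Amy, Beth, and Claire:

```python
# Three voters, each with a coherent individual ranking.
ballots = [
    ["Amy", "Beth", "Claire"],   # voter 1: Amy > Beth > Claire
    ["Beth", "Claire", "Amy"],   # voter 2: Beth > Claire > Amy
    ["Claire", "Amy", "Beth"],   # voter 3: Claire > Amy > Beth
]

def majority_prefers(a, b, ballots):
    """True if a strict majority of ballots rank candidate a above b."""
    wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    return wins > len(ballots) / 2

# Every pairwise contest produces a 2-1 majority, yet the majorities cycle:
print(majority_prefers("Amy", "Beth", ballots))     # Amy beats Beth
print(majority_prefers("Beth", "Claire", ballots))  # Beth beats Claire
print(majority_prefers("Claire", "Amy", ballots))   # Claire beats Amy
```

Each individual ballot is perfectly rational, yet there is no candidate the "group" prefers to every other: the collective ranking chases its own tail.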
With Arrow and Condorcet in mind, even a simple statement like “Americans believe the government should prioritize health care before defense and defense before education” is highly suspect, at best. And the problem only grows for more complicated matters, such as the details of legislation itself. When it comes to whether “the people” support a specific rule that an administrative agency has come up with via delegated power from the legislature, that is a fatally flawed question even if we assume voters are well informed about the specifics of that policy option. And, of course, they almost always are not.
This leaves us with the following view of “democracy”: Voters have little knowledge of public policy and the facts underlying that public policy. Their policy preferences—such as they exist—are not based on a deep analysis of ideology or facts and even change based upon how questions are asked of them and what others on their “team” support. Even if individual voters had their own coherent views (which, again, most don’t) we could not aggregate those views into a “Will of the People” that elected representatives could actually discover. Really the only thing we can know about the electorate is who gets the most votes at an election. Why did that person win? Even that is extremely difficult to answer. The bottom line? There is no butterfly. Only a mist.
The implications of swallowing this “red pill” can seem unsettling. To name one, how should we structure our elections in light of it? As the examples of Wisconsin in 2018 and the UK in 2005 demonstrate, partisan electoral outcomes can depend greatly on the electoral rules. For example, if the UK had had a pure system of proportional representation in 2005, where voters vote for parties and votes are aggregated nationwide (not by district) and proportionally translated into seats, then Tony Blair would have only had 35% of seats in Parliament and would have had to form a coalition with another party to stay in power. If the same thing happened in Wisconsin then the GOP majority would likely have evaporated, or at least have been much diminished.
But would these alternative systems be any better at actually “discovering” the “Will of the People” and translating that “will” into public policy? Did the British public in 2005 actually “want” Labour to go into coalition with another party and produce whatever mix of policies would have come out of that stew? Did Wisconsin voters in 2018 (who narrowly elected a Democratic governor) in fact “want” a closely divided legislature and the resulting policies? The answer is unknowable. There’s something there, and we can conjure up worse ways to translate individual preferences into a government, such as the old “rotten borough” system in the UK, where a large number of districts had extremely small numbers of voters relative to others. But that doesn’t mean what we think are more “democratic” systems are going to better translate what “the people” want into law. It certainly seems unfair that Democrats received more votes in Wisconsin but won far fewer seats. But less “democratic”? I myself think that’s true, but only in a nominal sense of “democratic,” not the sense we usually mean of “figuring out what the voters want.” When democracy is a mist, the type of net starts to look a lot less relevant.
Even for someone committed to a more “democratic” system, these hard truths of social science are ones that need to be addressed. Yet they have not much reached the judiciary or, by and large, the legal profession. On the bench, before the bar, and even in the legal academy, a belief in fairies and the butterfly fallacy (to mix my metaphors) runs strong, especially in constitutional cases. Judges admit that the Constitution is (as it says) the “Supreme Law of the Land” and that statutes and regulations which contradict the Constitution are void. But like a lot of legal text, the Constitution is susceptible to diverse interpretations. And in most cases judges will do their best to interpret the Constitution to not restrict what the government wants to do. For all the headlines about the Supreme Court “striking down” various laws every year, those are a tiny fraction of the constitutional work that judges face. And one of the primary reasons courts give for exercising this “judicial restraint” is they don’t want to thwart the “Will of the People.”
As my colleague Adam Shelton and I recently tabulated, the Supreme Court and lower courts repeatedly rest upon the existence of a “Will of the People” and related concepts in refusing to hold laws unconstitutional. For example, the Supreme Court has discouraged invalidating laws because it prevents an implementation of something “embodying the will of the people.” It has explained that judicial review can allow a court “to substitute its judgment for the will of the people of a State as expressed in the laws passed by their popularly elected legislatures.” These justifications indicate that, were it not for the people’s “will,” judges would try to simply interpret the law—in this case the Constitution itself—as judges are usually proud to do, independently and impartially. But because we live in a democracy, they must only do so if what the Constitution means is painfully obvious. If there’s doubt, the “collective will” must be followed. The notion that such a thing doesn’t exist or at least is unknowable hasn’t yet pierced the judicial consciousness.
This includes when it comes to the nondelegation doctrine. In most areas of constitutional litigation “democracy” and the Constitution line up on ostensibly different sides. Well-known examples are a free speech dispute where a legislative majority tries to silence an unpopular speaker, or a racist one tries to exclude and suppress a minority group. In the nondelegation context, however, both sides lay claim to democratic values. Justice Gorsuch argues the power to make decisions must stay in Congress so it can better express the democratic will, not unelected bureaucrats in the executive branch. On the other hand, his critics state that if the legislative branch doesn’t even reflect a majority of voters we can hardly say that it expresses a democratic will, and that such a will is more properly embodied by the popularly elected executive branch.
But what if neither side expresses the “Will of the People,” at least on whatever issue has been delegated? On most subjects it’s almost fantastical to believe that “the people” have an opinion, either at that time the statute authorizing the challenged executive action was originally passed or at the time the executive issues its order or regulation. And it’s hard to believe that even if that “will” does exist it was properly translated by the legislature or the executive into law. Both sides are relying on a fiction to justify their result instead of basing their judgment in the law itself, i.e., what the text of the Constitution means in the court’s independent and impartial judgment. This is true for the nondelegation doctrine, but it’s also true for all kinds of other constitutional questions, such as the regulation of economic liberty, our criminal justice system, protections on private property, and even civil liberties such as free speech where, thankfully, courts already are skeptical of government power. Legislatures are not illegitimate and, unless their acts violate the Constitution, we should follow them. But there should be no thumb on the scale against the Constitution because a judge believes the legislature is good at catching the butterflies of public opinion. We are all just wandering through the mist, doing the best we can.
Where does that leave “democracy” itself? Should we be any less committed to it, or should we entertain other options given that it does such a lousy job of translating nonexistent “will” to power?
We should leave democracy exactly where it is. What all this talk about collective “will” forgets is the best reason we have democracy in the first place: It allows us to disagree peacefully and change leaders peacefully. Democracy also seems to lead to better policy outcomes, such as more prosperity, freer markets, more respect for minorities, security in private property, etc. In short, a more liberal society. Most countries like that these days are democracies. Democracy giving better policies than other systems is a much more defensible claim than that it captures the people’s “will.”
But even setting policy aside (whether or not it’s policy that “the people” want), the fact that we don’t change leaders through their violent demise, or through their sons or their generals fighting each other, makes up for all of democracy’s disappointments. Human beings always have disagreements on who should be in power and what they should do with that power. Dynastic succession has proven a terrible way of dealing with those disagreements, and so have various other antidemocratic schemes. On the other hand, free and fair elections work marvelously. However, what’s “free” and what’s “fair” have wide parameters. Just like the people’s will, some things don’t qualify, like ballot box stuffing or English rotten boroughs. But there are many avenues through which the people can choose their elected representatives and feel they have a say in the result. It was true in the UK in 2005. In this sense that election was “democratic.” It led to no bloodshed, and five years later Labour finally lost anyway. But we shouldn’t fool ourselves that it led to a translation of the “Will of the People” into the policies the 2005-10 Parliament enacted.
And this is also true when we go to the polls in the United States. Whether a legislature is unconstitutionally delegating its powers or not, it’s not delegating a “will.” That fairy tale may be told at bedtime or in Saturday morning cartoons, but judges shouldn’t be basing their interpretations of the Constitution on it. And neither should we, when we give thanks for our democracy.