Governing by Algorithm: What Smart Cities Are Teaching Us About Power in Liberal Democracies

The automation of governance is the latest evolution of a long-standing tension between democratic responsiveness and technocratic executive power.

Across the world, city governments are adopting systems that promise to make urban life smarter. Sensors regulate traffic. Algorithms assign police patrols. Software flags welfare applicants. Cameras recognize faces. To advocates, these technologies offer efficiency, safety, and objectivity. To critics, they threaten privacy and freedom. But the political meaning of smart cities is more subtle than either side usually admits. What is at stake is not simply a clash between technology and democracy. It is the evolution of a long-standing tension inside liberal democracy itself: the tension between democratic responsiveness and technocratic executive power.

Liberal democracies have always governed through bureaucracy. Elections choose representatives, but representatives rely on administrative agencies to implement policy. This division is necessary. No modern state could function without expertise, rules, and professional management. Yet it also creates a structural problem. Much of what shapes citizens’ daily lives happens far from public deliberation. Decisions are made by caseworkers, planners, police departments, and regulatory bodies whose authority is only indirectly accountable to voters.

Over time, liberal states have tried to contain this tension. They introduced transparency requirements, judicial review, ombuds offices, public comment procedures, and freedom-of-information laws. These mechanisms do not eliminate technocratic power, but they keep it visible and contestable. They allow citizens to understand how authority is exercised and, at least in principle, to challenge it.

Smart city technologies enter this already existing institutional landscape. They do not replace democratic governments with machines; rather, they automate specific layers of the administrative state. It is important to distinguish these tools, typically built on predictive analytics and optimization loops, from the Large Language Models (LLMs) that currently monopolize the term “AI.” While LLMs simulate conversation, smart city systems operate as a background architecture, pushing executive power further away from democratic responsiveness while preserving the outward forms of liberal rule.

To see why this matters, we must distinguish between two kinds of automation. First, there are cases where systems directly displace or filter out citizen feedback. Second, there are cases where automation replaces processes that were already insulated from public input. Both matter politically, but they matter in different ways.

In the first category are systems that intervene in how citizens are heard. These include complaint platforms that automatically triage reports, welfare systems that algorithmically screen applicants, school or housing tools that rank “risk,” and policing technologies that decide which neighborhoods deserve attention. In these cases, AI does not merely manage resources. It stands between citizens and the state.

Where residents once encountered a human administrator, someone who could at least in theory listen or explain, they now encounter a score, a queue, or an automated denial. Critics of this shift mourn the loss of “discretion,” but it is worth asking whether the previous human systems were truly responsive, or whether they simply functioned as human-faced gatekeepers of a queue that many citizens lacked the time or resources to navigate. The “discretion” of a human official was often as opaque as any algorithm. The shift to a digital system does, however, change the nature of the evidence: it replaces the elusive “why” of a human decision-maker with a digital trail that makes the logic of the decision a matter of record. But it simultaneously transforms the interaction into a one-way flow. Feedback becomes data, and the citizen’s voice becomes a mere input into a system whose “literal logic” is preserved in code yet remains practically inaccessible to those it governs.

Here, the loss is not just transparency. It is responsiveness. Citizens can no longer tell whether anyone is actually listening. They cannot easily understand why a decision was made. And they often have no meaningful way to challenge it.

In the second category are systems that replace already-insulated bureaucratic routines. These include predictive tools that allocate patrol routes, software that schedules inspections, and programs that optimize maintenance. In these domains, there was rarely robust citizen input to begin with; decisions were made by managers using a mix of statistics and professional judgment. When these narrow AI systems, quite distinct from the conversational fluency of LLMs, replace those processes, they cannot be said to eliminate a democratic participation that never truly existed.

However, the shift still fundamentally alters the structure of power by replacing human discretion with computational discretion. Algorithms encode assumptions about risk and priority into a “literal logic” that is preserved in code. While a manager’s “gut feeling” or professional judgment was often an undocumented black box, the algorithmic model provides a digital trail: a matter of record that is theoretically more transparent than a human mind, yet practically harder for the average citizen to trace without specialized technical literacy. Authority shifts from accountable officials to technical infrastructures, where governance is no longer recorded in meeting minutes but in version-controlled scripts.
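The point can be made concrete. The sketch below is purely illustrative and drawn from no real city's software; the function, its weights, and its thresholds are all invented for this example. It shows how a hypothetical inspection-priority score hard-codes normative judgments as ordinary-looking tuning parameters:

```python
# Illustrative only: a hypothetical scoring routine of the kind a vendor
# might ship to a city. Every constant below is a normative judgment
# (whose risk counts, and how much) dressed up as a tuning parameter.

def inspection_priority(complaints: int,
                        years_since_inspection: float,
                        neighborhood_income: float) -> float:
    """Return a priority score; higher means inspected sooner."""
    # Weighting complaints decides that squeaky wheels get served first.
    score = 2.0 * complaints
    # Weighting elapsed time decides how much neglect is tolerable.
    score += 1.5 * years_since_inspection
    # An income term quietly ranks neighborhoods -- a political choice
    # that appears in no meeting minutes, only in this line of code.
    score += 0.5 * (50_000 / max(neighborhood_income, 1))
    return score

# Two buildings with identical complaint and inspection histories are
# ranked differently purely because of where they sit.
poor = inspection_priority(3, 2.0, neighborhood_income=30_000)
rich = inspection_priority(3, 2.0, neighborhood_income=120_000)
assert poor > rich
```

None of these choices is hidden, exactly: the logic is a matter of record in the source file. But whether that record is ever read, and by whom, is the transparency question the essay raises.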

The political danger of smart cities, then, is not uniform. It is not that technology suddenly overthrows democracy. It is that it deepens and stabilizes an existing technocratic drift, moving executive authority further from spaces where citizens can see, question, and contest it.

This drift is especially evident in the executive branch. Legislatures set broad goals, but mayors, agencies, and departments implement them. Smart city technologies expand the implementation layer. City executives adopt systems through procurement contracts, pilot programs, and partnerships with vendors. Policy is increasingly made through infrastructure.

This is not a legislative revolution. It is an administrative evolution. Governance changes not because new laws are debated, but because new tools are installed.

Smart city rhetoric presents these tools as neutral. Algorithms are said to remove bias, improve efficiency, and optimize outcomes. But neutrality is itself a political claim. Every model reflects judgments about what counts as a problem, which data matter, and whose risks deserve priority. When a city installs facial recognition in public housing, it is choosing security over privacy. When it deploys predictive analytics in schools, it is deciding that future behavior should be managed in advance.

These are normative choices. Yet they are increasingly made in technical language, inside executive agencies, through contracts and code rather than public debate.

The result is a subtle transformation of consent. In liberal theory, consent flows from citizens to representatives to administrators. In smart cities, consent becomes passive. By walking through public space, applying for services, or living in a monitored neighborhood, residents are treated as having agreed to algorithmic governance. Participation is assumed, not chosen.

This is less consent than acquiescence.

Some defenders argue that smart cities simply optimize existing systems. If traffic improves and crime declines, why worry about democratic theory? The answer is that democracy is not only about outcomes. It is about processes. It is about whether citizens can understand and influence the rules under which they live.

When governance becomes technical rather than political, that influence weakens. Decisions are framed as matters of efficiency rather than values. Disagreement is treated as noise rather than deliberation.

Importantly, not all automation is equally troubling. When AI replaces administrative tasks that were never sites of democratic engagement, the stakes are lower. But when systems shape who gets heard, who gets helped, and who gets watched, they alter the channels through which citizens encounter the state.

The danger is not that machines rule us. It is that the executive branch governs through machines while remaining formally democratic and substantively unresponsive.

This produces a distinctly liberal dilemma. Liberalism values both individual freedom and effective administration. Smart cities promise order and efficiency, but they also threaten pluralism and contestation. They reward predictability over participation.

In theory, smart governance could enhance democracy. Data could reveal inequality. Technology could support deliberation. Digital tools could expand access. But in practice, most smart city projects are not designed around democratic renewal. They are designed around management.

The deeper issue is not technological. It is institutional. Liberal democracies have long struggled to align administrative power with popular control. Smart cities magnify that struggle. They embed authority into infrastructures that are difficult to see and harder to challenge.

If liberal democracy is to survive this transformation, it must reassert political control over technical systems. That means transparency not only about data, but about decision logic. It means treating procurement as a democratic act. It means giving citizens meaningful ways to contest automated judgments.

Above all, it means resisting the idea that governance can be neutralized into code. Liberal democracy depends on disagreement. It depends on the messy, frustrating work of public reasoning.

Algorithmic governance is not a coup. It is an evolution. But evolutions shape what comes next. If liberal democracies allow the executive branch to automate itself beyond public reach, they will not become tyrannies overnight. They will become something quieter: systems that still hold elections, still speak the language of freedom, but increasingly govern without the people.

The task is not to reject technology. It is to remember that in a democracy, even the smartest city must still answer to its citizens.


Featured image is Datatron-205 Computer

Liberal Currents LLC ©. All rights reserved.