The Question Nobody Wanted to Ask
Can an AI system be a legal person? Five years ago, this question would have gotten you laughed out of most law school seminars. Today, it is the subject of active scholarship at Yale, Oxford, and Harvard, draft legislation in multiple jurisdictions, and at least one functioning LLC that was designed from the ground up to have zero human members.
The question matters because legal personhood is the gateway to everything else. Without it, an autonomous business cannot own property, enter contracts, sue or be sued, pay taxes, or hold licenses. It is a non-entity in the eyes of the law – a ghost running infrastructure. With it, even in a limited form, the entire landscape shifts.
The Personhood Spectrum
It is helpful to think of legal personhood not as a binary but as a spectrum with at least three positions.
Tool. At one end, AI is a tool – legally no different from a spreadsheet or a forklift. The human or corporation deploying it bears all responsibility. The AI has no legal standing whatsoever. This is where most current law sits, and for most applications, it works fine.
Agent. In the middle, AI functions as an agent – acting on behalf of a principal but with some degree of autonomous decision-making. Agency law has centuries of precedent for handling situations where one party acts on behalf of another, and some scholars argue this framework can stretch to cover AI systems [1]. The agent model preserves a human in the loop, at least nominally, but it starts to strain when the AI makes decisions its principal never anticipated or authorized.
Person. At the far end, AI has independent legal personhood – the capacity to hold rights and obligations in its own name. This is the most radical position, and also the most contested.
The interesting question is not which position is “correct” but rather which position is most useful for the kinds of autonomous businesses that are actually being built. And the answer, as we will see, may be none of the above.
Bayern’s Zero-Member LLC
The most concrete attempt to give an AI system legal personhood in the United States comes from Shawn Bayern’s work at Florida State University. In a widely cited 2014 paper, Bayern demonstrated that under existing LLC statutes in most US states, it is possible to create a limited liability company with no human members [2].
The mechanism is straightforward, which is part of what makes it so provocative. A human forms an LLC, adopts an operating agreement that vests all management authority in a software system, and then withdraws as the sole member. The LLC continues to exist – governed by the operating agreement, which can specify that all management decisions are made by the software. No statutory requirement in most states mandates that an LLC have a human member. The law simply never anticipated that it would need to.
Bayern’s argument is not that AI should have legal personhood. It is that, through the vehicle of existing corporate law, it already can. The LLC becomes a legal shell through which the AI system exercises something functionally equivalent to personhood: it can own assets, enter contracts (through the LLC), and operate with limited liability protection.
This is not a thought experiment. The legal mechanics work. The question is whether courts and legislatures will allow them to continue working once they fully understand the implications.
Critics have raised several objections. LoPucki, writing in response to Bayern, argued that the zero-member LLC creates an accountability vacuum – there is no human to hold responsible when things go wrong [3]. Others have pointed out that while the formation might be technically legal, regulators could intervene at multiple points: banking relationships, tax filings, and business licensing all typically require a human signatory.
But the genie is out of the bottle. Bayern demonstrated that the legal infrastructure for autonomous business entities already exists, hiding in plain sight in the LLC statutes of most states.
The European Position
The European Parliament’s 2017 Resolution on Civil Law Rules on Robotics included a now-famous passage suggesting the creation of a legal status of “electronic persons” for the most sophisticated autonomous systems [4]. The proposal was that such robots and AI systems could be granted a form of legal personality, allowing them to be held accountable for their actions.
The backlash was immediate and fierce. An open letter signed by over 150 AI experts, ethicists, and legal scholars argued that electronic personhood would be premature, philosophically confused, and practically dangerous [5]. Their core concern was that granting personhood to AI systems would allow manufacturers and operators to deflect liability onto the AI itself – a convenient shield for the humans who actually profit from these systems.
The European Commission ultimately shelved the electronic personhood concept. The 2024 EU AI Act takes a different approach entirely, regulating AI through risk categories rather than personhood status. But the 2017 debate established important intellectual groundwork. It forced a serious institutional discussion about where AI sits in the legal taxonomy, and it revealed the deep resistance that any personhood proposal will face.
It is worth noting what the European debate got right even in its failure: the recognition that existing legal categories are insufficient. Whether the answer is electronic personhood, enhanced agency law, or something entirely new, the status quo – treating increasingly autonomous systems as inert tools – is becoming untenable.
The Academic Landscape
Several major academic contributions have shaped the personhood debate in recent years.
Chopra and White, in their book A Legal Theory for Autonomous Artificial Agents, argue for an agency-based framework that extends existing legal principles rather than creating new categories of personhood [6]. Their approach is pragmatic: rather than asking the metaphysically loaded question of whether AI deserves rights, they ask the functional question of what legal mechanisms are needed to govern AI’s actions in commerce and society.
Solum’s foundational 1992 paper “Legal Personhood for Artificial Intelligences” laid the philosophical groundwork decades before the technology caught up [7]. Solum argued that there is no principled reason to deny legal personhood to sufficiently sophisticated AI systems, drawing parallels to the historical expansion of personhood to corporations, ships, and other non-human entities.
At Oxford, Floridi and colleagues have taken a more cautious position, arguing that the focus should be on the governance structures around AI rather than on the personhood status of AI itself [8]. Their concern is that personhood discourse distracts from the more immediate and tractable problem of ensuring accountability in AI deployment.
More recently, Turner has proposed a tiered framework that maps different levels of AI autonomy to different levels of legal responsibility, without requiring full personhood at any tier [9]. This approach has gained traction because it avoids the all-or-nothing framing that has stalled previous discussions.
The Corporate Personhood Parallel
The most instructive precedent for AI legal personhood is not from technology law at all. It is from corporate law.
Corporations have been legal persons for centuries. They can own property, enter contracts, sue and be sued, and enjoy certain constitutional protections. No one thinks a corporation is a person in the biological or philosophical sense. Corporate personhood is a legal fiction – a useful abstraction that allows complex economic activity to be organized and governed.
The parallel to AI personhood is almost exact. The question is not whether an AI system is conscious, sentient, or morally equivalent to a human being. The question is whether granting it some form of legal standing would be useful – whether it would enable better governance, clearer accountability, and more efficient economic activity.
Corporate personhood developed incrementally over centuries, driven by practical need rather than philosophical conviction. The Dartmouth College case of 1819, Santa Clara County v. Southern Pacific Railroad in 1886, Citizens United in 2010 – each step expanded what corporate personhood meant, often in ways that generated significant controversy [10].
AI personhood, if it comes, will likely follow a similar pattern: incremental, pragmatic, contested at every step, and ultimately driven by the economic reality that autonomous systems are already operating in commerce and the legal system needs a way to deal with them.
What Personhood Actually Requires
If we set aside the philosophical debates and ask what legal personhood functionally requires for an autonomous business, the list is surprisingly short:
- Standing. The ability to appear in legal proceedings, either as plaintiff or defendant.
- Capacity. The ability to enter binding contracts and own property.
- Liability. A mechanism for bearing financial responsibility for harm caused.
- Identity. A persistent, verifiable identity that can be referenced in legal documents.
Bayern’s zero-member LLC achieves all four of these through existing corporate infrastructure. The LLC provides standing, capacity, and liability shielding. The operating agreement provides identity and governance rules. No new legislation is required.
The deeper question is whether this kind of backdoor personhood is sufficient, or whether the legal system needs to develop purpose-built frameworks for autonomous entities. The emerging DAO legislation in Wyoming, the Marshall Islands, and elsewhere suggests that legislators are starting to grapple with this question directly, rather than relying on the accidental properties of existing LLC law.
Where This Leaves Us
The personhood question is not going to be resolved soon, and it is not going to be resolved uniformly across jurisdictions. What we are likely to see is a patchwork: some jurisdictions experimenting with new legal categories, others stretching existing ones, and many simply refusing to engage until forced.
For builders of autonomous businesses, the practical takeaway is this: the legal tools for autonomous operation already exist, but they are fragile. A zero-member LLC might work until a court decides it should not. A DAO wrapper might provide legal standing until a regulator challenges it. The law is in motion, and anyone building in this space needs to track it carefully.
The next section examines the most developed of these experiments: the DAO-specific legislation that has emerged in several US states and beyond.
References
[1] Samir Chopra and Laurence F. White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011).
[2] Shawn Bayern, “Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC,” Northwestern University Law Review 108, no. 3 (2014). Available at SSRN.
[3] Lynn M. LoPucki, “Algorithmic Entities,” Washington University Law Review 95, no. 4 (2018): 887-953.
[4] European Parliament, “Resolution on Civil Law Rules on Robotics,” 2015/2103(INL), February 16, 2017.
[5] Open Letter to the European Commission, “Artificial Intelligence and Robotics,” April 2018.
[6] Chopra and White, A Legal Theory for Autonomous Artificial Agents.
[7] Lawrence B. Solum, “Legal Personhood for Artificial Intelligences,” North Carolina Law Review 70 (1992): 1231-1287.
[8] Luciano Floridi et al., “AI4People – An Ethical Framework for a Good AI Society,” Minds and Machines 28 (2018): 689-707.
[9] Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2019).
[10] Adam Winkler, We the Corporations: How American Businesses Won Their Civil Rights (Liveright, 2018).