API Liability Framework

A policy proposal for human accountability in autonomous systems

Every Automated Action Should Have a Human Accountable

As autonomous systems increasingly act on our behalf, we need clear rules about who is responsible when things go wrong. This proposal ensures that every API call with real-world effects maps to a specific, liable human being.

The Problem

Software systems are increasingly autonomous. AI agents book flights, execute trades, issue refunds, and manage infrastructure without direct human oversight. When these systems cause harm, determining who is responsible has become genuinely difficult.

Current legal frameworks were designed for a world where humans made decisions and machines executed them. That assumption no longer holds. The result is a growing accountability gap: systems act, consequences follow, but no one is clearly responsible.

The Proposal

We propose a simple principle: every automated action that has real-world effects must be legally attributable to a specific human individual. This does not mean humans must approve every action in advance. It means that when an autonomous system acts, there must be a designated person who bears responsibility for that action.

This could be the developer who deployed the system, the operator who configured it, or the executive who authorized its use. The specific allocation can vary by context. What matters is that the chain of accountability never breaks.

Why This Preserves Innovation

This proposal does not slow down AI development or restrict what autonomous systems can do. It simply ensures that the people who benefit from deploying these systems also bear the risks. This is how liability has always worked for other technologies.

Clear accountability rules actually help innovation by providing legal certainty. Builders can deploy autonomous systems knowing exactly what their obligations are. Users can trust these systems knowing that someone stands behind them. Markets function better when responsibility is clear.

Who This Is For

This proposal is relevant to policymakers drafting AI governance frameworks, technologists building autonomous systems, and anyone affected by automated decisions. It offers a practical framework that works within existing legal traditions while addressing genuinely new challenges.

We are not anti-technology. We believe AI systems will continue to become more capable and more autonomous. The question is not whether to allow this, but how to ensure it happens responsibly.
