Legislative Proposal
This page will host the full text of our proposed legislative framework for human accountability in automated systems. The document is currently being drafted in consultation with legal experts and policy researchers.
About This Document
The proposal establishes a framework for assigning legal liability when autonomous systems take actions with real-world consequences. It addresses questions such as: Who is responsible when an AI agent makes a harmful decision? How should liability be allocated across developers, operators, and deployers? What documentation and audit requirements should apply?
Our goal is to create legislation that is technically informed, practically implementable, and consistent with existing legal principles around agency and liability.
Draft Proposal Document
Document Preview
The full legislative proposal will be embedded here once the draft is complete.
Expected format: PDF document with section-by-section analysis
Feedback Welcome
This proposal is evolving and open to feedback. We are particularly interested in input from legal scholars, technologists who build autonomous systems, and policymakers who understand the legislative process.
If you have expertise in relevant areas and would like to contribute to the drafting process, please subscribe to our mailing list on the home page to receive updates and opportunities to participate.
Key Principles
The proposal is built around several core principles that guide its structure and recommendations.
Clear attribution: Every automated action must be traceable to a specific human who bears legal responsibility for it (see the sketch after this list).
Proportional liability: Responsibility should be allocated according to the degree of control each party exercises over the system and the benefit each derives from it.
Practical compliance: Requirements should be implementable with existing technology and reasonable operational overhead.
Technology neutrality: The framework should apply regardless of the specific AI technology used, focusing on outcomes rather than methods.
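To make the clear-attribution principle concrete, here is a minimal sketch of what a compliant audit record might look like in practice. It is purely illustrative: the field names, the three roles, and the validation rule are our assumptions for this example, not terms defined in the proposal.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch only: field names and role categories are
# assumptions made for this example, not language from the proposal.

@dataclass(frozen=True)
class AttributionRecord:
    """One audit-log entry tying an automated action to a human."""
    action_id: str          # unique identifier for the automated action
    occurred_at: datetime   # when the system took the action
    system_id: str          # which autonomous system acted
    action_summary: str     # human-readable description of the action
    responsible_party: str  # the specific human who bears legal responsibility
    party_role: str         # e.g. "developer", "operator", or "deployer"

def validate(record: AttributionRecord) -> None:
    """Reject records that fail the clear-attribution principle."""
    if not record.responsible_party.strip():
        raise ValueError(f"action {record.action_id} has no responsible human")
    if record.party_role not in {"developer", "operator", "deployer"}:
        raise ValueError(f"unrecognized role: {record.party_role}")

# Hypothetical example: logging a loan denial by an automated underwriter.
record = AttributionRecord(
    action_id="act-2024-000123",
    occurred_at=datetime.now(timezone.utc),
    system_id="underwriting-agent-v2",
    action_summary="Denied loan application #4471",
    responsible_party="jane.doe@lender.example",
    party_role="operator",
)
validate(record)
```

A structure along these lines suggests why the practical-compliance principle is plausible: attribution of this kind can be produced with ordinary logging infrastructure rather than new technology.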