Autonomous Customer Support Agent
An AI agent handles customer service requests, including issuing refunds, applying credits, and modifying account settings.
What happens today
Alice owns an online retail store. To reduce costs and improve response times, she deploys an autonomous customer support agent that can resolve complaints and interact directly with the company’s billing system. Bob, a customer, contacts support about a delayed shipment. The agent determines that Bob is eligible for a refund. But instead of issuing a refund, the agent mistakenly submits a charge request using Bob’s card on file and bills him $5,000. No human reviewed the decision, approved the transaction, or even saw it happen in real time. Bob notices the charge only after it clears. Alice is caught off guard: the agent behaved incorrectly, yet no individual explicitly authorized the transaction.
Where accountability breaks down
Under current law, responsibility is fragmented rather than clear. From Bob’s perspective, consumer protection laws such as Section 5 of the Federal Trade Commission Act prohibit unfair or deceptive practices, but they do not specify how accountability should be assigned when an autonomous system, rather than a human decision-maker, initiates the transaction. State unfair and deceptive acts and practices statutes (UDAP laws) similarly focus on outcomes, not on attribution of automated actions.

From Alice’s perspective, the AI vendor’s terms of service typically disclaim liability for downstream business decisions. Payment processors rely on Alice’s representations that charges are authorized, even if the authorization was generated by software. Internally, no employee approved the charge, so responsibility is diffuse across the organization.

The result is an accountability gap. Bob has a charge but no clear human decision-maker to appeal to. Alice bears reputational and financial risk without having designed her systems to clearly control or audit every autonomous action. No single person is clearly responsible for preventing the error or ensuring it cannot happen again.
How API Liability changes incentives
Under an API Liability framework, Alice’s company would be required to designate a specific Responsible Individual for customer billing actions, such as the Head of Customer Success. Every API request that issues a refund or submits a charge would be required to include a signed header identifying that Responsible Individual as the accountable party for the action.

Knowing that they are personally responsible for financial transactions executed by the system, the Head of Customer Success would not allow the autonomous agent to submit billing API calls directly. Instead, the system would be redesigned so that the agent can recommend an action, but only Bob himself or a store clerk can execute the charge or refund by clicking an explicit confirmation button. The API call would then be submitted under a clearly authorized human identity, with a complete audit trail.

The incentive shift is immediate and practical. Rather than trusting the agent with direct financial authority, the organization designs safer workflows by default. Bob gains clarity about who is responsible. Alice reduces systemic risk. Accountability is restored without banning autonomy or slowing innovation.
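To make the mechanism concrete, here is a minimal sketch of how a signed accountability header might work. Everything in it is an assumption for illustration: the header names, the HMAC-SHA256 signing scheme, and the idea of a per-individual signing key are hypothetical design choices, not part of any existing standard or the framework's prescribed implementation. The point it demonstrates is that a billing endpoint can mechanically refuse any charge or refund that does not carry a valid signature naming a specific human.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key provisioned to the designated Responsible
# Individual (in practice this would come from a key-management system).
SIGNING_KEY = b"head-of-customer-success-secret"

def sign_request(body: dict, responsible_individual: str) -> dict:
    """Produce headers binding a named human to this specific request body."""
    payload = json.dumps(body, sort_keys=True).encode()
    timestamp = str(int(time.time()))
    message = payload + timestamp.encode() + responsible_individual.encode()
    signature = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return {
        "X-Responsible-Individual": responsible_individual,  # illustrative name
        "X-Signature-Timestamp": timestamp,
        "X-Signature": signature,
    }

def verify_request(body: dict, headers: dict) -> bool:
    """Server-side gate: reject billing calls lacking a valid signed header."""
    try:
        payload = json.dumps(body, sort_keys=True).encode()
        message = (
            payload
            + headers["X-Signature-Timestamp"].encode()
            + headers["X-Responsible-Individual"].encode()
        )
        expected = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, headers["X-Signature"])
    except KeyError:
        # No accountability header at all: the action is refused outright.
        return False

# A charge the agent tries to submit directly, with no signed header,
# never reaches the billing system:
charge = {"action": "charge", "customer": "bob", "amount_cents": 500000}
assert verify_request(charge, {}) is False

# The same request executed under an authorized human identity passes,
# and the header contents become part of the audit trail:
headers = sign_request(charge, "head.of.customer.success@example.com")
assert verify_request(charge, headers) is True
```

Because the signature covers the request body, the named individual is accountable for that exact transaction, not a blanket delegation; the verified headers can be logged alongside the charge to produce the audit trail the framework envisions.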