Trading and Procurement Agents
AI systems execute financial trades or make purchasing decisions on behalf of organizations, often operating at speeds and scales that preclude human oversight of individual transactions.
What happens today
Algorithmic trading systems make thousands of decisions per second; AI procurement agents move more slowly but with comparable autonomy. Together they analyze market conditions, evaluate suppliers, negotiate terms, and execute transactions. When these systems cause harm—whether through market manipulation, unfair pricing practices, or simply poor decisions that cost money—the trail of responsibility becomes murky. The firm claims the algorithm acted autonomously. The algorithm's designers say they cannot predict every market condition. Regulators struggle to assign blame.
Where accountability breaks down
Financial markets depend on accountability. When a human trader manipulates the market, they face personal consequences. When an algorithm does the same thing, the consequences are diffused across the organization. This creates a moral hazard: firms can deploy aggressive trading strategies through AI systems while maintaining plausible deniability about the outcomes. The same dynamic applies to procurement, where AI agents might engage in practices that would be clearly unethical if done by a human buyer.
How human-mapped liability would change incentives
Requiring human-mapped liability means that every trade or procurement decision, even if executed by an AI, has a designated human who bears responsibility. This does not mean humans must approve every transaction—that would be impractical. It means that someone must be accountable for the system's overall behavior and for any specific decision that causes harm. This person has strong incentives to ensure the AI operates within ethical and legal bounds, because their personal liability is on the line.
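To make the mechanism concrete, here is a minimal sketch of what human-mapped liability could look like inside a trading or procurement stack. All of the names (AccountabilityRegistry, DecisionRecord, execute_decision) are hypothetical illustrations, not an existing API or a prescribed standard: the point is only that an agent's actions are refused unless a named person has accepted responsibility for that agent, and that every executed decision is logged against that person.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, List


@dataclass(frozen=True)
class DecisionRecord:
    """One AI-initiated trade or purchase, tied to a named accountable human."""
    decision_id: str
    agent_id: str           # the AI system that proposed the action
    accountable_human: str  # the person who bears liability for it
    action: str             # e.g. "BUY 500 XYZ @ 41.20" or "award contract 1187"
    timestamp: datetime


class AccountabilityRegistry:
    """Maps each deployed agent to its designated accountable human and keeps an
    append-only log so any decision can later be traced back to a person."""

    def __init__(self) -> None:
        self._owners: Dict[str, str] = {}
        self._log: List[DecisionRecord] = []

    def register_agent(self, agent_id: str, accountable_human: str) -> None:
        # A human formally accepts responsibility for this agent's behavior.
        self._owners[agent_id] = accountable_human

    def execute_decision(self, decision_id: str, agent_id: str, action: str) -> DecisionRecord:
        # Refuse to act if no human has accepted responsibility for this agent.
        owner = self._owners.get(agent_id)
        if owner is None:
            raise PermissionError(f"No accountable human registered for agent {agent_id!r}")
        record = DecisionRecord(
            decision_id=decision_id,
            agent_id=agent_id,
            accountable_human=owner,
            action=action,
            timestamp=datetime.now(timezone.utc),
        )
        self._log.append(record)  # append-only audit trail for regulators
        return record

    def decisions_owned_by(self, accountable_human: str) -> List[DecisionRecord]:
        """Everything a given person is answerable for, e.g. during an inquiry."""
        return [r for r in self._log if r.accountable_human == accountable_human]


# Hypothetical usage: the designated owner is recorded before the agent can trade.
registry = AccountabilityRegistry()
registry.register_agent("momentum-bot-7", "Jane Ortiz, Head of Algorithmic Trading")
registry.execute_decision("T-20481", "momentum-bot-7", "BUY 500 XYZ @ 41.20")
```

The design choice worth noting is that accountability is enforced at execution time rather than reconstructed afterward: the system cannot act faster than humans can review individual transactions, but it also cannot act at all without a person already on the hook for its overall behavior.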