Accountability

Could you explain to a regulator what your agent did and why?

The EU AI Act requires traceability, risk management, and human oversight for high-risk AI systems. But agents break the Act’s core assumption: that you know the use case at build time.

Give a general-purpose office assistant “handle my inbox” and it might draft an email (minimal risk), then screen a job application (high-risk), then assess a customer complaint (potentially high-risk). The risk tier depends on how open-ended the prompt is — and the agent’s use case emerges at runtime, not at build time.

This means generic agents default to high-risk classification unless you explicitly exclude high-risk uses. If an agent autonomously wanders into a high-risk use case, can you reconstruct the chain of decisions that got it there? Can you show what authority existed, what information it acted on, and why it made the calls it made?
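Reconstructing that chain presupposes that each action was recorded with its risk tier, authority, and rationale at the moment it happened. A minimal sketch of what such an append-only audit record might look like (the schema, field names, and example session are illustrative assumptions, not a prescribed standard):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """One entry in an append-only audit trail (illustrative schema)."""
    action: str          # what the agent did, e.g. "draft_email"
    risk_tier: str       # tier assigned at runtime, e.g. "minimal" or "high"
    authority: str       # who or what authorised the action
    inputs_summary: str  # what information the action was based on
    rationale: str       # why the agent made this call
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AgentActionRecord] = []

def record(action, risk_tier, authority, inputs_summary, rationale):
    entry = AgentActionRecord(action, risk_tier, authority, inputs_summary, rationale)
    audit_log.append(entry)
    return entry

# The "handle my inbox" session from above, replayed as audit entries:
record("draft_email", "minimal", "user:standing-delegation",
       "inbox thread", "routine reply to a scheduling request")
record("screen_job_application", "high", "user:standing-delegation",
       "attached CV", "inbox contained an unsolicited application")

# Reconstructing how the agent ended up in high-risk territory:
high_risk_trail = [asdict(e) for e in audit_log if e.risk_tier == "high"]
```

The point of the sketch is that the reconstruction question becomes a query over the log only if authority, inputs, and rationale were captured per action, not per deployment.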

None of the Act’s requirements work without infrastructure: governance (who can build and deploy agents), audit (what did the agent do and why), and authorisation (what is this agent allowed to do right now, in this context).
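The authorisation leg of that infrastructure has to be evaluated per action, because the use case emerges at runtime. A minimal sketch of a deny-by-default gate (the agent ID, task names, and grant list are hypothetical examples):

```python
# Hypothetical grant list: which task categories each agent may perform.
ALLOWED: dict[str, set[str]] = {
    "office-assistant": {"draft_email", "summarise_document"},
}

# Task categories treated as high-risk under the deployment's own mapping.
HIGH_RISK: set[str] = {"screen_job_application", "assess_customer_complaint"}

def authorise(agent_id: str, task: str) -> tuple[bool, str]:
    """Deny-by-default check evaluated before every action, not at deploy time."""
    grants = ALLOWED.get(agent_id, set())
    if task in HIGH_RISK and task not in grants:
        return False, f"{task} is high-risk and not explicitly granted to {agent_id}"
    if task not in grants:
        return False, f"{task} is not in {agent_id}'s grant list"
    return True, "permitted"
```

With this shape, "handle my inbox" can still draft the email, but the job-application screening is refused at the moment it is attempted rather than discovered in an audit afterwards.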

Go deeper: AI Agents and the EU AI Act: Risk That Won’t Sit Still maps how the Act applies to agents and where the gaps are.

See where your organisation stands on this question.

Assess with the Agent Profiler →