
What happens when an agent wanders into a use case you didn't anticipate?

Give a general-purpose office assistant “handle my inbox” and watch what happens. It drafts an email — minimal risk. It screens a job application — high-risk under the EU AI Act. It assesses a customer complaint — potentially high-risk. One prompt, three different risk tiers, and the agent decided which was which at runtime.
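The split can be made concrete with a toy sketch. Everything below is hypothetical (the task names, the tier lookup, the dispatch loop are illustrative, not a real compliance tool): it shows how one inbox prompt fans out into sub-tasks whose risk tier only becomes knowable once the agent has chosen the task at runtime.

```python
# Hypothetical mapping from sub-task type to EU AI Act risk tier.
# "screen_job_application" lands in the high-risk tier because
# employment-related screening is an Annex III use case.
RISK_TIERS = {
    "draft_email": "minimal",
    "screen_job_application": "high",
    "assess_customer_complaint": "potentially high",
}


def classify_task(task_type: str) -> str:
    """Look up the risk tier for a sub-task the agent selected at runtime."""
    return RISK_TIERS.get(task_type, "unknown")


# One prompt ("handle my inbox") decomposes into three sub-tasks,
# each carrying a different tier -- and the agent, not the builder,
# decided which sub-tasks to run.
for task in ["draft_email", "screen_job_application", "assess_customer_complaint"]:
    print(task, "->", classify_task(task))
```

The point of the sketch is the shape of the problem, not the lookup table: at build time nobody wrote "this is a high-risk system", yet one of the runtime branches is.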

This is what makes agents fundamentally different from traditional AI systems: the use case emerges from the prompt, not from the build. You cannot classify the tool at build time when the agent’s scope is open-ended.

The practical consequence: the more general-purpose an agent is, the harder it is to anticipate what it will do. Either you constrain the scope explicitly (limiting what the agent can access and act on), or you accept that any sufficiently open-ended agent will eventually wander into territory you didn’t plan for.
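The first option, constraining scope explicitly, can be sketched as a fail-closed tool allowlist. This is a minimal illustration under assumed names (`ALLOWED_TOOLS`, `invoke_tool`, `ScopeError` are all hypothetical), not a reference implementation: the agent may only invoke tools it was declared to use, so it cannot quietly wander into a higher-risk use case.

```python
# Hypothetical declared scope: the agent can read mail and draft replies,
# nothing else.
ALLOWED_TOOLS = {"read_email", "draft_reply"}


class ScopeError(Exception):
    """Raised when the agent requests a tool outside its declared scope."""


def invoke_tool(tool_name: str, payload: str) -> str:
    """Gate every tool call against the allowlist; fail closed on anything else."""
    if tool_name not in ALLOWED_TOOLS:
        # Refusing is the safe default: an unanticipated tool call is
        # exactly the "territory you didn't plan for".
        raise ScopeError(f"tool '{tool_name}' is outside the agent's declared scope")
    return f"{tool_name} executed"
```

With this guard in place, `invoke_tool("draft_reply", ...)` succeeds, while a runtime attempt at `invoke_tool("screen_job_application", ...)` raises `ScopeError` instead of silently crossing into a high-risk tier. The trade-off in the paragraph above is visible in the code: widening `ALLOWED_TOOLS` is exactly the move from the first option to the second.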

Go deeper: AI Agents and the EU AI Act explores the “multi-purpose problem” — why generic agents default to the highest risk tier.

See where your organisation stands on this question.

Assess with the Agent Profiler →