Europe has crossed a new threshold by allowing an AI agent to perform a regulated financial transaction.

Santander and Mastercard have completed the region’s first live end-to-end AI-executed payment, and the significance goes beyond the technology itself. This was not a simple API trigger but an autonomous system acting with delegated authority, and it marks a defining moment: AI agents are becoming economic actors.

With payments likely to be the first proving ground for a broader agentic economy, CISOs and risk leaders must rethink how they govern AI agents, starting with three key questions.

What does it mean when AI agents transact under delegated authority and become economic actors?

When an AI agent transacts under delegated authority, it moves beyond generating information and begins acting within economic systems on behalf of a person or organisation. The agent is authorised to perform actions such as purchasing services, executing payments, negotiating terms, or committing resources. At that point it becomes an economic actor: a participant in economic activity rather than simply an interface to it.

The real change is not the technology itself, but the delegation of agency. Once software is authorised to act within economic systems, the central questions become ones of governance and accountability rather than traditional cybersecurity. Who authorised the action? What limits applied to that authority? And how can we demonstrate that the agent acted within its intended scope?

Similarly, understanding the difference between user intent and agent intent becomes critical, especially when it comes to accountability for financial transactions. As agents enter areas such as commerce, procurement, and financial services, these questions become foundational: the agent is no longer simply an interface to a system but an active participant within it.

Why must permissions and governance travel with the agent rather than sit in static policy documents?

Traditional governance assumes relatively static systems. Policies are written, controls are configured, and access rights are granted within defined boundaries. Agents operate very differently. They move across services, call external tools, reuse context, and interact with multiple systems within a single workflow. Governance that exists only as a static configuration or policy document quickly becomes disconnected from how the agent actually behaves.

For delegated authority to work safely, it cannot exist only as a static permission inside one platform. The rules that define what the agent is allowed to do need to travel with it as it moves across tools, services, and markets. In practical terms, that means spending limits, transaction scopes, approved counterparties, data access boundaries, and contextual constraints on when and why the agent is allowed to act. Today, these rules typically take the form of system prompts or design guardrails.
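
To make that concrete, here is a minimal, hypothetical sketch of what such a portable policy might look like as a data structure. Every field name is an illustrative assumption, not any real platform’s or card scheme’s specification:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a delegated-authority policy designed to travel
# with the agent across tools and services. All field names are
# illustrative assumptions, not a real schema.
@dataclass
class DelegatedAuthority:
    principal: str             # who delegated the authority
    agent_id: str              # the agent acting under it
    spend_limit_per_tx: float  # ceiling on any single transaction
    spend_limit_total: float   # cumulative spend ceiling
    allowed_scopes: set[str] = field(default_factory=set)           # e.g. {"payments:send"}
    approved_counterparties: set[str] = field(default_factory=set)  # who the agent may pay
    data_boundaries: set[str] = field(default_factory=set)          # datasets it may read
    valid_context: str = ""    # when and why the agent is allowed to act
```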

This is where contextual governance becomes essential. An agent deciding whether to complete a purchase or execute a payment should not rely only on a credential that proves it has access. It also needs policy context that defines the conditions under which that action is appropriate.
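
Building on that hypothetical structure, the dual check might be sketched as follows. The logic is deliberately simplistic and illustrative; a production system would evaluate far richer context than a handful of field comparisons:

```python
def may_execute_payment(auth: DelegatedAuthority, agent_id: str,
                        counterparty: str, amount: float,
                        spent_so_far: float) -> bool:
    """Access alone is not enough: the action must also fit the policy context.

    Illustrative only; builds on the hypothetical DelegatedAuthority above.
    """
    if agent_id != auth.agent_id:                         # credential: right agent?
        return False
    if "payments:send" not in auth.allowed_scopes:        # scope: may it pay at all?
        return False
    if counterparty not in auth.approved_counterparties:  # approved recipient?
        return False
    if amount > auth.spend_limit_per_tx:                  # per-transaction ceiling
        return False
    if spent_so_far + amount > auth.spend_limit_total:    # cumulative ceiling
        return False
    return True

auth = DelegatedAuthority(
    principal="acme-finance", agent_id="agent-7",
    spend_limit_per_tx=500.0, spend_limit_total=2000.0,
    allowed_scopes={"payments:send"},
    approved_counterparties={"supplier-42"},
)
print(may_execute_payment(auth, "agent-7", "supplier-42", 250.0, 1700.0))  # True
print(may_execute_payment(auth, "agent-7", "unknown-99", 250.0, 1700.0))   # False
```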

The difficulty is that most current systems were not designed for this model. Identity systems, policy engines, and security controls tend to operate within a single platform or organisational boundary. Once an agent begins interacting across multiple services, APIs, and external markets, those governance signals can fragment. The agent’s authority may be clear in one system yet far less visible in another.

That gap is one of the central challenges emerging in agentic commerce. As software systems begin to act economically on our behalf, the infrastructure that governs authority and behaviour will need to evolve so that policy and context controls can travel with the agent, even when the environment around it changes.

Why does behavioural sequencing become the new risk surface rather than simple access control?

Traditional security models focus on whether an entity has permission to perform a particular action. In agentic systems, that question alone is almost always insufficient.

Agents interpret instructions, make decisions in real time, and pursue goals by selecting tools and combining information across systems. In practice, the agent is continuously evaluating context and deciding what step to take next in order to complete the task it has been given.

Because of that autonomy, risk rarely emerges from a single action. It emerges from how a sequence of decisions unfolds.

An agent might legitimately retrieve internal data, call an external service for analysis, and then generate a response or complete a transaction. Each step may be authorised and technically correct on its own. The issue arises when the agent reuses context from earlier steps, chains tools together in an unexpected order, or interprets instructions in a way that leads it to combine those actions differently than the operator intended. In other words, the agent is not simply executing permissions. It is reasoning about how to achieve an objective.
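
A toy sketch makes the point concrete. In the illustrative Python below, every action in the trace passes an individual permission check, and only a separate sequence rule surfaces the risky ordering; the action names and flagged pairs are invented for this example:

```python
# Each action is individually permitted; the risk lives in the ordering.
ALLOWED_ACTIONS = {"read_internal_data", "call_external_service", "execute_payment"}

# Hypothetical sequence rules: ordered pairs that warrant review, e.g.
# internal data flowing onward to an external tool before a payment.
FLAGGED_SEQUENCES = {
    ("read_internal_data", "call_external_service"),
    ("call_external_service", "execute_payment"),
}

def review_trace(trace: list[str]) -> list[tuple[str, str]]:
    """Return every flagged ordered pair that occurs in the agent's trace."""
    findings = []
    for i, earlier in enumerate(trace):
        for later in trace[i + 1:]:
            if (earlier, later) in FLAGGED_SEQUENCES:
                findings.append((earlier, later))
    return findings

trace = ["read_internal_data", "call_external_service", "execute_payment"]
assert all(action in ALLOWED_ACTIONS for action in trace)  # each step passes alone
print(review_trace(trace))  # yet both flagged orderings surface in the sequence
```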

That changes the nature of the risk surface. Security teams can confirm that the agent had access to a dataset or a tool, but that alone does not explain why the agent chose to use them in a particular sequence or how the final outcome emerged.

Understanding those decision paths becomes essential. Visibility into tool chains, context reuse, and the agent’s interpretation of instructions provides the missing layer that explains how authorised actions can still produce unintended results.
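
As an assumption-laden sketch of what that missing layer could record, an audit entry might capture the context the agent reused and its stated rationale alongside the action itself; none of the field names below reflect an established standard:

```python
import json
import time

# Hypothetical audit record: logs not just what the agent did, but the
# context it acted on and why, so authorised-but-unintended outcomes
# can be reconstructed after the fact. Field names are assumptions.
def log_step(agent_id: str, action: str, context: dict, rationale: str) -> str:
    record = {
        "ts": time.time(),       # when the step happened
        "agent": agent_id,       # who acted
        "action": action,        # what was done
        "context": context,      # what earlier context was reused
        "rationale": rationale,  # the agent's stated reason for the step
    }
    return json.dumps(record)

print(log_step("agent-7", "call_external_service",
               {"dataset": "supplier_prices"},
               "needed a market comparison before committing spend"))
```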

Hanah-Marie Darley, co-founder and chief AI officer, Geordie AI