Making AI agents truly useful in commerce means rethinking how they access, manage, and act on context—securely.
Everyone's talking about agents. Whether it's autonomous shopping assistants, fulfillment optimizers, or customer service copilots, AI agents promise to revolutionize how commerce systems operate. But there’s a hidden truth behind most pilot failures: the model wasn’t the problem. The context was.
Large Language Models (LLMs) are powerful reasoners, but only when they’re given the right scope, memory, and perspective. This goes far beyond “prompt engineering.” If you want agents that can navigate a product catalog, honor business logic, respond to customer sentiment, or coordinate across fulfillment systems, they need structured, real-time, and domain-specific context, with strict access controls.
That’s where context engineering comes in.
Context engineering is the discipline of designing the scaffolding around an agent so it can operate with purpose, anchored in roles, constraints, systems, and goals. Crucially, it also enforces who gets to know what. Without properly scoped context, agents hallucinate or fail. With overly broad context, they can leak sensitive data, violating trust and compliance.
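To make "scoped context" concrete, here is a minimal Python sketch of the scaffolding idea: an agent is defined by a role, goals, constraints, and an explicit allowlist of context domains, and only permitted domains ever reach it. The names (`AgentScope`, `build_context`, the domain keys) are hypothetical illustrations, not a specific product API.

```python
from dataclasses import dataclass

@dataclass
class AgentScope:
    """Hypothetical scaffold describing who an agent is and what it may see."""
    role: str                   # persona the agent acts as
    goals: list[str]            # tasks it is expected to perform
    allowed_domains: set[str]   # context domains it may read
    constraints: list[str]      # business rules it must honor

def build_context(scope: AgentScope, available: dict[str, dict]) -> dict:
    """Return only the context domains the agent's scope permits."""
    return {k: v for k, v in available.items() if k in scope.allowed_domains}

support_agent = AgentScope(
    role="customer_service_rep",
    goals=["resolve order issues"],
    allowed_domains={"orders", "catalog"},
    constraints=["never quote unreleased pricing"],
)

raw = {
    "orders": {"order_123": "delayed"},
    "catalog": {"sku_9": "in stock"},
    "financial_forecasts": {"q3": "confidential"},  # must never reach this persona
}

# Sensitive domains are filtered out before any prompt is assembled.
ctx = build_context(support_agent, raw)
```

The point of the sketch is the ordering: scoping happens before the model sees anything, so an over-broad prompt is structurally impossible rather than merely discouraged.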
To build a successful humans-plus-agents strategy, you need to understand what context engineering looks like in practice, how it fits into a composable commerce architecture, and why context access control is as critical as context design if you want to deploy AI agents safely at scale.
Most failed agent pilots share the same DNA: the agent looked smart in a demo but broke in production. It might suggest unavailable products, ignore pricing logic, or expose restricted data. These aren’t model failures. They’re context design and context security failures.
Unlike traditional apps, agents must reason across multiple domains, including customer behavior, product metadata, fulfillment constraints, and brand policy. But they must also reason as a specific persona: an agent acting as a customer service rep, for example, must not see financial reporting or internal planning data.
The default behavior of LLMs—generalizing across broad data—is what makes them useful, powerful, and also dangerous without strict scoping. A single misconfigured agent could lead to data leakage, violating internal boundaries or regulatory constraints (e.g., PII, financial data, internal forecasts).
Context engineering is the intentional design of the information environment that agents operate within, including what data they can see, how they interpret it, and when and how it changes. It spans several types of context, each living in different systems and governed by different access rules.
A key part of context engineering is governance and control: agents must operate under Role-Based Access Control (RBAC) and attribute-based rules, ensuring they only retrieve and reason over data their assigned persona is authorized to access. This is a shift from “How do I make the agent smart?” to “How do I make the agent smart in role and within guardrails?”
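An RBAC layer for context retrieval can be sketched in a few lines: a role-to-domain permission table, and a fetch function that refuses any request outside it. Everything here (the role names, domains, and `ContextAccessError`) is illustrative, assuming a simple in-memory store.

```python
# Hypothetical role -> readable-context-domain mapping (RBAC table).
ROLE_PERMISSIONS = {
    "customer_service_rep": {"orders", "catalog", "returns"},
    "fulfillment_optimizer": {"inventory", "carriers", "orders"},
}

class ContextAccessError(PermissionError):
    """Raised when an agent persona requests context outside its role."""

def fetch_context(role: str, domain: str, store: dict) -> dict:
    """Retrieve a context domain only if the role is authorized for it."""
    if domain not in ROLE_PERMISSIONS.get(role, set()):
        raise ContextAccessError(f"{role!r} may not read {domain!r}")
    return store[domain]

store = {
    "orders": {"order_123": "delayed"},
    "financial_reports": {"q3_revenue": "confidential"},
}
```

In practice the permission table would live in a policy engine rather than in code, but the contract is the same: the retrieval layer, not the agent, decides what is knowable.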
In composable commerce, context is both a foundational capability and a potential liability. Architecting it correctly means designing for modularity, traceability, and permission-awareness from the start. Context isn’t a monolith; it’s distributed across systems and scoped by domain, role, and task. To support intelligent, multi-role agents without compromising data integrity or compliance, a modern architecture must treat context as a first-class concern.
Operational context—inventory, pricing, fulfillment—typically lives in backend systems protected by well-defined RBAC. Experience context—user sessions, segmentation, personalization—comes from customer data platforms (CDPs) and front-end state, with different access rules depending on whether the agent is acting as a customer, associate, or administrator. Business logic and rules—SLAs, policies, workflows—reside in orchestration layers and must be exposed through gated APIs or policy engines that enforce usage constraints.
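The persona-dependent access rules above can be sketched as attribute-level redaction: the same underlying record is served differently to an agent acting as a customer, an associate, or an administrator. The field names and visibility table are assumptions for illustration.

```python
# Hypothetical per-persona field visibility for a fulfillment record.
VISIBLE_FIELDS = {
    "customer":      {"order_status", "eta"},
    "associate":     {"order_status", "eta", "carrier", "warehouse"},
    "administrator": {"order_status", "eta", "carrier", "warehouse", "cost"},
}

def redact(record: dict, persona: str) -> dict:
    """Return only the fields the acting persona is allowed to see."""
    visible = VISIBLE_FIELDS[persona]
    return {k: v for k, v in record.items() if k in visible}

record = {
    "order_status": "shipped",
    "eta": "2 days",
    "carrier": "ACME Logistics",
    "warehouse": "DC-04",
    "cost": 7.42,  # internal margin data: administrator-only
}
```

This is the attribute-based complement to RBAC: the role gates which domains an agent can query, and redaction gates which fields within a domain it can see.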
To manage this complexity, composable architectures need a layered approach to how context is exposed, gated, and audited.
Without these patterns, context systems can quickly become opaque and insecure, making it nearly impossible to guarantee compliance or explain agent behavior. But with them, you unlock the ability to support dynamic, adaptive agents across domains and personas, all while keeping governance intact and data use aligned with business policy.
For digital executives, this is a new kind of stack responsibility. You're not just integrating services; you're governing how intelligence flows between them, per role, per task, and per policy.
You need to ask things like: What roles will our agents play? What context do they need to fulfill those roles? What context should they never access? Do our systems expose context in a secure, composable way?
Investing in agents means investing in the policy-driven context delivery infrastructure to support them. That’s a strategic differentiator, especially in commerce, where missteps can lead to customer mistrust, regulatory risk, or brand damage.
Composable commerce gave us modular functionality. Composable agents demand modular, secure intelligence, anchored in context. And that context must be engineered, governed, and scoped to reflect the reality of roles, policies, and risks.
This is more than a new technical challenge. It’s a shift in how we think about system boundaries and human-machine collaboration. By treating context as a composable, access-controlled layer, digital leaders can finally unlock agents that are not only smart, but also safe, scalable, and trustworthy.
Jason Cottrell
Founder and CEO, Orium
Jason Cottrell is the CEO & Founder of Orium, the leading composable commerce consultancy and system integrator in the Americas. He works closely with clients and partners to ensure business goals and customer needs are being met, leading the Orium team through ambitious transformation programs at the intersection of commerce, composability, and customer data.