Context Graphs: The Missing Layer Between Data and Action

1

Arun — The Forgotten Customer

Arun isn’t a “lost customer.” He’s a customer in the act of being lost — slowly, quietly, almost politely.

It’s Tuesday morning. He’s waiting for his cab, coffee in hand, thumb doing its familiar scroll. A notification buzzes: an email from a brand he’s actually bought from twice this year. The subject line is enthusiastic — “Just for You, Arun!” — the kind of cheerfully generic promise that once worked when the inbox was quieter and attention was cheaper.

He doesn’t open it.

It’s not anger. It’s not boycott. It’s not even disappointment. It’s something more dangerous: indifference. The brand has become wallpaper.

Six months ago, Arun was a good customer in the modern sense of the word. He opened 8 of every 10 emails. He clicked regularly, browsed their app twice a week, bought when the timing was right. He wasn’t “loyal” in a sentimental way, but the relationship existed. The brand had earned a slot in his mind. It felt familiar.

Now, the brand feels random.

The messages arrive like disconnected islands. One day it’s a sale. The next day it’s a new launch. Then a “We Miss You!” after two quiet weeks. Nothing references what he last did, what he last considered, what he last cared about. Nothing suggests continuity. Nothing suggests memory.

Arun bought a jacket in February. No follow-up that felt like a human continuation — just more campaigns. He browsed shoes in June and hesitated. No gentle nudge later when his size returned — just more “Top Picks.” He clicked on a product twice last month, then stopped. The brand didn’t notice. Or, if it did notice, it didn’t act like it noticed.

So Arun swipes. Again. The 7th time this month. He’ll “come back later” — which is what people say to themselves when they’re not coming back at all.

What’s strange is that Arun doesn’t feel like he’s leaving. He feels like he’s drifting. The relationship isn’t ending with a dramatic unsubscribe or a complaint. It’s just… dissolving.

And somewhere inside the brand, there’s probably a dashboard that still classifies him as “Engaged,” because he opened an email 18 days ago. There might be an “AI engine” that has scored him as “high intent” based on old behaviour. There might even be a journey waiting to fire when he crosses a threshold.

But Arun isn’t a threshold. He’s a trajectory.

The brand’s system sees a customer in good standing. The trajectory tells a different story: a customer three weeks from dormancy, eight weeks from being “won back” through an expensive Google ad, twelve weeks from being written off as churned.

The cruelest part? Arun would have stayed. He liked the brand once. The products were good. The prices fair. But somewhere along the way, the relationship shifted from conversation to broadcast, from relevance to noise, from “we understand you” to “we have your email address.”

The brand didn’t push Arun away. It simply stopped paying attention. And in marketing, inattention is abandonment by another name.

Arun wasn’t lost. He was losing. But nothing in his inbox suggested anyone noticed.

That’s the modern marketing tragedy: customers don’t leave in a way that systems can see. They leave in a way that humans can feel — long before dashboards can confirm.

And the most expensive part comes later, when Arun reappears through a retargeting ad — celebrated as “new acquisition,” funded by a budget that never needed to be spent if the relationship had simply been maintained.

Nobody maintained it.

2

The Problem We Can’t See

Arun’s story feels personal, but it’s not exceptional. It’s the default state of modern marketing: relationships decaying invisibly, at scale, with no alarm bell loud enough to change behaviour.

The core problem is drift.

Most customers don’t churn the way businesses model churn. They don’t cancel with a clear timestamp. They don’t announce dissatisfaction. They don’t even unsubscribe. They simply stop paying attention. And attention is the upstream input for everything else — clicks, conversions, repeat purchases, referrals, lifetime value.

The numbers are staggering: 80% of engaged customers vanish every quarter. Not because they’re unhappy. Not because competitors lured them away. Not because the product failed. They vanish because brands stop being relevant before customers stop being interested.

In dashboards, drift is hard to see because it’s a slow-motion failure. It arrives as “a bit fewer opens,” “a bit less browsing,” “a slightly longer gap since last purchase.” Each signal is individually explainable. Together, they are a trend. And trends don’t trigger alerts until they become outcomes.

This creates the second problem: lag.

Marketing teams are responding to corpses, not patients. By the time a customer is labelled “At Risk” or “Dormant,” the relationship has already cooled. The brand is now trying to restart something that has quietly ended — and that restart is usually attempted with the blunt instruments of “win-back journeys” or paid reacquisition.

Reacquisition is treated as a growth tactic. But in reality, it is often an admission of late detection.

By the time dashboards show churn, the customer has already churned. By the time “win-back” campaigns deploy, the relationship has already died. Marketing performs autopsies when it should be doing check-ups. It measures corpses when it should be monitoring vital signs.

Then comes the third problem: the segment lie.

Arun might still be in an “Engaged” segment because the segment definition is simplistic: “opened in the last 30 days.” But segments are static snapshots in a moving world. Arun’s real state is not a category — it’s motion: from frequent engagement to occasional engagement to none. If you looked at his last six months as a line, you’d see the slope bending down.

Segments hide slopes.

Segments describe where customers were. Not where they’re going.

The segment says “Engaged.” The trajectory says “Leaving.” In every martech system on earth, the segment wins.

And because segments hide trajectories, dashboards hide danger. Aggregate “health” metrics look fine even while a large portion of the customer base is quietly slipping away. The green glow of “overall open rate” can conceal the fact that the same small fraction of loyal customers is propping up the number while everyone else fades.
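The slope a segment rule hides can be computed directly. A minimal sketch in Python, using Arun's illustrative decline from 8-in-10 to 2-in-10 opens; the function name and data are invented for this essay, not any vendor's API:

```python
def engagement_slope(monthly_open_rates):
    """Least-squares slope of a customer's monthly open rate.

    A static segment rule ("opened in the last 30 days") sees only the
    most recent point; the slope sees the direction of travel.
    """
    n = len(monthly_open_rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_open_rates) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, monthly_open_rates))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Arun's last six months: from 8/10 opens down to 2/10.
arun = [0.8, 0.7, 0.6, 0.4, 0.3, 0.2]

opened_recently = True           # what the segment rule checks
slope = engagement_slope(arun)   # what the segment rule hides

print(opened_recently)           # "Engaged" by the rule
print(round(slope, 2))           # -0.13 per month: clearly leaving
```

The rule sees only the last point; the slope sees the trend. That is the whole argument of this section, in a few lines of arithmetic.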

This is why the industry ends up spending staggering sums on reacquisition. Not because marketers are foolish. Because the systems they rely on are not designed to detect drift early enough to prevent it.

When drift remains invisible, the future is predictable: customers fade, pipelines thin, CAC rises, and budgets shift back to ad platforms.

That’s the half-trillion-dollar loop: Acquire → Ignore → Drift → Reacquire → Repeat.

The bitter irony: brands pay Google and Meta premium prices to reach customers whose email addresses already sit in their own databases. They rent access to people they already own relationships with. They fund the platforms that profit from their own retention failures.

This isn’t a marketing problem. It’s a seeing problem.

Martech has given us more data than any previous generation of marketers could imagine. But data without context is just noise. Information without awareness is just storage. Knowing everything about customers while understanding nothing about their trajectories isn’t intelligence — it’s the most expensive form of ignorance ever invented.

Which leads to an obvious question: if we’ve spent a decade building CDPs and “single customer views,” why hasn’t this been solved? Why do brands still feel random to customers, and why do dashboards still lie to marketers?

3

Why CDPs Didn’t Solve This

CDPs promised to fix marketing’s fragmentation. And to be fair, they did solve a real problem: data living in too many places, in too many formats, with too many identities.

The promise was seductive: unify customer data, create a single source of truth, unlock personalisation at scale, enable smarter journeys.

In many organisations, CDPs delivered on the first two: collection and unification. You can now query Arun’s purchases, email clicks, app events, site browsing, maybe even store visits. You can stitch his identity across channels better than before. The brand remembers.

But here’s the critical point:

Memory is not awareness.

A CDP is, at its core, a storage layer. It is built to hold data, normalise it, and make it queryable. Even when it does “activation,” it often behaves like a distribution engine for predefined segments and rules.

That means the CDP answers questions like:

  • “What do we know about Arun?”
  • “What did Arun do last quarter?”
  • “Which segment is Arun in?”

But it struggles to answer the question marketing actually needs to answer in the moment:

  • “What should we do for Arun right now?”

That’s not a storage question. That’s a decision question.

The distinction is crucial.

Data is facts: Arun purchased twice, opened an email 18 days ago, browsed the jacket category last week.

Context is meaning: Arun is three weeks into an engagement decline, his open rate has dropped 60% in six months, his browse-to-buy ratio suggests interest without intent, and a promotional blast right now will accelerate his drift rather than reverse it.

CDPs excel at data. They were never designed for context.
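The facts-versus-meaning distinction can be made concrete. In this hedged sketch, the function, its field names, and the 50% threshold are all invented for illustration; the point is what a context layer computes that a storage layer merely holds:

```python
def derive_context(open_rates, browses, purchases):
    """Turn stored facts into decision-ready signals.

    open_rates: last six monthly open rates, oldest first
    browses, purchases: counts over the same window
    """
    baseline, current = open_rates[0], open_rates[-1]
    decline = (baseline - current) / baseline if baseline else 0.0
    browse_to_buy = purchases / browses if browses else 0.0
    return {
        "open_rate_decline": decline,     # 0.75 means down 75%
        "browse_to_buy": browse_to_buy,   # interest vs. intent
        "drifting": decline > 0.5,        # illustrative threshold
    }

# A CDP stores these facts about Arun ...
ctx = derive_context(open_rates=[0.8, 0.7, 0.6, 0.4, 0.3, 0.2],
                     browses=14, purchases=2)
# ... a context layer knows what they mean.
print(ctx["drifting"])  # True: a promotional blast now would be wrong
```

The inputs are exactly what a CDP can already query; the output is the judgment the CDP was never designed to make.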

The second limitation is that CDPs are fundamentally human-facing systems. They surface insights to analysts and marketers. They populate dashboards. They feed audiences to campaigns. They do not continuously interpret reality and act in real time unless humans set up those interpretations and actions in advance.

And that leads to the harsh bottleneck: humans.

Even if a CDP can tell Maya that a subset of customers is fading, Maya still needs to notice it, interpret it, decide what it means, design an action, deploy the action, measure and adjust.

That’s manageable for a few cohorts. It’s impossible at N=1 scale across millions of customers, dozens of channels, and a world that changes every day.

Humans cannot process millions of individual trajectories. They cannot detect subtle engagement shifts across a database of 10 million customers. They cannot calculate optimal intervention timing for each person at each moment. The cognitive load is simply beyond human capacity.

So marketers do what they must: they simplify. They segment. They create rules and thresholds. “If no purchase in 90 days, send win-back email.” “If opened but didn’t click, send follow-up.” These heuristics are reasonable. They’re also hopelessly crude compared to what each customer actually needs.

This is why so much “personalisation” still devolves into stereotypes at scale. The tooling can store more, but the operating model can’t act more precisely. The result is a familiar pattern: data gets richer, dashboards get prettier, and execution remains campaign-shaped.

CDPs gave us memory. They didn’t give us awareness.

Awareness requires three properties CDPs weren’t designed to provide:

  1. Stateful — knowing what is true now, not just what happened before
  2. Temporal — knowing what is changing and where it’s heading
  3. Action-native — built to trigger decisions, not just store facts

This matters more now than ever because the future of marketing isn’t human-driven. It’s agent-driven.

AI agents — autonomous systems that sense, decide, and act without human intervention — are rapidly becoming the operational layer of marketing. They generate content, optimise sends, personalise experiences, and orchestrate journeys. But agents can only act on what they can perceive. They can only decide based on what they’re given.

CDPs were built to inform humans. They weren’t built to power agents.

An agent needs to know not just what happened, but what’s happening. Not just customer history, but customer trajectory. Not just data, but decision-ready context. CDPs provide the raw material. They don’t provide the real-time, contextualised, action-ready intelligence that agents require.

In short: CDPs helped brands remember customers. They did not help brands behave as if they remembered.

Which brings us back to the human cost of this gap: the marketer who is doing everything “right” — and still missing the drift.

4

Maya — The Blind Marketer

Maya is not a bad marketer. She is exactly the kind of marketer modern martech assumes will exist: competent, data-literate, organised, ambitious. She has seventeen years of experience. An MBA from a top school. A track record of successful campaigns at three different companies.

She runs marketing for a mid-sized D2C fashion brand with 2 million customers on their email list. Diwali is coming. The quarter matters.

It’s Monday afternoon. She’s in her weekly performance meeting. Her dashboard is green. Open rates are stable. Click rates are “within range.” The CDP is stitched. Journeys are running. Her ESP vendor has an “AI send-time optimiser” that recommends the perfect moment to land in the inbox. Her team has even been praised for “data-driven personalisation.”

This is what modern competence looks like.

She pulls up her segments with the confidence of a professional who’s done this dozens of times before:

“Engaged” — 340,000 customers who’ve interacted in the past 30 days. These are the reliable ones. They’ll receive the full campaign sequence: teaser, launch, reminder, last chance.

“At Risk” — 180,000 customers showing declining engagement. They’ll get a special offer, something to reignite the spark.

“Dormant” — 890,000 customers silent for 90+ days. They’ll receive a reactivation attempt, though Maya knows from experience that most won’t respond.

Arun is in Engaged. The system tells her that confidently — because Arun opened an email 18 days ago.

So Maya includes Arun in the Diwali blast.

Why wouldn’t she?

The content is good. The offer is strong. The creative team has done their job. The “AI” layer suggests a subject line variant that will improve opens by 2–3%. This is the kind of incremental optimisation marketers are trained to pursue.

What Maya doesn’t see is what matters.

She doesn’t see that six months ago Arun opened 8 out of 10 emails, and now he opens 2 out of 10. She doesn’t see that his browsing behaviour has shifted from active consideration to quick glances. She doesn’t see that he’s moved from “curious” to “tired” — not as a binary change, but as a gradual tilt. She doesn’t see that he clicked on a product twice last month, then stopped — and the brand didn’t notice. She doesn’t see that he is three weeks away from disappearing.

She doesn’t see his optimal intervention window — three days from now, not today.

She doesn’t see the one message that might actually work — not a Diwali blast, but a specific product recommendation based on his browsing history, at a price point he’s historically responded to.

Maya can’t see any of this. Not because she’s incompetent. Not because she doesn’t care. But because her tools weren’t designed to show it.

The dashboard is not lying in the way people think dashboards lie. The data is accurate. The segment rule is correct. The score might even be statistically defensible.

And yet Maya is blind — because she is being shown labels instead of motion.

Maya’s dashboard said “Engaged.” Arun’s behaviour said “Leaving.” The dashboard won.

This is the cruelty of modern martech: it gives marketers a sense of control while withholding the most important truth — trajectories.

The dashboard shows aggregate health while masking individual fade. It shows segment averages while hiding trajectory variance. It shows what happened while obscuring what’s happening. Maya makes decisions based on the best information her systems can provide, and that information is systematically incomplete.

So Maya executes. The Diwali blast goes out. It lands in Arun’s inbox as yet another “Just for You!” message that doesn’t feel like it’s for him at all. Arun swipes it away, just as he has six times before this month.

Maya sees no alarm. The meeting moves on.

And then, weeks later, when Arun disappears and reacquisition costs rise, Maya will blame the usual suspects: competition, fatigue, macroeconomics, rising CPMs. She will increase ad spend. She will approve a win-back journey. She will adjust her segments, tweak her thresholds, refine her targeting.

She will do what the industry does.

Not because she is careless. Because the system she trusts cannot see the difference between a customer who is engaged and a customer who is still officially engaged while fading fast.

The hard part: Maya will be held accountable for outcomes she cannot see coming.

At this point, the need is clear. The missing layer is real.

Now we can name it.

5

From Knowledge Graphs to Context Graphs

To understand context graphs, it helps to start with something adjacent that many people have heard of: knowledge graphs.

For twenty years, the technology industry has invested heavily in knowledge graphs. Google uses them to answer search queries. Facebook uses them to map social connections. Amazon uses them to power recommendations. Netflix uses them to understand viewing preferences.

A knowledge graph is a structured representation of entities and their relationships. In marketing terms, it can model facts like:

  • Arun bought Product X
  • Product X belongs to Category Y
  • Arun browsed Brand Z
  • Arun lives in Mumbai
  • Arun clicked on Campaign A
  • Customers who bought X often buy F

This structure is powerful because relationships allow multi-hop reasoning. You can move from customer → products → categories → affinities → recommendations. Knowledge graphs enable sophisticated recommendations, targeted marketing, and predictive analytics.

But knowledge graphs have a limitation: they are optimised for what is true, not what matters now.

A knowledge graph can hold a lot of truth about Arun. It can remember his purchases, clicks, and preferences. It can be queried elegantly. It can infer that he might be interested in accessories based on his jacket purchase.

What it cannot do is understand that Arun’s current engagement trajectory makes this the wrong moment for any promotional message — that what he needs is not a recommendation but a reason to care again.

Knowledge graphs are static by nature. They represent facts at a point in time. They update when new facts arrive. But they don’t inherently model motion, trajectory, or temporal dynamics. They know where customers are (or were). They don’t see where customers are going.

That requires more than truth. It requires context.

A context graph is a decision-ready representation of what matters for a specific customer at a specific moment — combining identity, behaviour, time, and situation into actionable relevance.

It isn’t just a prettier data model. It is a different kind of model: one built for decisions, not storage.

A context graph incorporates four dimensions:

  1. Identity — who the customer is, in the broad sense. Preferences, history, declared intent, profile attributes. This is what traditional systems do reasonably well.
  2. Behaviour — what they are doing and how patterns are shifting. Engagement rhythms, browsing cadence, purchase intervals, attention signals. Not just past transactions, but the behavioural fingerprint that reveals intention.
  3. Temporal — when it matters and how it is trending. Recency, velocity of change, decay rates, trajectory direction, lifecycle stage. The same action means different things at different times. An email open after six months of silence is not the same as an email open during active engagement. Time is not just a timestamp; it’s a dimension of meaning.
  4. Situational — what surrounds the moment. Time of day, device, season, calendar events, competitive context, even external triggers. The optimal message at 9 AM on a Monday differs from the optimal message at 9 PM on a Saturday.

Traditional systems capture identity well, behaviour partially, temporal rarely, and situational almost never. Context graphs integrate all four into a unified representation that can be acted upon.
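The four dimensions can be sketched as a single record. The class and field names below are hypothetical (no product exposes exactly this shape); what matters is that identity, behavioural, temporal, and situational signals live together in one decision-ready object:

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    # 1. Identity: who the customer is
    customer_id: str
    preferences: dict = field(default_factory=dict)

    # 2. Behaviour: what they are doing, and how the pattern is shifting
    sessions_per_week: float = 0.0
    browse_to_buy: float = 0.0

    # 3. Temporal: where the behaviour is heading
    engagement_slope: float = 0.0     # per-month change in open rate
    days_since_last_action: int = 0

    # 4. Situational: what surrounds this moment
    local_hour: int = 12
    device: str = "mobile"
    season: str = "none"

    def is_drifting(self) -> bool:
        """Illustrative rule: a declining trend plus a lengthening gap."""
        return self.engagement_slope < 0 and self.days_since_last_action > 14

arun = ContextNode("arun-001", engagement_slope=-0.13,
                   days_since_last_action=18, season="diwali")
print(arun.is_drifting())  # True, while the segment still says "Engaged"
```

A traditional profile would carry only the first block; the other three are what turn a stored customer into an understood one.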

Put simply:

  • Knowledge graphs hold facts.
  • Context graphs hold meaning-in-the-moment.

Here’s the core distinction:

Knowledge graphs → Context graphs:

  • What we know → What matters now
  • Static truth → Dynamic relevance
  • Data model → Decision model
  • Powers dashboards → Powers agents
  • Answers “what happened?” → Answers “what should we do?”
  • Analysis-ready → Action-ready
  • Memory → Awareness

Knowledge graphs are memory. Context graphs are awareness.

The architectural distinction matters profoundly.

Knowledge graphs are designed for human queries. An analyst asks: “Show me customers who purchased in Category X in the last 90 days.” The graph returns a list. The analyst interprets the results. A marketer designs a campaign. This workflow assumes humans in the loop at every decision point.

Context graphs are designed for autonomous action. An agent asks: “What should I do for Arun right now?” The graph returns not data but the contextual foundation that enables the agent to decide. No human interpretation required. No lag between insight and action.

This is the difference between analysis-ready and action-ready infrastructure. CDPs and knowledge graphs serve human analysts. Context graphs serve AI agents. As marketing shifts from human-directed to agent-directed, the infrastructure must shift accordingly.
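The two workflows can be contrasted in a few lines. Everything here is illustrative (field names, actions, thresholds are invented); the point is the shape of the answer: the analyst's query returns rows, the agent's query returns a decision:

```python
# Analysis-ready: a query returns rows for a human to interpret.
def query_segment(customers, category, days=90):
    """What an analyst asks a CDP-style store."""
    return [c for c in customers
            if c["category"] == category and c["recency"] <= days]

# Action-ready: the same customer, asked about as a decision.
def decide(ctx):
    """What an agent asks a context layer (all fields illustrative)."""
    if ctx["drifting"] and ctx["days_to_best_window"] > 0:
        return {"action": "wait", "days": ctx["days_to_best_window"]}
    if ctx["drifting"]:
        return {"action": "send", "message": "specific_recommendation"}
    return {"action": "send", "message": "scheduled_campaign"}

arun_ctx = {"drifting": True, "days_to_best_window": 3}
print(decide(arun_ctx))  # {'action': 'wait', 'days': 3}: hold, not blast
```

The first function still needs a Maya to read the list and design a campaign; the second removes the interpretation step, which is exactly the shift from analysis-ready to action-ready.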

A knowledge graph can tell you Arun bought running shoes.

A context graph can tell you Arun is about to stop running — because his behaviour is fading, his engagement is down, and the relationship is cooling. It can tell you when to intervene, how to intervene, and what happens if you don’t.

That is why context graphs matter: they don’t just store customer history. They make customer state legible. They make drift visible. They translate data into decision.

Memory is necessary but not sufficient. A brand can have perfect memory of every customer interaction and still fail to act appropriately because it lacks awareness of current state, trajectory, and relevance. Memory tells you the past. Awareness tells you what to do now.

Marketers have spent two decades perfecting memory. The next decade belongs to awareness.

6

Backgrounder – 1

I became intrigued by context graphs when I came across the excerpt below in Ashu Garg’s newsletter (co-authored with Jaya Gupta). While I had discussed attention decay in my essays, I now had a name for the solution that was needed.

Rules tell an agent what should happen in general (“use official ARR for reporting”).

Decision traces capture what happened in this specific case (“we used X definition, under policy v3.2, with a VP exception, based on precedent Z, and here’s what we changed”).

Agents don’t just need rules. They need access to the decision traces that show how rules were applied in the past, where exceptions were granted, how conflicts were resolved, who approved what, and which precedents actually govern reality.

This is where systems-of-agents startups have a structural advantage. They sit in the execution path. They see the full context at decision time: what inputs were gathered across systems, what policy was evaluated, what exception route was invoked, who approved, and what state was written. If you persist those traces, you get something that doesn’t exist in most enterprises today: a queryable record of how decisions were made.

We call the accumulated structure formed by those traces a context graph: not “the model’s chain-of-thought,” but a living record of decision traces stitched across entities and time so precedent becomes searchable. Over time, that context graph becomes the real source of truth for autonomy – because it explains not just what happened, but why it was allowed to happen.

A follow-up podcast from Ashu Garg.

More from Animesh Koratana:

There’s a concept worth taking seriously because it reframes what context graphs actually are: world models.

A world model is a learned, compressed representation of how an environment works. It encodes dynamics, i.e. what happens when you take actions in a specific state. It captures structure: what entities exist and how they relate. And it enables prediction: given a current state and a proposed action, what happens next?

World models demonstrate something important: agents can learn compressed representations of environments and train entirely inside “dreams”—simulated trajectories through latent space. The world model becomes a simulator. You can run hypotheticals and get useful answers without executing in the real environment.

…A context graph with enough accumulated structure becomes a world model for organizational physics. It encodes how decisions unfold, how state changes propagate, how entities interact. Once you have that, you can simulate.

… Simulation is the test of understanding. If your context graph can’t answer “what if,” it’s just a search index.

Dean Ball: “I do not think systems of record are dying. I think they are getting unbundled and rewired. The “record” part, the actual truth, will increasingly live in a combination of warehouses, lakehouses, and still important operational systems. On top of that, we will get a new layer of semantic contracts and control planes that tell agents how to safely read and write that truth. The familiar SaaS front ends that used to sit on top of those systems of record will matter less over time. Agents and workflow UIs will become the primary way humans interact with work. But the underlying need for a well defined source of truth, with clear ownership and constraints, will only grow. Said another way, agents are not replacing systems of record. They are raising the standards for what a good one looks like. The companies that win this cycle will be the ones that build amazing agentic experiences on top of boring, rock solid sources of truth, rather than pretending those sources no longer matter.”

Glean: “Context graphs are emerging as a foundational technology for enterprise AI, enabling systems to understand not just data, but the real processes, relationships, and activities that drive how work gets done—unlocking new opportunities for automation and productivity. The true value of context graphs lies in their ability to capture the “how” of work (the observable digital trail of actions, collaborations, and decisions), rather than the elusive “why,” allowing AI agents to learn from and automate complex, distributed processes that are often undocumented or only exist as tribal knowledge. Building effective context graphs requires a sophisticated technical stack—including connectors for observability, activity data capture, semantic understanding, and enterprise memory—forming a new kind of data platform that supports both agentic automation and knowledge discovery across the organization.”

Dharmesh Shah: “Here’s the core idea: most of our current systems capture what happened, but not why it happened. Why did this deal need to be escalated to legal review? Why did we pick Providence, RI for our next retail store? Why did we decide to discontinue product [X]? That reasoning — the decision traces, the exceptions, the precedents — lives scattered across Slack, work calls, and inside people’s heads. It’s insider knowledge that builds up as employees gain experience and resets every time someone leaves. A context graph is meant to capture all of that systematically. Not just the final state, but the full sequence of decisions: what inputs were considered, what policies were evaluated, what exceptions were granted, who approved what, and why. It’s a system of record for decisions, not just data… As AI agents begin handling real workflows — reviewing deals, resolving tickets, and more — they run into the same gray areas humans face in everyday work. Humans handle those situations using judgment and insider context built through experience, but agents don’t have access to that layer. They see the final state in the CRM, not the reasoning that led there. Context graphs are supposed to solve this. By capturing decision traces as agents work, you build a queryable history of real-world precedents. Over time, exceptions become encoded knowledge. The organization stops relying on oral tradition and starts learning from its accumulated actions.”
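The decision traces these excerpts describe can be sketched as a simple record. The schema below is hypothetical, assembled from the elements the quotes mention (inputs gathered, policy version, exception route, approver, written state, precedents):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class DecisionTrace:
    """One trace in a context graph: how a decision was actually made."""
    entity: str                  # e.g. "deal-4821" (hypothetical id)
    inputs: dict                 # what was gathered across systems
    policy: str                  # which rule version was evaluated
    exception: Optional[str]     # exception route, if any was invoked
    approver: Optional[str]      # who signed off
    outcome: str                 # what state was written back
    precedents: List[str] = field(default_factory=list)  # prior trace ids
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

trace = DecisionTrace(
    entity="deal-4821",
    inputs={"arr_definition": "official", "amount": 120_000},
    policy="revenue-policy v3.2",
    exception="VP override",
    approver="vp-sales",
    outcome="reported_arr=120000",
)
# Persist many of these and precedent becomes searchable:
# "every case where policy v3.2 was overridden, and who approved it."
print(trace.policy)
```

The quotes' claim is that accumulating such records, stitched across entities and time, is what turns an audit log into a context graph.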

7

Backgrounder – 2

Nikhil writes: “Context graphs are poised to unlock the next generation of AI agents, autonomous systems that not only act but remember, learn, and improve. Traditional automation (RPA, workflow engines) is brittle: it follows fixed paths and breaks when encountering situations outside its programmed scope. AI agents are more flexible but often lack governance and accountability. Context graphs + agents = governance with flexibility. Agents can navigate complex situations while maintaining an auditable record of decisions and reasoning. Over time, as more decision traces accumulate, organizations build precedent libraries that make common decisions faster and edge cases less surprising.”

Amigo writes that context graphs allow agents to:

  • Follow Optimal Pathways: Use structured guidance to identify and navigate the best routes through complex problem spaces.
  • Adjust to Different Constraint Levels: Achieve high accuracy in critical scenarios while maintaining flexibility in less structured situations.
  • Maintain Critical Context: Preserve essential information to frame interactions, ensuring coherent, relevant, and contextually-informed responses.
  • Transform Knowledge into Navigable Structures: Organize knowledge domains into structured frameworks, facilitating efficient navigation.
  • Learn and Adapt: Continuously improve navigation strategies through measurement-led refinement and ongoing interactions, resulting in increasingly refined and effective agent performance.

Daniel Davis writes:

The AI journey we’re on follows a clear progression:

  1. LLMs can answer questions from their training data
  2. RAG appears: Realizing that LLM training data alone is insufficient, we stuff prompts with chunks of text to add knowledge, using semantic similarity search over vector embeddings to find the relevant chunks
  3. GraphRAG emerges: Breaking away from text chunks and semantic similarity search alone, we use flexible knowledge representations that capture rich relationships between entities, concepts, etc., and that can be navigated and refined for better control
  4. Ontology RAG: We take control over what gets loaded into graphs, using structured ontologies so that relationships are annotated at finer granularity, improving precision and recall at retrieval time

This progression is revealing. Step 3 (GraphRAG) makes minimal use of existing graph algorithms. Step 4 pulls ontologies from the toolbox. We’re genuinely scratching the surface of what graph tooling can do.

This is where we are today. What comes next?

  1. Information retrieval analytics tuned to different data types: We develop specialized retrieval strategies for temporal data, accuracy-sensitive data, anomalies, clustering, and other domain-specific information retrieval challenges
  2. Self-describing information stores: Information systems that carry metadata about their own structure, allowing retrieval algorithms to adapt automatically to the information they encounter
  3. Dynamic information retrieval strategies: LLMs can derive complete information retrieval strategies for information types they’ve never seen before, generalizing from learned patterns
  4. Closing the loop to enable autonomous learning: The system reingests its own outputs, annotating the generated data with metadata that can then adjust how that new information is retrieved relative to “old” data; the ability to adjust the “old” structures as well is the holy grail of a truly autonomous system that can learn

Context graphs represent the vision that so many information theorists dedicated their lives to pursuing. The opportunity is enormous.

Civil Learning writes: “Agents Change the Interface, Not the Need for Truth. Agents don’t live inside a single system. They pull data from multiple tools, evaluate policies, handle exceptions, route approvals, take actions across systems. In other words, agents are action-oriented, not database-oriented. This creates a separation: Data plane — CRMs, ERPs, ticketing systems, and Decision plane — agents deciding what to do next. Agents become the interface to work… Most enterprises have rules. Rules define what should happen in general. But real businesses run on decisions… Rules are static. Decisions are contextual. Agents need both.”

Diego Lomanto writes: “Your agents must sit in the execution path. Being in the execution path means work cannot happen without going through your agents. Not adjacent (recommendations), not downstream (analyzing after), not upstream (planning). Where the decision commits — where it becomes action that changes state… They must capture decision traces at commit time. Decision traces are the organizational dark matter — the reasoning, context, and precedent that connect actions to the roles, definitions, preferences, and standards that informed them.”


Tech Primer – 1

I asked Claude and ChatGPT to put together a tech primer around context graphs.

Understanding what context graphs are requires understanding what they’re not — and why they’re emerging now rather than five years ago or five years from now.

The Evolution: RAG → GraphRAG → Context Graphs

The AI industry has been climbing a ladder, each rung solving the limitations of the one before.

RAG (2023-24) solved the grounding problem. Large language models are powerful but unreliable when asked about specific, private, or recent information. Retrieval-Augmented Generation addressed this by retrieving relevant text chunks from a vector database and feeding them to the model alongside the user’s query. The model could now answer questions about documents it had never seen during training.

But RAG had a critical limitation: it retrieved text based on semantic similarity, not structure. Ask “which customers are connected to this account through shared purchases?” and RAG struggles because the answer requires traversing relationships, not finding similar paragraphs.

GraphRAG (2024-25) solved the structure problem. By combining knowledge graphs with retrieval, GraphRAG enabled multi-hop reasoning — following chains of relationships across entities. Microsoft’s research demonstrated that GraphRAG substantially outperformed traditional RAG on questions requiring synthesis across multiple sources. The knowledge graph provided the “brain-like network” that pure text retrieval lacked.

But GraphRAG was still designed for a specific use case: question-answering. It excels at helping humans find information. It wasn’t built for a different challenge entirely: helping agents make decisions.

Context Graphs (2025+) solve the action problem. When AI shifts from answering questions to executing workflows, the requirements change fundamentally. An agent doesn’t just need to know facts — it needs to know what’s relevant right now, what precedent applies, what policies govern the decision, and what the state of the world looked like when a prior decision was made.

Context graphs are not a retrieval technique. They’re a system layer — a durable substrate that agents read from and write to as they work. The output isn’t an answer to a query. It’s a decision trace that becomes searchable precedent.

This distinction matters. RAG and GraphRAG serve human analysts. Context graphs serve autonomous agents. As marketing shifts from human-directed to agent-directed, the infrastructure must shift accordingly.

The Two Clocks Problem

There’s a concept that clarifies why this shift is so difficult — and why existing systems can’t simply be upgraded.

Every enterprise system has two clocks running simultaneously:

The state clock tracks what is true now. Your CRM knows the current deal value. Your CDP knows the customer’s current segment. Your ESP knows their current engagement score. State is what most enterprise software is built to manage.

The event clock tracks what happened — in what order, with what reasoning, under what conditions. The event clock captures not just that a discount was approved, but why it was approved, who approved it, what precedent was referenced, and what the customer’s situation looked like at that moment.

The uncomfortable truth: we’ve built trillion-dollar infrastructure for the state clock. Almost nothing for the event clock.

This made sense when humans were the reasoning layer. The “why” lived in people’s heads, reconstructed on demand through conversation and institutional memory. When someone asked “why did we give this customer 20% off?”, a human could remember the context, explain the reasoning, and make a judgement call about whether the precedent applied to a new situation.

But agents don’t have heads. They can’t reconstruct reasoning from memory. They see the current state — “20% discount” — without access to the decision trace that produced it. Every exception looks like a random fact. Every precedent is invisible.

This is why context graphs emphasise decision traces over static facts. A context graph doesn’t just record that Arun received a discount. It records the inputs that were gathered, the policies that were evaluated, the exception that was invoked, the approval that was granted, and the outcome that resulted. That trace becomes queryable. The next time a similar situation arises, the agent can ask: “What did we do last time? What worked? What precedent applies?”

The event clock makes reasoning replayable. Without it, every decision starts from scratch.


Tech Primer – 2

Why Now? The Agentic Moment

Context graphs aren’t a theoretical concept that academics have been discussing for decades. They’re emerging now because three forces converged simultaneously.

Force 1: Agents moved from chat to work.

In 2023-24, AI was primarily assistive — drafting emails, summarising documents, answering questions. The human remained in control. The AI was a tool.

In 2025-26, the push became agentic: AI systems that execute workflows, route approvals, update records, and take actions across multiple systems without human intervention at every step. Gartner predicts that 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028, up from essentially none in 2024.

The moment agents take actions, the requirements change. You need governance (who approved this?), auditability (can we explain why?), traceability (what happened in what order?), and shared state (can multiple agents coordinate without conflict?). These are precisely the gaps that context graphs claim to fill.

Force 2: Protocol maturity enabled interoperability.

Anthropic’s Model Context Protocol (MCP) and Google’s Agent-to-Agent Protocol (A2A) established standardised ways for agents to connect with tools and communicate with each other. These protocols are to agentic AI what HTTP was to the early web — the foundational layer that enables everything else.

Before standardised protocols, every agent integration was custom. Now, agents can access tools through plug-and-play connectivity, and agents from different vendors can collaborate. This interoperability makes multi-agent systems practical — and makes the need for shared context infrastructure urgent.

Force 3: Compliance got real.

If agents touch pricing, approvals, customer communications, legal claims, or financial decisions, “the model said so” won’t satisfy internal governance, regulators, or courts. Decision traces become a product requirement, not a nice-to-have.

Every decision needs evidence. Every exception needs justification. Every precedent needs documentation. Context graphs provide the audit trail that makes autonomous systems governable.

Two Flavours Emerging

The current discourse contains two overlapping but distinct threads. Understanding both helps clarify what context graphs can and cannot do.

Flavour 1: Decision-trace context graphs

This is the Foundation Capital thesis: context graphs as “systems of record for decisions.” The core insight is that the missing layer isn’t data — it’s the reasoning that connects data to action. Agents that sit in the execution path can capture decision traces at commit time, building a queryable history of how the organisation actually works.

Key characteristics: event-sourced architecture, provenance tracking, policy and exception documentation, replayability (“what was true when this decision was made?”).

This flavour maps directly to marketing because campaigns, messages, offers, and interventions are fundamentally sequences of micro-decisions. Every send is a decision. Every suppression is a decision. Every price point, every timing choice, every channel selection — all decisions that currently happen without trace.

Flavour 2: Enterprise context graphs

This is the Glean-style positioning: build a graph of how the enterprise works — people, documents, projects, tools, activity streams — so AI has the organisational context to answer and act reliably.

Key characteristics: connectors into SaaS applications, identity and permissions governance, activity signal capture, “tribal knowledge” extraction.

This flavour is broader but less focused on decision lineage. It’s more about enabling AI to understand how work gets done than about capturing why decisions were made.

For marketing, the decision-trace flavour is more directly relevant. It isn’t primarily about understanding organisational structure — it’s about making millions of individual decisions, each of which should be traceable, learnable, and improvable.


Tech Primer – 3

Is This Real — Or a Passing Fad?

The honest answer: the phrase may be a fad. The underlying shift is durable.

What’s durable:

  • Agentic workflows are here to stay. The economic pressure to automate knowledge work is too strong, and the technology has crossed the capability threshold.
  • Auditability and governance are permanent requirements. Regulators, boards, and customers will demand explainability.
  • Event trails and decision lineage will become standard practice. You can’t run autonomous systems responsibly without them.
  • Graph-structured representations have proven value. Multi-hop reasoning, relationship traversal, and constraint propagation require graphs. This isn’t going away.

What’s overhyped:

  • The implication that one context graph will magically replace warehouses, CRMs, and ERPs. Integration is hard. Schema alignment is hard. Identity resolution is hard. Change management is hard.
  • The idea that you can “just build a context graph” without substantial engineering investment. Even proponents admit this is structurally difficult. Organisations are partially observable, ontologies are messy, and everything is constantly changing.

A useful test:

Does it sit in the execution path?

If the graph is built from after-the-fact analytics — synced via ETL, populated from data exports, updated in batch — it will struggle to capture decision traces at the moment they matter. It becomes a retrospective view, not an operational layer.

If the graph is populated by agents as they work — capturing inputs, policies, exceptions, and outcomes at commit time — it can compound into searchable precedent and automated decision-making. That’s the architecture that creates durable advantage.

The Minimum Viable Context Graph

For marketing applications, a context graph must capture at minimum:

  • Entities: customers, campaigns, offers, messages, channels, agents, outcomes
  • Events: decision points (send/suppress, discount/full-price, this-message/that-message, now/later)
  • Inputs: signals that informed the decision (engagement trajectory, purchase history, browse behaviour, timing context)
  • Constraints: policies and rules that governed the decision (frequency caps, brand guidelines, legal restrictions, budget limits)
  • Exceptions: deviations from standard policy and their justification
  • Outcomes: what happened after the decision (opened, clicked, converted, complained, unsubscribed)
  • Feedback loops: how outcomes update future decisions

This is not a database schema. It’s an architectural requirement. The specific implementation will vary, but these elements must be present for the context graph to enable what traditional martech cannot: understanding not just what was done, but why it was done, whether it worked, and what to do differently next time.

The Bridge to Marketing

Context graphs aren’t about knowing more. They’re about acting better.

They turn decisions into data, precedent into policy, and autonomy into something you can trust. They make the invisible visible — not just customer fade, but the reasoning that failed to prevent it.

For marketing, this changes everything. Traditional martech stores what happened (campaigns sent, emails opened, purchases made) but loses the reasoning that connected action to outcome. Context graphs preserve that reasoning, making it queryable, learnable, and improvable.

What does this mean in practice? It means a system that doesn’t just know Arun opened an email 18 days ago, but knows why he was sent that email, what alternatives were considered, what signals suggested it was the right moment, and whether the intervention actually worked. It means Maya doesn’t just see segments — she sees decision trails that explain how customers arrived in those segments and what’s likely to happen next.

It means marketing finally gets an event clock.


Marketing as a Decision System

Marketing doesn’t look like a decision system. It looks like campaigns, creatives, segments, calendars, and dashboards. It looks like planning meetings and performance reviews. It looks like briefs and brand guidelines and launch dates.

But that surface view hides the truth: modern marketing is nothing but decisions, made at scale, continuously.

Every time a message is sent, a decision has been made. Every time a message is not sent, a decision has been made. Every offer, delay, channel choice, frequency cap, suppression, escalation — all decisions.

Millions of them, every day.

Yet marketing has a peculiar blind spot. While it is saturated with decisions, it does not remember why those decisions were made. Once executed, the reasoning disappears.

The action survives. The context does not.

**

This is what the enterprise AI literature refers to as the execution path — the moment when a system commits to action. In marketing, the execution path runs through the inbox, the app notification, the WhatsApp message, the on-site banner. At that moment, a choice is finalised: this message, to this person, right now, via this channel.

What marketing systems record today is the outcome: sent, opened, clicked, converted, ignored.

What they do not record is the decision trace:

  • Which signals mattered?
  • Which rules applied?
  • Which alternatives were rejected?
  • Was this a default action or an exception?
  • Was this decision constrained by fatigue, consent, or margin?
  • What was the system trying to optimise for?
  • What precedent, if any, was considered?

The system knows that Arun received the Diwali email at 10:47 AM. It doesn’t know that he was selected despite a declining engagement trajectory, that the system considered but rejected a product-specific message, that a frequency cap almost triggered suppression, or that three other agents proposed conflicting actions and the orchestrator chose this one.

All that context — the inputs, the constraints, the alternatives, the resolution — exists for a moment in system memory and then vanishes.

**

Because this context is lost, marketing never truly learns.

Teams revisit the same debates repeatedly: Should we have sent that? Should we have waited? Should we have discounted? There’s no institutional memory of how these questions were resolved before, what evidence was considered, or what happened as a result.

AI systems repeat patterns blindly because they are optimising outcomes without understanding the decisions that produced them. They see that Arun received seven emails last month and didn’t convert. They don’t see that six of those emails were sent despite signals suggesting fatigue — and that the seventh was the one that finally pushed him away.

This is why marketing AI often feels opaque. When something works, we celebrate. When it fails, we shrug. “The model did it” becomes a convenient explanation — and an unacceptable one.

**

Contrast this with disciplines that treat decisions as first-class objects.

In finance, approvals are logged. Every pricing exception, every credit extension, every risk override leaves a trace.

In law, precedent accumulates. Every ruling becomes part of a queryable body of knowledge. Similar cases reference similar decisions.

In engineering, failures produce postmortems. Systems don’t just break — they break in documented ways. Root causes are identified. The same failure rarely happens twice.

Marketing, by contrast, executes at enormous scale while discarding its own reasoning. Every campaign starts fresh because nothing that came before is accessible.

**

Context graphs change this. They turn marketing into a decision-aware system.

With context graphs, decisions don’t vanish after execution. They accumulate. The system remembers not just what happened, but why it happened — what signals were considered, what rules applied, what alternatives were rejected, and what the outcome was.

Over time, this creates precedent. Similar situations, similar signals, similar constraints — the system can ask: “What did we do last time? What worked? What failed? What should we do differently now?”

Marketing stops rethinking from scratch. It starts building institutional memory.

  Today’s Marketing Systems → With Context Graphs
  Decisions disappear → Decisions accumulate
  Rules are static → Precedent evolves
  Optimisation is local → Learning is global
  Errors repeat → Errors teach
  AI is opaque → AI is accountable

**

This is the real shift. Context graphs are not about adding intelligence on top of marketing. They are about making marketing legible to itself.

As long as decisions disappear, optimisation remains local, learning remains shallow, and AI remains unaccountable. Once decisions are traced, marketing becomes a system that can actually improve — not just react.

If marketing can’t remember why it acted, it can’t get better at acting.


Anatomy – 1

At this point, a reasonable question arises: what exactly is inside a marketing context graph?

The answer is not a database schema or a technical architecture. A context graph is best understood as a memory structure for relationships in motion.

Traditional marketing systems store facts:

  • A customer exists
  • A message was sent
  • An email was opened
  • A purchase occurred

A marketing context graph stores meaning:

  • Why that message was sent
  • Why that timing was chosen
  • Why silence was preferred elsewhere
  • How the relationship is changing

The difference is not incremental. It is categorical. Facts tell you what happened. Meaning tells you what it signifies — and what should happen next.

**

What a Marketing Context Graph Captures

At a conceptual level, a marketing context graph captures five things.

First, entities. Customers, messages, channels, offers, moments, campaigns. These are familiar objects, but in a context graph they are not isolated records. They exist in relation to one another. Arun is not just a row in a database. He is a node connected to every message he received, every offer he saw, every channel he used, every moment he engaged or didn’t.

Second, relationships. Not just “sent” or “opened”, but delayed, suppressed, avoided, prioritised. Relationships encode intent, not just activity. “Suppressed due to fatigue” is a different relationship than “not sent due to segment exclusion.” Both result in no message. Only one reflects a decision about the customer’s state.

Third, state. Marketing today treats state crudely — active, inactive, at-risk. These are labels, applied after thresholds are crossed. Context graphs model state as fluid: fatigue rising, curiosity building, readiness peaking, trust eroding. State is not a label. It is a condition — continuous, evolving, and different for every customer at every moment.

Fourth, time. Time is not just recency. It is direction and speed. Is engagement decaying slowly or collapsing quickly? Is silence meaningful or incidental? How long since the last interaction, and is that gap growing or stable? Context graphs track trajectories, not snapshots. The same action means different things depending on whether the customer is accelerating toward engagement or drifting toward dormancy.

Fifth, outcomes. Not just conversions, but consequences. Did the message work? Did restraint improve engagement later? Did an early intervention prevent churn? Did an offer cheapen future behaviour? Outcomes close the loop. Without them, decisions cannot be evaluated. With them, every action becomes a learning opportunity.
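To make the second element concrete, the following hypothetical sketch shows how typed, annotated edges keep intent distinct where an activity log records only absence. The edge names and annotation fields are invented for illustration.

```python
# Two customers received no message, but for categorically different reasons.
# A send log records the same nothing for both; annotated graph edges do not.
graph_edges = [
    # (customer, relationship, message, annotation)
    ("arun",  "SUPPRESSED", "promo-42", {"reason": "fatigue_rising",    "deliberate": True}),
    ("meera", "NOT_SENT",   "promo-42", {"reason": "segment_exclusion", "deliberate": False}),
]

def deliberate_restraint(edges):
    """Only SUPPRESSED edges reflect a decision about the customer's state."""
    return [(src, msg) for src, rel, msg, meta in edges
            if rel == "SUPPRESSED" and meta["deliberate"]]

print(deliberate_restraint(graph_edges))  # [('arun', 'promo-42')]
```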

**

The Four Dimensions, Applied

These elements come together across four dimensions of context:

Identity — who this customer is to the brand.

Not demographics. Not personas. The specific relationship: shaped by history, preferences, and past treatment. What they’ve bought, what they’ve ignored, what they’ve responded to. Their value, their risk, their potential — all as revealed through accumulated interaction, not declared in a profile.

Behaviour — what patterns are changing.

Not just what events occurred, but how the pattern of events is shifting. Opening fewer emails. Browsing more but buying less. Clicking on different categories. Visiting at different times. Behaviour isn’t a list of actions. It’s a trajectory that reveals where the customer is heading.

Temporal — how fast the relationship is moving, and in which direction.

Recency alone is insufficient. Velocity matters. A customer who opened an email yesterday after two weeks of silence is in a different state than a customer who opened yesterday as part of a consistent daily pattern. Time isn’t a timestamp. It’s a dimension of meaning.

Situational — why this moment is different from the last similar one.

Time of day. Day of week. Season. Calendar events. Competitive context. What else is happening in the customer’s inbox, in the market, in the world. The optimal action at 9 AM on a Monday differs from the optimal action at 9 PM on a Saturday. A Diwali campaign means something different to a customer who just received three other Diwali campaigns from competitors.

Traditional systems capture identity reasonably well. They capture behaviour partially — as isolated events rather than trajectories. They rarely capture temporal dynamics. And they almost never capture situational context.

Context graphs integrate all four into a unified, queryable structure.


Anatomy – 2

How This Differs from What Exists

This is why context graphs are fundamentally different from CDPs and journey maps.

CDPs store attributes. A CDP can tell you that Arun purchased twice, opened an email 18 days ago, and browsed the jacket category last week. These are facts. They are accurate. They are also insufficient for deciding what to do next.

A context graph adds meaning: Arun is three weeks into an engagement decline, his browse-to-buy ratio suggests interest without intent, and his response to recent promotional messages has been negative. A blast right now will accelerate drift rather than reverse it.

CDPs answer “what do we know?” Context graphs answer “what does it mean?”

Journey maps assume paths. A journey is a predetermined sequence: if customer does X, then do Y; if customer doesn’t respond, wait Z days and do W. Journeys encode assumptions about how customers should move, not observations of how customers actually move.

A context graph doesn’t assume paths. It observes movement. It sees that Arun is drifting even though he hasn’t triggered any journey exit conditions. It sees that the path he’s on isn’t working, even though the journey logic says he’s still “in sequence.”

Dashboards show aggregates. A dashboard shows that overall open rates are stable, that the Engaged segment contains 340,000 customers, that click-through rates are within range. These aggregates are accurate. They are also averages that mask individual trajectories.

A context graph maintains individual state for every customer. It knows that Arun is fading even while the segment he’s in looks healthy. It sees the variance that dashboards hide.

They do not ask: Which segment is this customer in?

They ask: What is happening to this relationship right now?

**

The Negative Space

Most importantly, context graphs remember what today’s systems forget: the reason behind restraint.

Why a message was not sent. Why an offer was withheld. Why silence was chosen. Why an alternative was rejected.

This negative space — the decisions not taken — is where intelligence lives.

Today’s systems record sends but not suppressions. They log offers accepted but not offers avoided. They track messages delivered but not messages considered and rejected. The absence of action leaves no trace.

But restraint is often the right decision. Silence is often the optimal message. Not sending is often more valuable than sending. A system that cannot remember why it chose restraint cannot learn when restraint works.

Context graphs make the negative space visible. They capture not just what was done, but what was considered and declined. They turn suppression into a traceable decision, silence into a queryable choice, restraint into learnable intelligence.

**

A Simple Test

Here’s a way to assess whether your current systems have context:

Can they answer these questions?

  • Why was this message suppressed for this customer?
  • Why was this timing chosen over an earlier or later moment?
  • Why was an offer avoided when the customer qualified for one?
  • Why was silence preferred to action?
  • What alternatives were considered before this message was selected?
  • What signals suggested this was the right moment?
  • What precedent informed this decision?

If the answer is no — and for most marketing systems, it is — then the system has no decision memory. It executes, but it doesn’t remember why. It acts, but it can’t explain. It optimises locally, but it cannot learn globally.

Context graphs make these questions answerable. Not through analytics after the fact, but through traces captured at decision time.

**

The Mental Model

A useful way to think of a marketing context graph is as a living map of the relationship.

It is continuously updated — every interaction, every signal, every decision adds to it.

Every interaction writes to it. Sends, suppressions, opens, ignores, conversions, complaints — all leave traces that update the map.

Every decision reads from it. Before any action is taken, the system consults the graph to understand current state, trajectory, constraints, and precedent.

Over time, it becomes less about targeting and more about understanding. Less about “who should receive this campaign?” and more about “what does this customer need right now, and do we have it?”

The map isn’t static. It evolves as the relationship evolves. It captures not just where the customer is, but where they’re heading. Not just what happened, but what it means.

**

What This Isn’t

A context graph is not:

  • A new dashboard — it’s infrastructure, not interface
  • A journey builder — it observes movement rather than prescribing paths
  • A recommendation engine — it provides context for decisions, not the decisions themselves
  • A data warehouse — it’s operational and real-time, not analytical and batch
  • A CDP replacement — it’s the layer that makes CDPs actionable

It’s the substrate that enables decisions to be informed, traced, and improved. The layer between data and action that has always been missing.

Journeys assume paths. Context graphs observe movement.


Three Capabilities Only Context Graphs Enable

Context graphs are not interesting because they are elegant. They are interesting because they enable decisions that today’s systems cannot make reliably.

This isn’t about better algorithms or smarter AI. It’s about structural capability — decisions that require context as a prerequisite. Without decision memory, these capabilities don’t exist. With it, they become natural.

Three use cases illustrate this clearly.

  1. Send vs Suppress: Restraint as Intelligence

The problem: Most marketing systems default to sending.

Frequency caps exist, but they are blunt instruments. They limit volume, not judgement. A cap says “don’t send more than three emails this week.” It doesn’t say “this customer is showing early fatigue signals; even one more email will accelerate drift.”

Restraint, when it happens, is accidental. A rule fires. A threshold triggers. A customer falls outside a segment. These aren’t decisions. They’re side effects.

The result is predictable: too many messages, accumulated fatigue, eroded attention. Relationships degrade not from bad content but from relentless volume.

What context graphs enable:

Context graphs make suppression a first-class decision.

Instead of asking “Can we send?”, the system asks “Should we?”

Fatigue is not inferred after disengagement; it is tracked as a rising state. The system sees that Arun’s engagement trajectory is declining, that his response to recent messages has been negative, that his attention is finite and currently overtaxed. It decides — actively, with reasoning — that the optimal action right now is no action at all.

This decision gets traced. “Suppressed promotional message for Arun due to fatigue signals. Engagement declining 40% over six weeks. Previous three messages ignored. Restraint preferred to preserve future receptivity.”

Later, the system evaluates: Did restraint work? Did engagement stabilise? Did he respond better to the next message, sent after a deliberate pause?

The result is paradoxical but consistent: fewer messages, higher engagement, lower reacquisition costs later.

Silence becomes intentional. The decision to wait is recorded, contextualised, and evaluated.

Restraint stops being a loss. It becomes an investment.
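A minimal sketch of this decision logic, with invented signal names and thresholds: the point is that suppression is an explicit branch whose reasoning travels with the action, rather than a silent side effect of a cap or a segment boundary.

```python
def decide(state: dict) -> dict:
    """'Can we send?' becomes 'Should we?' (thresholds are illustrative)."""
    trend = state["engagement_trend_6wk"]    # e.g. -0.40 means down 40%
    ignored = state["consecutive_ignored"]   # messages ignored in a row
    if trend <= -0.30 and ignored >= 3:
        return {
            "action": "suppress",
            "reasoning": (
                f"Fatigue signals: engagement {trend:+.0%} over six weeks, "
                f"previous {ignored} messages ignored. "
                "Restraint preferred to preserve future receptivity."
            ),
        }
    return {"action": "send", "reasoning": "No fatigue signals above threshold."}

decision = decide({"engagement_trend_6wk": -0.40, "consecutive_ignored": 3})
print(decision["action"])     # suppress
print(decision["reasoning"])  # the trace that makes restraint evaluable later
```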

  2. Timing the Intervention Window: Trajectories Over Thresholds

The problem: Traditional marketing detects failure too late.

Customers are marked “at risk” only after disengagement crosses a threshold. Ninety days without purchase. Thirty days without an open. Score drops below 50.

But thresholds are finish lines, not starting guns. By the time a customer crosses the line, the relationship has already cooled. The window for easy intervention has closed. This is why win-back campaigns have such poor response rates — they’re attempting resuscitation on relationships that died weeks earlier.

What context graphs enable:

Context graphs detect direction. They see the slope, not just the cliff.

A customer whose engagement is decaying rapidly is not treated the same as one whose engagement is slowly tapering. A customer who opened an email yesterday after two weeks of silence is in a different state than one who opened as part of a consistent daily pattern.

The system identifies windows of influence — moments where a small, timely intervention can reverse trajectory. It sees that Arun is “entering fade” not because he’s crossed a threshold, but because his pattern has shifted. Opens declining. Time between interactions lengthening. Browse depth shallowing.

This shifts marketing from repair to prevention. Instead of win-back campaigns, you get stay-with moments. Instead of shouting louder at a customer who’s tuned out, the system adjusts while the customer is still tuned in.

The economics are stark. Early intervention is cheap. Win-back is expensive. Reacquisition is ruinous.

Smaller actions, earlier, outperform larger actions later. But only if you can see early.
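The "slope, not cliff" idea reduces to something concrete: fit a trend line to recent engagement and act on its direction, not on an absolute threshold. A minimal sketch — the weekly numbers and the -0.5 slope cutoff are invented for illustration:

```python
# Trajectory detection sketch: flag "entering fade" from the slope of
# recent weekly engagement, regardless of whether any absolute
# threshold has been crossed. The cutoff value is an assumption.
def engagement_slope(weekly_opens: list[float]) -> float:
    """Least-squares slope of engagement over equally spaced weeks."""
    n = len(weekly_opens)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(weekly_opens) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, weekly_opens))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def entering_fade(weekly_opens: list[float], cutoff: float = -0.5) -> bool:
    return engagement_slope(weekly_opens) < cutoff

# Two customers at the same level this week — opposite trajectories:
steady = [4, 4, 5, 4, 4, 4]   # slope near zero: stable
fading = [8, 7, 6, 5, 4, 4]   # steep decline: intervene now

# entering_fade(steady) → False; entering_fade(fading) → True
```

Both customers would sit in the same "Engaged" segment today; only the slope distinguishes the one who needs a stay-with moment from the one who needs nothing at all.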

  3. Offer Governance: Discounts with Memory

The problem: Discounts today are reactive.

Rules trigger offers. Customer in win-back segment? Send discount. Cart abandoned? Offer 10% off. Inactive for 60 days? Deploy incentive.

There’s no memory. The system doesn’t know this customer received four discounts in the past quarter. It doesn’t know the last discount didn’t drive a purchase — it just delayed one that would have happened anyway. It doesn’t know this customer has been trained to expect 20% off, eroding margin with every transaction.

History is shallow. Precedent is ignored. Over time, margin leaks through ungoverned generosity.

What context graphs enable:

Context graphs make incentives part of the decision record.

Before extending an offer, the system considers:

  • Prior incentives given to this customer
  • Behavioural response to those incentives
  • Long-term impact on willingness to pay
  • Margin constraints and contribution history
  • Comparable decisions for similar customers

The decision gets traced. “Offered 10% to Customer X: 180 days since last purchase, no discount in past 6 months, historical response rate 3x baseline, predicted incremental margin positive.” Or: “Withheld discount from Customer Y: received 4 discounts in past quarter, conversion rate identical with and without offers, margin erosion pattern detected.”

These traces create precedent. Over time, the system learns which customers respond to incentives and which are merely subsidised by them.

Offers stop being reflexes. They become governed decisions. Marketing learns not just what converts, but what conditions behaviour.

Discounts without memory become entitlements. Discounts with context become investments.
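The governance check above can be sketched as a small decision function that returns both the choice and the reasoning to be traced. Everything here — the two-discounts-per-quarter cap, the lift threshold, the field names — is an invented illustration of the pattern, not a prescribed policy:

```python
from dataclasses import dataclass

# Offer governance sketch: consult the customer's incentive history
# before extending a discount, instead of firing a stateless rule.
@dataclass
class IncentiveHistory:
    discounts_last_quarter: int
    conversion_rate_with_offer: float
    conversion_rate_without_offer: float

def should_offer_discount(h: IncentiveHistory) -> tuple[bool, str]:
    """Return the decision plus the reasoning that will be traced."""
    if h.discounts_last_quarter >= 2:
        return False, "withheld: discount frequency already high this quarter"
    lift = h.conversion_rate_with_offer - h.conversion_rate_without_offer
    if lift <= 0.01:
        return False, "withheld: offers show no incremental lift (subsidy, not incentive)"
    return True, f"offered: incremental lift {lift:.0%} over baseline"

# Customer Y from the text: four discounts last quarter, no lift.
decision, reasoning = should_offer_discount(
    IncentiveHistory(discounts_last_quarter=4,
                     conversion_rate_with_offer=0.05,
                     conversion_rate_without_offer=0.05))
# decision → False; reasoning explains why, and both become precedent
```

The return value is deliberately a pair: the decision without its reasoning would be just another rule firing, while the reasoning is what accumulates into precedent.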

**

The Common Thread

These three capabilities — restraint, early intervention, governance — share a structural requirement: decisions must be traceable and learnable.

| Capability   | Before Context Graphs       | After Context Graphs               |
|--------------|-----------------------------|------------------------------------|
| Restraint    | Accidental (frequency caps) | Intentional (strategic silence)    |
| Intervention | Late (threshold-triggered)  | Early (trajectory-detected)        |
| Governance   | Stateless (rule-based)      | Precedent-aware (context-evaluated)|

Marketing has always made these decisions. Send or suppress. Intervene now or later. Offer or withhold.

The difference is whether those decisions are made blindly and forgotten instantly — or made with context and remembered permanently.

Context graphs don’t optimise campaigns. They govern decisions.

15

Arun + Maya (After)

The change does not announce itself with dashboards turning red or green. It arrives quietly.

**

Maya’s role no longer revolves around launching campaigns on a calendar. She still plans moments — festivals, product drops, seasonal pushes — but the work has shifted from orchestration to observation. She watches movement, not metrics. Trajectories, not thresholds.

A few days before the Diwali campaign is scheduled to go out, her system flags Arun.

Not as disengaged. Not as dormant. Not even as “at risk.” The signal is subtler: engagement velocity declining faster than normal. Recent browsing without conversion. High probability of fatigue if messaged today.

The recommendation is unglamorous: do nothing now.

In the old world, Maya would have ignored it. Silence felt like surrender. Campaigns were designed to maximise reach, not judgement. If someone qualified for “Engaged,” they were included. Arun would have received the same subject line as everyone else — festive emojis, urgency, a limited-time discount.

This time, Maya trusts the system. Not because it is perfect, but because it has earned her trust over time. She has seen what happens when the system suppresses intelligently — how engagement often recovers without intervention, how relationships stabilise when pressure is removed.

She doesn’t manage sends anymore. She manages relationships.

**

Arun does not receive the Diwali blast.

His inbox is quieter than usual. He doesn’t consciously notice the absence — and that is the point. The brand does not feel needy. It does not demand attention. It simply waits.

Two days later, a different message arrives. No sale banner. No countdown timer. A simple note: the jacket he browsed the previous week is back in stock. His size. Neutral tone. No urgency.

Arun opens it.

The message feels less like marketing and more like continuity — as if the brand remembered an unfinished conversation. He taps through, checks the details, and completes the purchase.

He does not feel persuaded. He feels understood.

Nothing dramatic has happened. No loyalty program triggered. No “personalisation moment” celebrated. But something important has changed: the relationship did not reset. It continued.

The brand stopped feeling random. It started feeling attentive. Like someone was paying attention without being intrusive. Like the relationship had a memory.

Arun didn’t notice better marketing. He noticed less noise.

**

From Maya’s side, the system records more than a conversion. It records the decision: suppression chosen, timing delayed, alternative path selected. It records the context in which restraint worked — and the outcome that followed.

That decision becomes precedent.

The next time a customer like Arun shows similar signals, the system remembers. It does not guess. It recalls. Over time, these memories accumulate into judgement.

This is the difference context creates. Marketing stops oscillating between noise and silence. It learns how to pace itself. Messages no longer feel random. They feel earned.

Maya stopped managing segments. She started managing motion.

**

The most telling change is what does not happen.

Arun does not drift into indifference. He doesn’t swipe away seven messages in a month. The brand doesn’t become wallpaper.

Months later, Maya does not have to pay to win him back through ads. There is no reacquisition moment — because there was no silent loss. No budget spent to reach a customer whose email address already sat in the database. No celebrating a “new acquisition” who was actually a recovered relationship that never needed to break.

The half-trillion-dollar loop — Acquire → Ignore → Drift → Reacquire → Repeat — simply doesn’t engage. The cycle breaks not through heroic intervention, but through quiet continuity.

Arun didn’t decide to stay. He simply never felt the need to leave.

**

Marketing didn’t become louder or smarter in a visible way. It became calmer. More patient. More humane.

The infrastructure is invisible. The experience is not.

This is what continuity looks like when context is preserved.

16

Why Context Compounds

Most marketing advantages are fragile.

A new channel emerges. A new tactic spreads. A new tool promises lift. Competitors copy it within months. Advantage evaporates.

Context graphs are different because they compound.

The first source of advantage is time.

Context cannot be bought or rushed. It accumulates slowly through lived interaction — every message sent or withheld, every offer extended or avoided, every moment of restraint or urgency. Every decision adds to the graph. Every outcome refines its understanding. Every precedent expands its judgement.

A competitor starting today begins with no memory. No trajectories. No precedent. Even with similar tools, they see less clearly.

You cannot license accumulated experience. You cannot acquire institutional memory through a vendor contract. You cannot compress three years of decision history into a quarterly implementation.

Early builders see more clearly. Late entrants start blind.

Second, context graphs create learning loops that traditional systems cannot.

Because decisions are recorded alongside outcomes, marketing stops relearning the same lessons. What worked in one situation informs the next. What failed becomes a constraint, not a repeat mistake. Improvement becomes continuous rather than episodic.

This is a profound shift. Marketing no longer depends on quarterly retrospectives or post-campaign analysis to learn. Learning happens in-line, as part of execution. The system doesn’t just repeat what worked. It generalises. It recognises patterns across customers, across moments, across contexts. It develops judgement.

Over time, judgement improves automatically.

A system that has made a million traced decisions is not incrementally better than one that has made ten thousand. It is categorically better — because it has seen more of the possibility space and learned what works where.

Third, context graphs introduce switching costs of a new kind.

Leaving a platform today often means exporting data and reconfiguring workflows. Inconvenient, but manageable.

Leaving a context-aware system means losing something deeper: the accumulated understanding of how relationships evolved. The new system may have the same customers, but it does not know them. It does not know what was tried, what was withheld, or what nearly broke trust. The relationships between decisions, the outcomes that validated them, the patterns that emerged — these are not exportable as CSV files.

Restarting means starting blind.

The cost of leaving is the cost of forgetting.

Fourth, context graphs benefit from organisational inertia — in a good way.

They require teams to trust restraint, to think in trajectories, to accept that not acting can be the smartest action. These are cultural shifts, not configuration changes. Organisations that make this transition early build muscle memory others struggle to develop later.

Retrofitting context into systems designed without it is difficult — and the difficulty is not technical. It is conceptual. Organisations must change how they think about marketing: from campaigns to decisions, from segments to trajectories, from reach to restraint.

The companies that make this shift early gain time. The companies that delay face a widening gap — not just in capability, but in accumulated intelligence.

Finally, context graphs change the economics of marketing decisions.

When relationships are understood as evolving states rather than static segments, waste becomes visible. Over-messaging shows up as fatigue. Late intervention shows up as avoidable churn. Discounts without memory show up as margin erosion. Reacquisition reveals itself as a failure of memory, not reach.

The half-trillion-dollar loop — Acquire → Ignore → Drift → Reacquire → Repeat — becomes not just visible but measurable. And what can be measured can be fixed.

**

| Source of Advantage | Why It Compounds                                      |
|---------------------|-------------------------------------------------------|
| Time                | Cannot be compressed; early builders accumulate more  |
| Learning            | Every decision improves the system automatically      |
| Switching costs     | Leaving means losing accumulated intelligence         |
| Culture             | Requires mindset shift others struggle to develop     |
| Economics           | Makes waste visible and fixable                       |

Features can be copied quickly. Context compounds slowly. That’s why it lasts.

**

This is why context graphs are not a feature or a module. They are infrastructure — invisible when they work, painful when absent, and invaluable once established.

They do not promise magic. They promise something rarer in modern marketing: the ability to learn, remember, and act with restraint.

What Comes Next

Once marketing has context, other shifts become possible.

Autonomy — agents that can act without human intervention at every step, because they have the decision memory to act wisely.

Outcome-based economics — pricing models where vendors share in results, because results can finally be traced to decisions.

Attention stewardship — marketing that protects relationships rather than exploiting them, because the cost of exploitation is finally visible.

But those are consequences. Context is the cause.

The infrastructure described in this essay — decision traces, precedent libraries, trajectory awareness, strategic restraint — is not the end state. It is the foundation. What gets built on it will determine whether marketing continues its current trajectory (louder, more intrusive, less effective) or shifts toward something better (calmer, more intelligent, more humane).

The technology exists. The concepts are clear. The question is whether the industry is ready to stop optimising campaigns and start governing decisions.

And in a world drowning in messages, that restraint may be the ultimate competitive advantage.

Context doesn’t just change what marketing can do. It changes what marketing becomes.

17

Executive Summary

  1. Marketing’s Core Problem Is Not Lack of Data—It Is Lack of Context

Modern marketing systems collect vast amounts of customer data, yet fail to understand what is happening now. They remember events, not relationships in motion. The result: $500 billion wasted annually on reacquisition—paying Google and Meta to reach customers whose email addresses already sit in brand databases. This is the AdWaste loop: Acquire → Ignore → Drift → Reacquire → Repeat.

  2. Customers Don’t Churn Abruptly; They Fade Gradually

Most customers don’t leave in a dramatic moment. Engagement decays quietly. Attention thins. Interest wanes. Traditional dashboards, built on snapshots and thresholds, detect this too late—80% of engaged customers vanish every quarter without triggering a single alert. By the time churn is visible, the opportunity to prevent it has passed.

  3. Segments Describe Customers. Trajectories Describe Relationships.

Segments freeze customers in time. Context graphs track direction and speed. Two customers in the same “Engaged” segment may be moving in opposite directions—one deepening, one fading. Without trajectory awareness, marketing mistakes silence for stability and drift for loyalty. The segment says “Engaged.” The trajectory says “Leaving.” In every martech system on earth, the segment wins.

  4. CDPs Gave Marketing Memory—But Not Awareness

Customer Data Platforms unify data, but they are storage systems, not decision systems. A CDP can tell you Arun purchased twice and opened an email 18 days ago. It cannot tell you his engagement has declined 60%, that he’s three weeks from dormancy, or that a promotional blast will accelerate his drift rather than reverse it. CDPs answer “What do we know?” Context graphs answer “What should we do now?”

  5. Knowledge Graphs Store Facts. Context Graphs Encode Judgement.

Knowledge graphs represent entities and relationships at a point in time. Context graphs extend this by incorporating four dimensions: Identity (who the customer is to the brand), Behaviour (what patterns are changing), Temporal (how fast, which direction), and Situational (why this moment differs). They are optimised not for querying, but for action—especially by autonomous agents.

  6. Context Graphs Treat Decisions as First-Class Data

Every send, suppress, delay, offer, or channel choice is a decision. Today, marketing records outcomes but discards reasoning. Context graphs preserve decision traces—what signals were considered, what constraints applied, what alternatives were rejected, what happened as a result. The action survives. With context graphs, so does the reasoning.

  7. Restraint Becomes Intelligence, Not Risk

Without context, silence feels dangerous. With context, silence becomes strategic. Context graphs allow systems to decide when not to message—protecting attention and preserving trust. In a world drowning in noise, restraint becomes a competitive advantage. Fewer messages, higher engagement, lower reacquisition costs.

  8. Timing Matters More Than Targeting

Context graphs detect early trajectory shifts and identify narrow intervention windows where small actions change outcomes. Instead of win-back campaigns after disengagement, marketing intervenes earlier—quietly and precisely. Smaller actions, earlier, outperform larger actions later. But only if you can see early.

  9. Context Compounds—Which Is Why It Becomes a Moat

Context accumulates through lived interaction. It cannot be copied quickly or bought outright. Learning loops improve decision quality automatically. Switching costs rise because leaving means losing relationship memory, not just data. Early builders see more clearly. Late entrants start blind. Features can be copied quickly. Context compounds slowly. That’s why it lasts.

  10. Context Changes the Character of Marketing, Not Just Its Performance

Once context is visible, waste becomes a choice. Over-messaging is measurable. Late intervention is obvious. Discounts without memory reveal themselves as margin erosion. Marketing shifts from extraction to stewardship—from maximising reach to sustaining relationships. Customers don’t notice better marketing. They notice less noise. They don’t feel persuaded. They feel understood.

**

The Bottom Line

Context graphs are the missing layer between customer data and intelligent action. They do not replace campaigns, channels, or creativity. They make those elements coherent, timely, and accountable.

In a world drowning in data and noise, the organisations that win will not be the loudest—but the ones that remember, understand, and act with restraint.

Published by

Rajesh Jain

An entrepreneur based in Mumbai, India.
