From Clickstream to Control Plane

Reshuffle reframes AI as coordination without consensus. Andrei Pop reframes the enterprise moat as operational memory. That combination puts product analytics and digital adoption platforms at a fork in the road: optimize human clicking—or become infrastructure for governed execution.

The Age of Skeuomorphs

When new technologies arrive, we almost always try to understand them through the metaphors of the old. Cars were “horseless carriages.” The internet was the “information superhighway.” Early web pages read like brochures. Early banner ads behaved like print ads. We still call our handheld computers “phones.” We still “save” with a floppy disk icon. We “hang up” with a handset symbol most people under 25 have never physically used. We flip pages in e‑readers that have no pages, and we buy LED bulbs engineered to “flicker” like the candles they replaced.

This is technological skeuomorphism: we wrap the new in the language and shape of what came before until the true architecture becomes unavoidable.

We are doing the same thing with AI.

AI’s Skeuomorphic Phase

The dominant enterprise narrative is still anchored on a Phase 1 framing: automation, task replacement, “copilots,” productivity boosts. That framing isn’t wrong—it’s just incomplete. It treats AI as a better tool for doing yesterday’s work, rather than as a coordination technology that changes what work is, how it flows, and which constraints actually matter.

Reshuffle and Coordination Without Consensus

Sangeet Paul Choudary’s Reshuffle is an unlock because it forces you to stop thinking at the task level and start thinking at the system level. In an interview unpacking the book, he explains the automation frame as: you “run a certain set of workflows to create value as a business” and then “bring AI in to speed those things up,” which becomes “the automation view.” (source)

The claim is not that automation doesn’t happen—it will. The claim is that automation is a first-order effect. The second-order effect is the reshaping of coordination itself. One of Choudary’s cleanest phrases for this is “coordination without consensus”: “Actors don’t need to align on standards before they start speaking the same language or start seeing a shared view of the system.” (source)

That single sentence captures the deep shift. Organizations have historically had to earn coordination through standardization—shared schemas, shared processes, shared tools, shared “ways of working.” AI changes the cost of that coherence. It can synthesize coherence locally, on demand, without requiring the organization to be coherent globally first.

The Missing “HOW” Layer

This is where Andrei Pop’s framing becomes another unlock. If Choudary is telling you to look at the system, Andrei is telling you where compounding advantage actually lives once you do. After a year of building AI systems that execute real business processes in wealth management—opening accounts, processing transfers, generating compliance documentation, handling service requests—he argues that data quality and decision traces are necessary, but not sufficient. Data tells you WHO and WHAT. Decision traces tell you WHY. Neither tells you HOW to do the work.

The missing layer is what he calls operational memory: the mechanics of execution, the sequences, the edge cases, the firm-specific logic, the approval thresholds, the tool-specific quirks. It’s what new employees spend months absorbing—not “the CRM,” but how this firm actually gets things done. And the key insight is that this layer compounds because it is learned through execution, and each executed workflow makes the next one more reliable.

Where Operational Knowledge Actually Lives

Once you accept that operational memory is the compounding asset, the next question becomes uncomfortable: where does operational knowledge live today, at scale, across the modern enterprise? The answer is not “in people’s heads” or “in documentation.” It’s distributed. Some of it is tacit human knowledge. Some of it is written into process artifacts like SOPs, runbooks, case templates, and compliance manuals. Some of it is encoded in system configuration: approval matrices, workflow rules, routing logic, role-based permissions, and thresholds embedded in CRMs, ticketing systems, document workflows, and GRC tools. And some of it leaks out as behavioral exhaust: the click paths, retries, backtracking, tab switching, and cross-system hopping that reveal how humans compensate for fragmented systems.

The Product Analytics Fork

This is where a particular category—and its players—suddenly matters a lot more than most people realize. Product analytics and digital adoption platforms already sit on an enormous stream of behavioral exhaust. Pendo, Amplitude, Mixpanel, Heap, FullStory, Contentsquare, and others measure adoption, track paths, detect friction, and quantify where users struggle. Digital adoption platforms like Pendo (again), WalkMe, and Whatfix go further: they don’t just observe, they “patch,” guiding users through point-and-click flows and helping enterprises standardize behavior on top of messy software estates.

In the Phase 1 world, that’s a strong value proposition: make the human layer more efficient, reduce errors, increase adoption, smooth out the workflow.

But in a Phase 2 world—where agents become operators—the category hits a fork in the road.

It can remain a sophisticated point-and-click repair kit—identifying friction in human workflows and guiding users through them.

Or it can evolve into something more foundational: the instrumentation layer for governed execution in an agentic enterprise.

Why Behavioral Exhaust Is Necessary, Not Sufficient

It’s tempting to imagine that behavioral exhaust alone is enough: “If we can see what users do, we can teach agents to do it.” That’s the wrong conclusion. Behavioral telemetry is a fantastic map of entropy. It can show where coordination is expensive, where exceptions cluster, where humans are acting as the integration layer between systems. It can reveal the de facto workflows that exist in the wild, not the ideal workflows described in a process doc.

But it does not, by itself, encode the constraints that make execution safe. It captures motion, not mandate. And training an agent to replicate human coping behavior is how you automate inefficiency at scale.

What It Would Take to Build the Control Plane

If the goal is for agents to coordinate without coherence, the infrastructure challenge is subtler: you need a layer that can connect behavioral exhaust to the other substrates of operational knowledge—process artifacts and system configuration—and turn all of it into a policy-aware execution substrate. You need a control plane, not just a dashboard.

What would it actually mean for a product analytics or digital adoption platform to “become AI infrastructure” in this sense? It would mean shifting from event tracking to state modeling, from UI telemetry to workflow reconstruction, and from “help the user click” to “instrument governed execution.” The core job stops being “measure feature adoption.” The core job becomes “represent how work gets done under constraint—and make that representation usable by agents.”

That representation can’t be a pile of raw events. It has to be a coherent model. In practice, that means building a canonical workflow state model that can represent steps, transitions, approvals, required artifacts, exception conditions, and tool-dependent variants. It means identity resolution across systems so “the same client” isn’t three records with three IDs. It means ingesting permission models so the control plane can answer, deterministically, what an agent is allowed to do at a given moment, on a given record, under a given policy.

It also means ingesting and normalizing process artifacts in a way that respects that most process artifacts are not “process definitions.” They’re human-language approximations of lived reality. The platform would need to turn SOPs and runbooks into executable constraints: required documents, thresholds, escalation rules, forbidden actions, and the “if X then Y unless Z” patterns that define real operations. Modern language models can help here, but only if you treat them as translators into a structured representation that can be validated, versioned, audited, and tested—not as magical engines that “know” your process.
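To make “if X then Y unless Z” concrete, one minimal structured representation treats each rule extracted from an SOP as a triple of predicates that can be validated, versioned, and tested independently. Everything below, including the sample threshold, is a hypothetical illustration:

```python
from dataclasses import dataclass
from typing import Callable

Context = dict[str, object]  # facts known about the current workflow run

@dataclass
class Constraint:
    """An 'if X then Y unless Z' rule translated from a human-language SOP."""
    description: str
    applies: Callable[[Context], bool]      # X: when the rule is in scope
    requirement: Callable[[Context], bool]  # Y: what must hold
    exemption: Callable[[Context], bool]    # Z: override that waives the rule

    def violated(self, ctx: Context) -> bool:
        return self.applies(ctx) and not self.requirement(ctx) and not self.exemption(ctx)

# Hypothetical rule: "Transfers over $50,000 require dual approval
# unless the transfer was pre-authorized."
dual_approval = Constraint(
    description="Transfers over $50k need two approvers",
    applies=lambda c: c.get("amount", 0) > 50_000,
    requirement=lambda c: len(c.get("approvers", [])) >= 2,
    exemption=lambda c: bool(c.get("pre_authorized", False)),
)
```

The point of the shape is testability: because each predicate is a plain function over facts, a language model’s translation of an SOP can be exercised against known cases before any agent is allowed to act under it.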

Then there’s the system configuration layer, which is the most underappreciated substrate of operational knowledge. A huge amount of “how work gets done” is already encoded in workflow engines and admin consoles: CRM workflows, ServiceNow flows, approval rules, DocuSign routing, case management automation, identity and access policies. A platform that wants to become agent infrastructure would need connectors that extract this configuration as a living policy graph and keep it up to date. It would need to reconcile “what the systems say should happen” with “what humans actually do” as revealed by behavioral telemetry, and then tie both to outcomes.
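Reconciling “what the systems say should happen” with “what humans actually do” can start as simple set arithmetic over workflow transitions, once both are extracted into the same representation. The step names below are invented for illustration:

```python
# Transitions as (from_step, to_step) pairs.
# `configured` comes from workflow engines and admin consoles;
# `observed` comes from behavioral telemetry.
configured = {
    ("intake", "kyc_check"),
    ("kyc_check", "funding"),
    ("funding", "compliance_archive"),
    ("funding", "done"),
}
observed = {
    ("intake", "kyc_check"),
    ("kyc_check", "funding"),
    ("kyc_check", "manual_review"),   # humans route exceptions here
    ("manual_review", "funding"),
    ("funding", "done"),
}

drift = observed - configured       # lived workflow the config doesn't know about
dead_paths = configured - observed  # configured paths nobody actually takes
```

Drift is where the de facto process has outgrown the configured one; dead paths are where the configuration describes work that no longer happens. Both are inputs to the policy graph, not noise.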

Finally, it would need a new kind of trace. Not just decision traces (“why did we choose this?”) and not just click traces (“what did the user click?”), but execution traces that make work legible end-to-end: the intent that initiated a workflow, the policy checks that constrained it, the actions taken across tools, the approvals obtained, the exceptions encountered, the retries performed, and the outcome produced. These traces need correlation IDs, stable state transitions, and a storage model that’s effectively append-only for auditability. This is where observability moves from “product analytics” into something closer to OpenTelemetry-style distributed tracing—but for business workflows instead of microservices.

And once you have that, you can actually build the learning loop Andrei is pointing at. Operational memory compounds when execution reliability improves over time. You can measure that slope: the human-intervention rate per workflow, the exception rate over time, the time-to-completion distribution tightening as the system learns, the percentage of edge cases handled without escalation, the reduction in rework loops that used to be invisible. The platform becomes less like a reporting tool and more like an execution substrate that learns, because every governed run produces new operational memory.

Humans More Human

This is also where the human story becomes non-negotiable. Daniela Amodei, cofounder and president of Anthropic, put it bluntly in a recent interview: “Humans plus AI together actually create more meaningful work.” (source) In the same context, she adds: “The things that make us human will become much more important.” (source)

If you take that seriously, then the goal of agentic systems is not “eliminate humans.” The goal is to eliminate coordination drag—so humans can do judgment, empathy, trust-building, and responsibility-bearing work that organizations currently suffocate under operational friction.

An Invitation to the Category

A year ago, I developed a thesis that this product analytics / digital adoption category was sitting on the early signals of a much larger infrastructure opportunity. At the time, the market was still in Phase 1, obsessed with copilots and surface-level productivity. Even when “AI features” appeared, they mostly improved the point-and-click experience. The architecture had not yet snapped into focus. In hindsight, it was early.

It isn’t early anymore.

Agents are moving from novelty to production, and the enterprise is being forced to confront a new question: not “can the model answer?” but “can the system operate?” In Choudary’s language, the risk is getting stuck in a framing error—staring at task-level optimization while missing the system-level reshuffle. The winners in this next phase will be the companies that step out of the theatre and build the missing infrastructure layer.

So here’s the invitation to the market: one of the incumbents in product analytics or digital adoption will decide that optimizing human clicks is not enough. One of them will attempt the architectural pivot toward becoming a control plane for governed execution and an observability layer for digital workers. And if none of them do it—if they remain skeuomorphic, grafting “AI” onto yesterday’s product category—then a new entrant will build it from first principles.

Because in an agentic enterprise, someone must own the HOW layer. Someone must connect process artifacts, system configuration, and behavioral exhaust into structured operational memory. Someone must capture execution traces that improve reliability over time. That’s not a feature roadmap. That’s a new layer of the enterprise stack.

Entropy, Reliability, and the Next Stack

The companies that keep optimizing point-and-click experiences will improve interfaces. The company that builds the control plane for digital workforces will define the next enterprise stack. Clickstream shows entropy, but execution traces create reliability. Reliability compounds into operational memory—and operational memory is the real infrastructure of an agentic world.
