The Rise of the Choice Architect
In 2020, almost three years before LLMs became mainstream, I wrote an article called "The Science of User Experience." In it, I made two predictions that aged well. I want to unpack both — because the world has caught up to the first one, and it's about to catch up to the second. And the second one is the reason the first one happened.
Prediction One: The Generalist PM Wins
In that article, I wrote about a problem I saw everywhere — companies confusing product managers with project managers, putting "Product Manager / Product Owner" with a slash in job postings as if these are interchangeable, hiring people who passed the PMI-ACP exam and calling it a day.
I argued that the product management field would inevitably consolidate around generalist PMs. That the "sub-product managers" — the ones who just push tickets, echo sales requests, and organize standups — would fade into irrelevance. That the role would be redefined around people who understand users, psychology, culture, and business deeply enough to make real product decisions, not just process decisions.
LLMs made it happen. And fast.
AI now handles the tasks that junior and process-oriented PMs used to build their careers on — writing specs, organizing feedback, tracking competitors, summarizing research. Claire Vo gave a talk at Lenny's Summit called "Product Management is Dead." Entry-level PM roles are shrinking. The people who treated product management as a coordination job are being squeezed out.
Meanwhile, the generalist PMs — the ones who understand why a product should exist, not just how to ship it — are more in demand than ever. PM demand is growing in SaaS, fintech, AI, and enterprise software. Over 6,000 open PM roles worldwide in 2025, the most in over two years. The role isn't dying. It's being filtered.
This was prediction one. It landed.
But why did it land? Why are the generalists surviving while the process people are getting replaced? What is the generalist PM actually doing that AI can't?
That question leads directly to prediction two.
Prediction Two: Choice Architecture Is the Core
In the same 2020 article, I wrote about Richard Thaler and Cass Sunstein — the Nobel laureate and the Obama-era "regulatory czar," respectively — who coined the term "choice architecture" and the role of "choice architect" in their book Nudge.
Thaler and Sunstein defined a choice architect as the person responsible for organizing the context in which people make decisions. They built an entire theory around how governments and organizations can influence human behavior by redesigning the environment of choice, without restricting options — what they called "libertarian paternalism."
Their work was so valued that special state units were created around it. The UK's Behavioural Insights Team — the "Nudge Unit" — originally part of the Cabinet Office. Sunstein running the Office of Information and Regulatory Affairs. Thousands of scientists employed by governments to study cognitive errors and optimize citizens' choices.
Building on their foundation, I proposed in my article that in the context of IT and product development, choice architecture comes down to two specific elements:
- A description of the mind and the cognitive biases to which the target audience is exposed.
- Decisions on how to order and display the elements involved in the choice that audience is meant to make.
And that the product manager — the person "organizing the context in which users make decisions in the application" — is, by definition, a choice architect.
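To make those two elements concrete, here is a minimal sketch of the structure they describe. This is my illustration, not something from the 2020 article, and the field names are invented:

```python
from dataclasses import dataclass

@dataclass
class ChoiceArchitecture:
    # Element one: a model of the audience's mind
    audience_biases: list[str]    # e.g. ["anchoring", "mere-exposure", "loss aversion"]
    # Element two: deliberate decisions about the decision environment
    element_order: list[str]      # what the user sees, and in what sequence

# A hypothetical checkout flow expressed in these terms
checkout = ChoiceArchitecture(
    audience_biases=["anchoring", "scarcity"],
    element_order=["recommended plan", "price comparison", "single CTA"],
)
```

Every screen, pricing page, and onboarding flow is an instance of this structure, whether or not anyone wrote it down.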
I argued that this isn't just a nice framing. It's the core of the job. That understanding cognitive biases, behavioral patterns, and the psychology of decision-making isn't a "nice to have" for PMs — it's the foundational skill that separates real product people from backlog administrators.
This is the answer to the question above. The generalist PM survives because the generalist PM — whether they use this language or not — is doing choice architecture. They understand the audience's mind. They make deliberate decisions about how to arrange the elements of a choice. Everything else — the specs, the tickets, the standups — was always just scaffolding. AI ate the scaffolding. What's left is the actual structure.
And now, in 2026, that structure matters more than it ever did. Because AI has entered the game as a choice architect in its own right.
AI Is Already a Choice Architect
Here's what's changed since I wrote that article: the entity designing the environment in which you make your daily decisions is increasingly not a human. It's an algorithm.
Stuart Mills, a researcher at the LSE, coined the term "autonomous choice architect" to describe AI systems that nudge — algorithms powered by data streams, infused with an objective they've been programmed to maximize, continuously redesigning the environments in which people make choices.
Think about it. The Facebook News Feed algorithm curates about 300 posts per day for you out of roughly 1,500 possible ones. That's choice architecture — element one (a model of your biases, preferences, engagement patterns) and element two (decisions on the order and presentation of content). Netflix's homepage, the "Amazon's Choice" badge, Google's search rankings, TikTok's For You page — all the same thing. Choice architecture, executed by machines.
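As a toy illustration (invented code, not any platform's actual system), the whole mechanism fits in a few lines:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # element one: a learned model of this user's preferences and biases

def curate_feed(candidates: list[Post], slots: int = 300) -> list[Post]:
    """Element two: decide which posts appear, and in what order."""
    ranked = sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)
    return ranked[:slots]  # ~300 shown out of ~1,500 candidates
```

All the sophistication lives in the scoring model; the arrangement itself is just a sort and a cut.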
And it goes further than static arrangements. These AI systems now do what researchers call hypernudging — continuous, real-time adaptation to your individual behavior, exploiting your specific cognitive biases at a scale and speed no human could match. A reinforcement-learning algorithm can test hundreds of different nudges at different times of day, figure out which ones you're most susceptible to, and deploy them without anyone's approval.
Your smartphone already has enough data to build a personalized profile of your biases. Combine that with a bag of pre-determined nudges and a learning algorithm, and you have a system that determines with high accuracy which nudge will work on you, specifically, right now.
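A minimal sketch of that loop, assuming an epsilon-greedy bandit over a bag of nudges (the nudge names and the response signal here are invented for illustration):

```python
import random

NUDGES = ["scarcity_banner", "social_proof", "default_optin", "loss_framing"]

class NudgeBandit:
    """Learns, per user, which nudge draws the strongest response."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon                  # how often to explore
        self.counts = {n: 0 for n in NUDGES}
        self.values = {n: 0.0 for n in NUDGES}  # running mean response per nudge

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(NUDGES)        # explore: try a random nudge
        return max(NUDGES, key=lambda n: self.values[n])  # exploit: deploy the best so far

    def update(self, nudge: str, response: float) -> None:
        self.counts[nudge] += 1
        # incremental mean: converges on each nudge's true effect on this user
        self.values[nudge] += (response - self.values[nudge]) / self.counts[nudge]
```

Run a few thousand impressions through this and it converges on whichever nudge this particular user responds to most, with no human approval anywhere in the loop.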
This sounds like it should make human choice architects obsolete. More data, faster iteration, personalization at a scale no human team could achieve. Why would you need a person when the machine can architect choices better?
Because it can't. Not everywhere. And not for the reasons most people think.
Where the Algorithm Breaks Down
An autonomous choice architect works beautifully when you have massive user bases, fast feedback loops, and the freedom to experiment at scale. Consumer apps. Social media. E-commerce. These are playgrounds for algorithmic nudging.
But the world isn't all consumer apps (by most estimates, consumer software accounts for no more than about a quarter of the overall software market).
There are entire B2B domains where you simply cannot run thousands of A/B tests. Your user base is 200 enterprise clients, each with unique workflows and procurement processes. You don't get the luxury of statistical significance. You don't get fast iteration cycles. You get one shot at a sales demo, one shot at an onboarding experience, one shot at a pricing page — and if the choice architecture is wrong, the deal is lost for a year.
There are sensitive domains — healthcare, finance, education, government — where you cannot even allow yourself to hand choice architecture to an algorithm. Where the ethical, legal, and human cost of a misfire isn't a dip in conversion rates — it's a patient making a wrong decision about treatment, a citizen misunderstanding a policy, a student being nudged toward the wrong path. In these contexts, the choice architecture must be deliberate, accountable, and human.
There are cases where regulations prohibit the kind of personalized nudging that AI excels at. Where transparency requirements mean you can't use opaque algorithms to arrange people's choices. Where the very idea of "optimizing" a human decision feels wrong — because the decision is too consequential, too personal, too contextual for a system trained on aggregate patterns.
Algorithmic choice architecture is powerful. But it is not universal. And the places where it fails are precisely the places where the human choice architect matters most.
And even in the places where algorithms do work — even in consumer apps and e-commerce — there's a limitation that goes deeper than domain constraints.
The Outside World
People's desires, fears, and motivations don't exist in a dataset. They are shaped by what happens in the real world — not solely online. A war starts. An economy shifts. A cultural movement gains traction. A generation grows up with different values than the one before. A pandemic rewires how people think about risk, trust, and proximity.
These shifts generate a constantly moving landscape of cognitive and emotional context that influences every choice a person makes. And the signals are everywhere — in conversations, in street-level trends, in the way a city feels different after an election, in the subtle cultural shifts you pick up only by living among people and paying attention.
An AI can monitor online signals. It can scrape sentiment. It can track engagement metrics. But it cannot walk through a neighborhood and sense the shift. It cannot sit in a meeting with a nervous enterprise client and read the room. It cannot feel that this particular moment in a particular culture makes a certain message land differently than it would have six months ago.
The best choice architects I've observed are the ones who absorb these real-world signals and translate them into product decisions. They catch something from the outside — a trend, a fear, a cultural undercurrent — and they know how to map it onto the biases of their audience and the arrangement of their product's decision environment.
AI can process what has already been captured. It cannot catch what hasn't been captured yet. And in a world where the environment of human decision-making is constantly being reshaped by forces outside any dataset, that gap is permanent.
This means the human choice architect operates at an altitude the algorithm cannot reach. But it also means they operate at every altitude below it — because the skill that reads the room at a macro level is the same skill that decides where to place a button at a micro level. It's all one game.
Two Layers of the Same Game
This is what took me years to articulate clearly. Whether you're a PM optimizing a checkout flow or a strategist crafting a narrative that unites people around an idea, you're playing the same game at different altitudes.
Layer 1: Micro-choice architecture. Funnels, conversion rates, default settings, button placement, notification timing. A description of the user's cognitive biases + decisions on how to arrange the elements of their immediate choice. AI is extraordinarily good at this layer — and it should be, because micro-optimization is pattern recognition at scale.
But someone still has to decide what the funnel is for. What the default should default to. What the notification should make people feel.
Layer 2: Macro-choice architecture. Brand narrative, company mission, cultural positioning, the emotional and ideological context that surrounds a product before a user ever touches it. This is still choice architecture — still a description of the audience's cognitive biases + decisions on how to arrange elements that guide a choice. The choice is just bigger: do I trust this company? Do I identify with what they stand for? Do I want to be part of this?
Both layers follow the same equation. Both require the same foundational knowledge. The only difference is the altitude.
AI dominates layer 1. AI assists with layer 2. But no one — no system — can operate across both layers without a human who understands the mind deeply enough to connect the micro to the macro.
So what does that human actually need to know?
What a Choice Architect Knows
Cognitive science — not "I skimmed a list of 10 biases." Real depth. The kind where you look at a problem and immediately see which biases are at play — availability heuristic shaping perception, base rate fallacy distorting risk assessment, neglect of probability driving irrational behavior. The kind I documented across 105 biases with practical examples in UX Core. This isn't optional knowledge anymore. When AI can deploy personalized nudges at scale, the person directing that AI had better understand precisely what cognitive levers are being pulled.
Cultural context — because a nudge that works in Berlin will backfire in Bangkok. In my 2020 article I gave the example of using Spanish flag colors and a faded Sagrada Familia image to create subconscious familiarity through the mere-exposure effect. That level of cultural specificity only matters more as products go global and AI generates variations at machine speed. Someone has to know which variations are appropriate — and that takes human judgment rooted in real-world understanding.
AI orchestration — this is the new domain. Not coding AI. Directing it. Defining what the AI should optimize for, setting its constraints, interpreting its outputs, and — critically — overriding it when its optimization leads somewhere ethically unacceptable. The choice architect doesn't compete with AI. They govern it.
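What governing looks like in practice is less glamorous than it sounds. Here is a toy sketch, with invented tactic names and thresholds, of the kind of guardrail a choice architect sets around an optimizer:

```python
BANNED_TACTICS = {"hidden_fee_framing", "fake_countdown"}  # hypothetical tactic names
MAX_URGENCY = 0.7                                          # invented coercion threshold

def govern(proposal: dict) -> dict | None:
    """The optimizer proposes; human-set constraints dispose."""
    if proposal["tactic"] in BANNED_TACTICS:
        return None   # ethically unacceptable: override, whatever the conversion lift
    if proposal.get("urgency", 0.0) > MAX_URGENCY:
        return None   # too coercive, even if the metrics would improve
    return proposal   # within bounds: let it ship
```

The point isn't the code. It's that the constraints have to exist before the optimizer runs, and someone with judgment has to write them.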
Narrative architecture — the difference between a product people use and a product people believe in. This isn't copywriting. It's understanding how availability heuristics, mere-exposure effects, and cue-dependent memory work together to create belonging, familiarity, and trust. AI can generate a thousand versions of your message. Only a human can know which version carries the right meaning in the right moment.
Ethics as a load-bearing structure — Thaler and Sunstein built their theory around libertarian paternalism: nudging toward better choices without restricting freedom. In a world of autonomous choice architects and hypernudging, this isn't a nice-to-have. The person who decides how and toward what AI-powered nudges are directed holds enormous power. The constraints put on algorithmic choice architecture depend entirely on the foresight of the humans who set them — and humans are notoriously bad at seeing unintended consequences. Which means the choice architect's ethical judgment isn't decoration. It's structural.
This is a serious list. And that brings us to the hardest part.
The G.I. Joe Problem (Still Unsolved)
In my 2020 article, I highlighted a concept that Professor Laurie Santos called the G.I. Joe Fallacy: the widespread belief that "knowing is half the battle."
It's not. It never was.
You can read this article and nod along — "choice architecture, got it, sounds right" — and then go back to arguing about sprint velocity. Nothing will change.
The people who will become choice architects are not the ones who understand cognitive biases. They're the ones who changed their own behavior based on this knowledge. Who catch themselves falling for the anchoring effect while negotiating salary. Who notice the mere-exposure effect shaping their product preferences. Who feel the discomfort when they realize their "gut feeling" about a design decision is just the availability heuristic feeding them recent memories.
I said this in 2020 and I'll say it again: to apply this knowledge, it is not enough to remember these biases. You must involve them in your life. And that involvement will change your behavior, ideology, and worldview.
That's not a career change. That's a life change. Most people won't do it. The ones who do will be extraordinarily valuable — because they'll be the only humans qualified to govern AI systems that nudge billions of people every day.
The Only Way to Stay Relevant
Everyone is scared right now. Engineers, PMs, designers — nobody has it figured out. The fear is justified. AI is eating execution across every discipline. Stanford found that employment for young software developers has declined nearly 20% from peak. Entry-level PM roles are shrinking. A quarter of product-related professionals report severe burnout.
But if you're a product manager reading this, there is exactly one move that guarantees your relevance. Not learning to vibe-code (though that doesn't hurt). Not getting another certification. Not pivoting to "AI PM" because LinkedIn says it's a hot title.
The move is: go deep into cognitive and behavioral science.
Call it a rule of thumb. I believe it is the closest thing this profession has to an Ultimate Truth. And it is the only way — tiring as it is — to stay relevant day after day, constantly.
Because no matter what AI automates — specs, code, designs, analytics, A/B tests — it cannot automate the understanding of why people choose what they choose. How biases interact with culture, context, and emotion to produce decisions. The judgment that allows you to look at an AI's output and say: "This is technically optimal, but it's wrong — because it doesn't account for how this audience thinks, feels, and decides."
That understanding is what makes a choice architect.
Wolf Alexanyan
Yerevan, Armenia, 2026

