
Synthesis: Product Design

You’ve worked through four lessons: how AI-native UX works, how to design for trust and explainability, which interaction patterns work between human and AI, and how to design generative features.

Individually, these are design tools. Together, they form a design logic: Lesson 1 asks HOW users interact with AI. Lesson 2 asks WHY they should trust the output. Lesson 3 asks WHO has control. Lesson 4 asks WHAT AI produces — and how users make the result their own.

The UX patterns (L1) shape how trust becomes visible (L2), what autonomy level you can offer (L3), and how iterative your generative output becomes (L4).

1. The Autonomy Spectrum

All four lessons converge on one central question: how much control does the user retain? From full user control (Copilot ghost text, where every suggestion is explicitly accepted) to full AI autonomy (autonomous agents acting independently) — every AI product positions itself on this spectrum.

For you as a PM: The design challenge isn’t finding the “right” point on the spectrum. It’s making the spectrum visible and configurable — an Autonomy Dial that users can adjust themselves as they build trust.
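One way to make the Autonomy Dial concrete is a per-user setting that gates which AI actions need explicit sign-off. This is a minimal sketch; the level names and the routine/non-routine split are illustrative assumptions, not from any specific product:

```python
from enum import Enum

class AutonomyLevel(Enum):
    SUGGEST = 1      # AI proposes, user explicitly accepts (ghost-text style)
    CONFIRM = 2      # AI prepares the action, user confirms with one click
    AUTONOMOUS = 3   # AI acts on its own; user is notified and can undo

def requires_confirmation(level: AutonomyLevel, action_is_routine: bool) -> bool:
    """Decide whether a proposed AI action needs an explicit user sign-off."""
    if level in (AutonomyLevel.SUGGEST, AutonomyLevel.CONFIRM):
        return True
    # Even in autonomous mode, non-routine actions fall back to confirmation.
    return not action_is_routine

# A new user starts in SUGGEST mode and dials up as trust grows.
print(requires_confirmation(AutonomyLevel.AUTONOMOUS, action_is_routine=True))   # False
print(requires_confirmation(AutonomyLevel.AUTONOMOUS, action_is_routine=False))  # True
```

The point of the sketch: the dial is a single, user-visible parameter, not a scattered set of feature flags.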

2. Trust Is Earned Through Recovery, Not Perfection


The Service Recovery Paradox shows that users who experience a problem and see it resolved successfully trust the product more than users who never had a problem at all. This is especially true for AI products, where errors are inevitable.

For you as a PM: Invest more UX budget in error states, undo functionality, and correction flows than in the happy path. The quality of your error handling defines the quality of your product.
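To make "invest in undo" tangible, here is a purely illustrative sketch: record a snapshot before every AI-applied change so any of them can be rolled back. The class and method names are assumptions for illustration:

```python
class RecoverableEditor:
    """Tracks AI edits so the most recent one can always be rolled back,
    i.e. the recovery path that the happy path never exercises."""

    def __init__(self, text: str):
        self.text = text
        self._history: list[str] = []  # snapshots taken before each AI edit

    def apply_ai_edit(self, new_text: str) -> None:
        self._history.append(self.text)
        self.text = new_text

    def undo(self) -> bool:
        """Revert the most recent AI edit; returns False if nothing to undo."""
        if not self._history:
            return False
        self.text = self._history.pop()
        return True

doc = RecoverableEditor("Hello")
doc.apply_ai_edit("Hello, world")
doc.undo()
print(doc.text)  # Hello
```

The design choice worth noting: recovery is a first-class data structure, not an afterthought bolted onto the UI.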

3. Progressive Disclosure as the Unifying Pattern


The same principle runs through all four lessons — only the expression changes. L1: result → detailed options → raw output. L2: result → confidence → sources → full reasoning chain. L3: suggestion → preview → action → audit trail. L4: generation → variations → refinement → final output.

For you as a PM: Reveal complexity proportional to user need. Not everything at once, but everything reachable. The art is in layering — not hiding.
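The layering above can be sketched as a response object that exposes one level of detail at a time, with the result always visible and everything else reachable on demand. Field names and the four-layer split are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AIResponse:
    """Progressive disclosure: result first, then confidence, sources,
    and the full reasoning chain, each revealed only on request."""
    result: str
    confidence: float
    sources: list[str] = field(default_factory=list)
    reasoning: str = ""

    def disclose(self, depth: int) -> dict:
        """Return the response truncated to the requested disclosure depth (1..4)."""
        keys_by_depth = {
            1: ["result"],
            2: ["result", "confidence"],
            3: ["result", "confidence", "sources"],
            4: ["result", "confidence", "sources", "reasoning"],
        }
        depth = min(max(depth, 1), 4)
        return {k: getattr(self, k) for k in keys_by_depth[depth]}

r = AIResponse("Refinance at 3.9%", 0.82, ["rate-sheet-2025"], "Compared 3 offers")
print(r.disclose(1))  # only the result
```

Layering lives in the data model, so every surface (chat, sidebar, tooltip) can draw from the same depths instead of inventing its own.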

4. The AI-Native Rethink

“It takes time to think AI-native” (Lenny’s Newsletter). Every successful AI product goes through three phases: Bolt-on (AI bolted onto existing UI), Integrated (AI embedded in workflows), and AI-native (product designed around AI). Most products are stuck in phase 1 — and users notice.

For you as a PM: The three phases aren’t a quality judgment — they’re a natural maturation process. But don’t stay stuck in phase 1. Every design decision should ask: am I still thinking in old UI patterns, or am I designing for what AI can actually do?

5. The Judgment Gap

Figma’s 2025 study reveals the divide: 78% of users say AI makes work faster — but fewer than 50% say it makes work better. Speed without quality isn’t a solution. It’s a new problem.

For you as a PM: Your UX challenge isn’t to replace human judgment with automation. It’s to help users apply their judgment more efficiently. Speed-to-judgment, not speed-to-output.

AI products rarely fail because of the technology. They fail because of trust design. The question isn’t “Can our model do this?” but “Does the user understand what’s happening — and can they intervene when it goes wrong?”

This is what sets AI design apart from traditional UX. You’re not just designing interfaces. You’re designing a relationship between a human and a probabilistic system — and relationships require transparency, control, and the ability to course-correct.

What you should now be able to do:

  • Choose the right interaction modality (chat vs. structured vs. hybrid) — Lesson 1
  • Communicate AI confidence and uncertainty appropriately — Lesson 2
  • Design trust through citations and “show your work” — Lesson 2
  • Set the right autonomy level for your use case — Lesson 3
  • Build feedback loops that actually improve the system — Lesson 3
  • Design generative features for iteration, not single-shot — Lesson 4
  • Manage user expectations with honest onboarding — Lesson 4

If any of these feel uncertain, go back to the relevant lesson. These design foundations determine whether users trust your AI product — or give up after three tries.

You design AI features. Chapter 4 gives you the technical language to speak with engineers as equals.

Three scenarios combining multiple concepts from this chapter. Think through your answer before revealing the solution.

Your team launched an AI-powered financial advisor chatbot. Response quality is high, but week-one user retention is only 15%. User interviews reveal: “I don’t know if I can trust this.” How do you redesign the feature?

Solution

The problem isn’t response quality — it’s trust design (Lesson 2). Three levers: First, “show your work” — make citations and reasoning chains visible so users can trace how the bot arrived at its recommendation. Second, build in confidence communication: where is the bot certain, where uncertain? Third, apply progressive disclosure (Connection 3): result first, then confidence, sources, and full reasoning chain on demand. Additionally, reconsider the autonomy level (Lesson 3) — for financial advice, a copilot approach (suggestion + user decision) is more trustworthy than an autonomous agent.
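The confidence communication in this solution can be as simple as mapping raw model scores to honest user-facing language. The thresholds and wording below are placeholders to calibrate per product, not recommendations:

```python
def confidence_label(score: float) -> str:
    """Translate a raw model confidence score (0.0 to 1.0) into honest
    user-facing wording. Thresholds are illustrative, not calibrated."""
    if score >= 0.9:
        return "High confidence: based on multiple consistent sources"
    if score >= 0.6:
        return "Moderate confidence: please review the cited sources"
    return "Low confidence: treat this as a starting point, not advice"

print(confidence_label(0.95))
print(confidence_label(0.40))
```

The key property is honesty at the low end: a blunt "low confidence" label builds more trust over time than a hedged non-answer.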

Your product team added an “AI write for me” button to an existing note-taking tool. Feature adoption is at 8% after three months, and feedback says: “It feels tacked on.” How do you approach the problem?

Solution

This is a classic bolt-on problem (Connection 4, AI-Native Rethink): AI was bolted onto an existing UI instead of being integrated into the workflow. The path leads from Phase 1 (Bolt-on) to Phase 2 (Integrated). Instead of a separate button, AI should be embedded in the writing flow — for example, as copilot-style ghost text (Lesson 3) that suggests while typing. This also addresses the Judgment Gap (Connection 5): users don’t want AI to write for them, they want AI to help them write better. Generative features should be designed for iteration rather than single-shot (Lesson 4) — offer variations, allow refinement, don’t present a finished result.

You’re designing an AI feature for an email tool that automatically suggests replies. During testing, power users want the AI to send routine emails directly. Occasional users want to review every suggestion first. How do you resolve this conflict?

Solution

This is exactly the autonomy spectrum (Connection 1): different users need different positions on the spectrum. The solution is an Autonomy Dial (Lesson 3) — configurable autonomy that users can set themselves. For new users, start in copilot mode (show suggestion, user confirms). As trust builds, users can increase the autonomy level — e.g., “send routine replies directly after my initial approval.” Crucially, even in autonomous mode you need recovery options (Connection 2) — undo, audit trail, notifications about sent emails. Trust is built through the ability to correct, not through flawlessness.


Sources: Building on Lessons 1–4. Lenny’s Newsletter “Counterintuitive Advice for Building AI Products” (2024), Figma AI & Design Report (2025), Service Recovery Paradox Research, Google PAIR Guidelines (2024), Nielsen Norman Group AI UX Studies (2024–2025)

Part of AI Learning — free courses from prompt to production.