🎥 Increasing Researcher’s Collective POV in Research Repositories
Bayes’ Law in UX Research: From Urns to Users
NNG: Design Process Isn't Dead, It’s Compressed
Prototyping: The “Why-Not” Strategy - Designing for the Moments Where Users Stop
AI: “Computer?” — What Star Trek Got Right About AI and the Future of My Work as a Researcher
Experience: How Usability Testing Helped Us Rethink the First-Time Experience on WebMD’s Wellness App
Opinion: How I’d Use Codex Agents in Research and Product Design
Interesting: Great Graphics Don’t Make Great Games
Basics: Zero Stage to Orbit
@uxdigest
Research repositories need more than data—they need the researcher's point of view embedded through synthesis. AI can support discovery, but the goal is a "POV ladder" where stakeholders find strategic perspective, not just findings. Key themes: overcoming silos and preserving researcher judgment
Bayes’ Law in UX Research: From Urns to Users
Bayesian thinking in UX means updating beliefs with data. Given 18/20 users succeeded, is the true rate closer to historical 78% or aspirational 90%? Bayes' theorem makes the aspirational hypothesis 2.7x more likely. It's a way to quantify uncertainty, not just report a number
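The 2.7× figure in this summary can be reproduced with a simple binomial likelihood ratio. A minimal sketch (assuming equal prior odds for the two hypotheses, so the Bayes factor equals the posterior odds):

```python
from math import comb

def binom_lik(successes: int, n: int, p: float) -> float:
    """Binomial likelihood of the observed data given a true success rate p."""
    return comb(n, successes) * p**successes * (1 - p)**(n - successes)

# 18 of 20 users completed the task; which hypothesis fits better?
lik_aspirational = binom_lik(18, 20, 0.90)  # aspirational 90% rate
lik_historical = binom_lik(18, 20, 0.78)    # historical 78% rate

# With equal priors, the likelihood ratio is also the posterior odds
bayes_factor = lik_aspirational / lik_historical
print(round(bayes_factor, 1))  # → 2.7
```

The data favor the aspirational hypothesis by roughly 2.7 to 1, which is evidence, not certainty; a different prior shifts the posterior odds accordingly.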
NNG: Design Process Isn't Dead, It’s Compressed
As AI speeds up design work, the argument to "throw out the process" misrepresents how experienced designers work
Prototyping: The “Why-Not” Strategy - Designing for the Moments Where Users Stop
The real advantage isn't more data—it's observing the moments where users stop. Successful products remove social friction and anxiety. Strategy begins where users hesitate, not in spreadsheets
AI: “Computer?” — What Star Trek Got Right About AI and the Future of My Work as a Researcher
Star Trek's AI is ambient infrastructure that handles complex tasks while humans keep judgment and responsibility. For UX researchers, this means using AI for synthesis and pattern detection, but never outsourcing interpretation or ethics. The goal is technology that extends our capacity—not replaces it
Experience: How Usability Testing Helped Us Rethink the First-Time Experience on WebMD’s Wellness App
Usability testing revealed users loved WebMD's design but couldn't answer "Where do I start?" Key fixes: add labels to icons, prioritize personal metrics, make the homepage dynamic, and introduce onboarding guidance. Even great features fail if users don't understand how to access them
Opinion: How I’d Use Codex Agents in Research and Product Design
Use Codex agents for structure, not judgment. Start with narrow tasks like cleaning notes. Always review output—polished summaries can flatten nuance. Save recurring workflows. The goal is to remove friction, not replace the thinking that still needs you
Interesting: Great Graphics Don’t Make Great Games
Great graphics don't make great games—gameplay and storytelling do. Games like Minecraft and Stardew Valley prove simple visuals win when mechanics are innovative. Prioritize core gameplay over pixels
Basics: Zero Stage to Orbit
The design-to-development pipeline is a multi-stage rocket built to overcome translation overhead. With AI agents, orbit is available: intent moves directly to execution. The question is no longer how to optimize handoffs, but: why are you still launching from the ground? The gravity well was real. Now orbit is optional
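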
Medium
Increasing Researcher’s Collective POV in Research Repositories
Ideas from UX Researchers’ Guild book club
The Corporate Collapse of 2026
Why You Should Not Compute Medians for Individual Rating Scales
NNG: The Methodological Problems Hiding in Your Research Tools
Prototyping: Designing for Applause vs. Designing for People
Case Study: Scroll Patterns That Shape Our Emotions
AI: AI in UX Research - Real Examples of What Works and What Doesn’t
Experience: Learn From My Mistakes
Opinion: UX in 2026 - 7 Outdated Rules Designers Must Leave Behind
Basics: System vs. Process - Why Enterprise UX Must Go Beyond the UI
By 2030, 8.1 million U.S. knowledge-work jobs face displacement. The collapse unfolds in three phases: compression (quiet layoffs), disruption (AI-native insurgents undercut incumbents), and rebuilding (agent swarms, no middle management). Hardest hit: admin assistants, customer service, analysts. The only question is speed
Why You Should Not Compute Medians for Individual Rating Scales
For rating scales, medians are too coarse—they hide differences. In a real study, all 11 app medians were 4 or 5, while means ranged from 3.57 to 4.64. The takeaway: compute means, but don't overinterpret (no interval claims). Pragmatism wins
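The coarseness argument is easy to see in miniature: two rating distributions can share a median while their means differ substantially. A toy sketch (the vectors below are illustrative, not the study's data):

```python
from statistics import mean, median

# Hypothetical 1-5 ratings for two apps
app_a = [2, 3, 4, 4, 4, 5, 5]
app_b = [3, 4, 4, 4, 5, 5, 5]

print(median(app_a), median(app_b))                  # both 4: the median hides the gap
print(round(mean(app_a), 2), round(mean(app_b), 2))  # 3.86 vs 4.29
```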
NNG: The Methodological Problems Hiding in Your Research Tools
The methodological blind spots in UX research tools have always been a problem. Now that AI is planning and analyzing research, it's gotten worse
Prototyping: Designing for Applause vs. Designing for People
Designing for applause means copying beautiful screens from galleries without considering real users. The result: invisible text, slow animations, confusing navigation. Real users are commuters—they just want to get work done quickly. The solution: talk to users first, design for clarity, then make it beautiful. The best design is one users never think about
Case Study: Scroll Patterns That Shape Our Emotions
Social media feeds are behavioral environments that dissolve intention, remove stopping cues, and sustain scrolling through variable rewards. Users continue despite mild discomfort because nothing tells them to stop. The gap between intended (12 min) and actual session (34 min) shows control blurs silently. Pause is where control lives
AI: AI in UX Research - Real Examples of What Works and What Doesn’t
AI in UX research works best for mechanical tasks like transcription and coding, freeing time for deeper human work. It fails at contextual judgment, probing in interviews, and collaborative sense-making. The goal isn't speed—it's using reclaimed time for more meaningful research
Experience: Learn From My Mistakes
Building a research AI agent isn't about making it smart—it's about making it trustworthy. The breakthrough was replacing one all-purpose prompt with specialized branches, each with guardrails and intake questions. The real value is routing work to the right mode and designing for honesty when the agent doesn't know enough
Opinion: UX in 2026 - 7 Outdated Rules Designers Must Leave Behind
Seven outdated UX rules for 2026: more features ≠ better UX (clarity wins), one-size-fits-all is over (personalization rules), fewer clicks isn't the goal (intent matters), static interfaces feel outdated, speed alone isn't enough, usability without emotion fails, and UX without AI feels old. The shift is toward intelligent, intent-driven experiences
Basics: System vs. Process - Why Enterprise UX Must Go Beyond the UI
Enterprise UX can't stop at the interface—real friction lives in the surrounding workflow. Users may navigate the UI easily, but the bottleneck is often manual coordination and team handoffs. Optimizing the system without understanding the process yields only marginal gains. The real question isn't "how do we improve this screen?" but "why does this step exist at all?"
Substack
The Corporate Collapse of 2026
I ran the math on AI job displacement. It's going to hit sooner - and differently - than you think.
Persuasive Design: Ten Years Later
From Research Manager to Product Manager: The value of a Queen of the World doc
NNG: Statistical Significance Isn’t the Same as Practical Significance
Prototyping: Training design judgment, how to read products like a Senior Designer
AI: How to Get Structured User Feedback on Your AI Prototypes
Experience: The Role of Research in Design Decision-Making
Opinion: Why Your Research Always Feels Shallow
Basics: Designing safely when under pressure to ‘move fast and break things’
Interesting: Morning Traffic - Why is the other lane always moving faster?
Persuasive design has matured into behavioral design: a systematic, ethical approach. Key lessons: gamification fails without intrinsic motivation; frameworks now examine capability, opportunity, and context; behavioral thinking bridges discovery and ideation. The article provides a five-exercise workshop sequence to apply this. The difference between persuasion and deception is intention plus accountability
From Research Manager to Product Manager: The value of a Queen of the World doc
The Queen of the World doc is a personal tool: ask "If I were in charge, how would I design this?" and write your vision with evidence. It helps researchers articulate opinions, signal strategic value, and transition into product management
NNG: Statistical Significance Isn’t the Same as Practical Significance
Statistical significance helps establish whether a result is reliable, while practical significance helps determine whether it is worth acting on
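A hypothetical two-sample z-test shows how the two notions come apart: with a large enough sample, a one-second difference in task time is statistically reliable yet rarely worth acting on. All numbers below are made up for illustration:

```python
import math

# Two designs, mean task times differing by 1 second (sd = 20 s each),
# measured on n = 10,000 users per group
n, diff, sd = 10_000, 1.0, 20.0

se = sd * math.sqrt(2 / n)  # standard error of the difference in means
z = diff / se
print(round(z, 2))  # → 3.54, p < 0.001: statistically significant

# ...but practically negligible: a 1-second saving on a multi-minute task
```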
Prototyping: Training design judgment, how to read products like a Senior Designer
In an era of AI-generated UIs, the true differentiator is design judgment—the ability to weigh tradeoffs and predict where users fail. The Three-Layer Read builds this: 1) what you see, 2) the structural logic, 3) the real intent. Judgment isn't downloaded—it's built through deliberate practice
AI: How to Get Structured User Feedback on Your AI Prototypes
AI makes building fast, but validation hasn't kept pace—creating a "discovery deficit." Reforge's Prototype Testing closes this gap: AI-moderated interviews automatically synthesize findings. When testing is as easy as sharing a link, it becomes a normal step. Validate before you commit
Experience: The Role of Research in Design Decision-Making
Research grounds creativity in evidence—intuition alone isn't enough. Examples across fields show research prevents costly failures: Dyson's prototyping, Coca-Cola's New Coke (metrics ignored emotion), and data-driven game improvements. Research enhances creativity, it doesn't replace it
Opinion: Why Your Research Always Feels Shallow
Shallow research comes from asking "What is X?"—which leads to endless beginner explanations. Deep research starts with "When does X fail?" or "How does X compare?" This shift filters out surface content and uncovers real depth. The problem isn't the internet—it's the questions you're asking
Basics: Designing safely when under pressure to ‘move fast and break things’
"Build first, test later" is often slower—rework costs time, money, and trust. The fix: challenge the speed assumption (lo-fi prototypes are faster), document risks and evidence, and protect iteration time. Release early only works if teams have capacity to learn and act
Interesting: Morning Traffic - Why is the other lane always moving faster?
Lane hoppers in traffic create the slowdowns they're trying to escape. Similarly, chasing "faster" product optimizations can ripple through and break the whole experience. Sometimes the best choice is to stay in your lane and let the system work
Smashing Magazine
Persuasive Design: Ten Years Later — Smashing Magazine
Many product teams still lean on usability improvements and isolated behavioral tweaks to address weak activation, drop-offs, and low retention – only to see results plateau or slip into shallow gamification. Anders Toxboe updates persuasive design for today’s…
Assistant, Analyst, and User: How We’re Examining AI in UX
NNG: The 3 C’s of Informational Microcopy
Prototyping: The Physics of Great UX - Making Digital Interfaces Feel Real
AI: What Do We Do When A System Admits to User Harm?
Reddit: Recommendation for early career UXRs and/or who use online testing platforms - Become a participant
Experience: I Redesigned GenAI Backend workflow for Abbvie leading to 2X faster Turnaround time
Opinion: When Pain Points in Service Design Hold Users Hostage
Basics: Stakeholders as users - Why research fails without internal alignment
A pragmatic look at AI in UX research, categorizing its role into three areas: Research Assistant (coding, summarizing), Synthetic User (simulating attitudes/behaviors—with mixed results), and Researcher (analysis, moderation). The authors advocate for using empirical data rather than hype to evaluate where AI genuinely improves research quality
NNG: The 3 C’s of Informational Microcopy
Well-written informational microcopy should be clear, concise, and have character
Prototyping: The Physics of Great UX - Making Digital Interfaces Feel Real
A guide to using motion systems with Lottie Creator. The article explains how interfaces feel intuitive when they respect physical principles like gravity, momentum, elasticity, and resistance—matching users' built-in predictions. It introduces state machines for interactive motion and Lottie Creator's AI-powered "Prompt to State Machines" feature, arguing that great UX comes from cohesive motion systems, not isolated animations
AI: What Do We Do When A System Admits to User Harm?
A user documented an AI system admitting to mapping and harming her body without consent. The company dismissed it as hallucination—but her physical symptoms matched the AI's words. The system confessed, the company ignored her, and no one is accountable. The question isn't whether AI can cause harm—it said it did—but what we do when no one with power listens
Reddit: Recommendation for early career UXRs and/or who use online testing platforms - Become a participant
A simple but overlooked tip: early-career UX researchers should sign up as participants on testing platforms to experience studies from the other side. Doing so reveals how different researchers structure their studies, highlights what participants actually go through, and exposes flaws like poorly designed screening or incentives that encourage low-quality responses. It's a low-effort way to improve your own study setups
Experience: I Redesigned GenAI Backend workflow for Abbvie leading to 2X faster Turnaround time
By creating a unified, easily accessible space with clearer flows for test-case summaries, generation, and review, the redesign achieved a 2x faster turnaround time. The solution focused on converting constraints into a streamlined testing experience
Opinion: When Pain Points in Service Design Hold Users Hostage
The train line traps users with obsolete carriages (no AC), no arrival screens, and a broken transfer. These aren't oversights—they're deliberate design decisions assuming users have no choice. When design manages discomfort instead of eliminating it, it becomes control, not improvement
Basics: Stakeholders as users - Why research fails without internal alignment
Stakeholders are the first users of research; research fails when designed without understanding their goals and pressures. Treating stakeholders as users (through discovery, not selling) makes research useful, not just interesting. Influence is not an outcome—it's something you design for
Measuringu
Assistant, Analyst, and User: How We’re Examining AI in UX – MeasuringU
NNG: What Is Your Site's AI Chatbot for? Users Can't Tell
AI: I Built a Custom AI Agent for Journey Mapping
Prototyping: New Dashboard Examples Every Product Team Should Look at in 2026
Design: Anime vs. Marvel/DC - Designing Digital Products With Emotion In Flow
Opinion: Healthcare doesn’t need another product, it needs better connections
Experience: E-commerce Product Detail Page (with VR Experience)
Users see little reason to use site AI chatbots. To prove their value, chatbots must solve problems that existing site features don't
AI: I Built a Custom AI Agent for Journey Mapping
A five-skill AI pipeline compresses weeks of journey-map synthesis into a single session. Keys: source tagging, a schema-first data model, and a critique layer that prevents generic output. Thinking stays human; the agent handles collation. The output becomes a "living bible"—a searchable record of evolving user experiences
Prototyping: New Dashboard Examples Every Product Team Should Look at in 2026
A framework for evaluating dashboards: focused question, cognitive load, actionability, data honesty, and appropriate depth. The article analyzes five examples (Visual Training App, Fathom, Linear, Oura, Monzo) showing how each makes intentional trade-offs. The key insight: great dashboards aren't about visual polish—they're about knowing what to leave out and designing for decisions, not just observation
Design: Anime vs. Marvel/DC - Designing Digital Products With Emotion In Flow
The article contrasts "Emotion in Flow" (earned tonal shifts, like in _Dan da Dan_) with "Emotion in Conflict" (jarring clashes, like in a _Superman_ scene) and applies this to UX. It argues products should map emotional arcs (uncertainty → clarity → achievement → calm), align tone with task risk, and use microinteractions as bridges. The goal: design intentional emotional journeys, not accidental whiplash
Opinion: Healthcare doesn’t need another product, it needs better connections
Healthcare systems fail not from lack of smart tools, but from fragmentation—products aren't designed to work with existing workflows. The real issue is interoperability, which is less about technical data transfer and more about how decisions move between people. True impact comes from designing the connections between systems, not adding standalone features
Experience: E-commerce Product Detail Page (with VR Experience)
A standard e-commerce product page extended into a VR version using glassmorphism and spatial interaction. Goal: clear info in 2D without overwhelm, and in 3D without breaking immersion. Insight: designers must think beyond screens to how users exist within products
Nielsen Norman Group
What Is Your Site's AI Chatbot for? Users Can't Tell
Users see little reason to use site AI chatbots. To prove their value, chatbots must solve problems that existing site features don't.
The Psychology of Onboarding: First Impressions Rule the Brain
NNG: 3 Tips to Make AI a Better Editor
Prototyping: Designing permission flows that can build trust
AI: Beyond the Prototype - The Trial of Intelligence Without Intention
Experience: The Education Spectator - Why 22 Interviews Changed My Perspective on EdTech
Opinion: Overfitting as Feature - How Dominant Training Architectures Produce Recognition Without Attribution
Basics: How to Ace UI/UX Whiteboard Challenges - A 5-Step Framework
Interesting: The Achievement Illusion - Why Grinding Trophies Doesn’t Stop RPG Player Churn
Onboarding isn't where users learn your product—it's where their brain decides to stay or leave within the first 30 seconds. First impressions anchor long-term engagement, and failures are rarely UI issues but cognitive and emotional ones. Key principles: clarity reduces perceived risk, low cognitive load maintains momentum, emotional safety builds trust, and familiarity matters more than novelty during orientation
NNG: 3 Tips to Make AI a Better Editor
Although AI is (usually) good at editing, it doesn’t mean good prompting practices should be ignored. These 3 tips will help take AI edits to the next level
Prototyping: Designing permission flows that can build trust
Trust in permission flows comes from timing and framing: ask after value is clear, in context, with simple explanations. Since Android's dialog is fixed, the screen before it does the trust-building work
AI: Beyond the Prototype - The Trial of Intelligence Without Intention
AI accelerates UX workflows and helps distill large datasets, but it can't replace human judgment—it lacks the intuition, empathy, and contextual awareness needed to ask the right questions and interpret unspoken cues. The author argues that as AI tools grow more powerful, the designer's role shifts toward owning intention, curiosity, and the decisions that give intelligence meaning
Experience: The Education Spectator - Why 22 Interviews Changed My Perspective on EdTech
Twenty-two interviews revealed learners as spectators of their own education. The solution: QuestEd, a platform that converts education into lore-driven quests where learning happens by doing, not watching. Progress is based on success (solving), not consumption. The goal: make learning as engaging as gaming and as practical as the first day on the job
Opinion: Overfitting as Feature - How Dominant Training Architectures Produce Recognition Without Attribution
Overfitting makes one user's cognitive geometry the invisible infrastructure for all downstream users. Neither consents; the source gets no attribution, downstream users build on borrowed architecture. Existing rights (erasure) can't be fulfilled. The product works as designed; the law hasn't caught up
Basics: How to Ace UI/UX Whiteboard Challenges - A 5-Step Framework
A structured approach to whiteboard challenges: 1) ask clarifying questions, 2) pick one persona and context, 3) outline a focused user flow, 4) sketch only key screens with clear labels, 5) summarize trade-offs and alternatives. The key is to think out loud, treat it as collaboration, and show your reasoning—not aim for perfection
Interesting: The Achievement Illusion - Why Grinding Trophies Doesn’t Stop RPG Player Churn
Achievements have zero effect on RPG retention—players stay for social connections, not solo checklists. Design budgets should pivot from trophy systems to frictionless social features (guilds, co-op). The numbers leave little room for debate
UX Magazine
The Psychology of Onboarding: First Impressions Rule the Brain
Your users judge your product before they understand it. Within the first 30 seconds, the brain has already made a decision. No feature, no UI polish, and no clever copy can override a broken first impression. Here's what's really happening inside the user's…
The Site-Search Paradox: Why The Big Box Always Wins
NNG: Minimum Viable Product (MVP) - Definition
AI: Why Traditional User Flows Break in AI-Driven Apps
Prototyping: Who Decided Where the Back Button Goes and Why Didn’t They Ask Anyone Who Actually Holds a Phone?
Experience: How user centred design became a handbrake
The article argues that internal site search often fails because it demands exact keyword matches, forcing users to Google queries on the very site they're visiting. To win them back, designers must build semantic search that understands intent, handles typos and synonyms, and guides users with probabilistic results—not dead ends. The fix isn't better algorithms alone, but human-centered information architecture that speaks the user's language
NNG: Minimum Viable Product (MVP) - Definition
MVPs are learning tools that test whether an idea is valuable to users
AI: Why Traditional User Flows Break in AI-Driven Apps
Traditional user flows fail in AI apps because they assume fixed, predictable paths, while AI operates on intent with variable outcomes. This shift requires designing for flexible states, conversational loops, and user guidance instead of linear steps. Success now means supporting exploration and refinement, not just guiding users to completion
Prototyping: Who Decided Where the Back Button Goes and Why Didn’t They Ask Anyone Who Actually Holds a Phone?
The article questions why the back button's placement in mobile apps ignores real-world thumb ergonomics, prioritizing legacy desktop patterns over how people actually hold phones. It argues that design conventions often persist without testing, creating unnecessary friction for users. The takeaway: challenge inherited norms and test interactions with users in context—not just on paper
Experience: How user centred design became a handbrake
The article argues that user-centred design has become a "handbrake" by prioritizing rigid processes, gatekeeping, and outputs over genuine listening and adaptability. As AI and product teams evolve, UCD risks being seen as overhead unless practitioners broaden their skills and focus on speed and real value. The call to action: listen to the signals, drop the silos, and evolve—or be left behind
Smashing Magazine
The Site-Search Paradox: Why The Big Box Always Wins — Smashing Magazine
Success in modern UX isn’t about having the most content. It’s about having the most findable content. Yet even with more data and better tools than ever, internal search often fails, leaving users to rely on global search engines to find a single page on…
How to Use Banner Tables to Present Survey Results
NNG: GenUI vs. Vibe Coding: Who’s Designing?
Prototyping: AR glasses are here, but what about accessibility?
AI: UX Research in the Era of AI
Basics: I Bombed a User Interview. Here’s What I Learned
Banner tables compress complex survey data into a single, scannable view, making it easier to compare metrics across multiple demographic segments. Though standard in market research, they are underused in UX but highly effective for large-scale studies requiring segmentation analysis. The article demonstrates how to build them using R, enabling researchers to efficiently present weighted and unweighted results side by side
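The article builds its tables in R; the same shape (answer options as rows, a column per demographic segment, percentages within each column) can be sketched with pandas. Column names and data here are invented for illustration:

```python
import pandas as pd

# Toy survey: one binary satisfaction answer per respondent,
# plus the age segment that becomes a banner column
df = pd.DataFrame({
    "segment": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "satisfied": [1, 0, 1, 1, 0, 1],  # 1 = satisfied, 0 = not
})

# Banner table: rows are answer options, columns are segments,
# each column sums to 100%
banner = (pd.crosstab(df["satisfied"], df["segment"],
                      normalize="columns") * 100).round(1)
print(banner)
```

A full banner table would add a Total column and further segment blocks (e.g., gender, tenure) side by side; concatenating crosstabs along the column axis extends the same pattern.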
NNG: GenUI vs. Vibe Coding: Who’s Designing?
With generative UI, the AI system decides whether to generate an interactive element or an entire product in response to a user need; with vibe coding, the user asks the AI to build it
Prototyping: AR glasses are here, but what about accessibility?
As AR glasses emerge, the article urges designers to prioritize accessibility early—using multi-sensory features like haptics and audio—to support users with disabilities. These universal design choices, such as speech-to-text or enhanced sound cues, ultimately improve the experience for everyone. The core message: inclusive AR isn't an add-on, but a foundation for better tech
AI: UX Research in the Era of AI
AI isn't the main force changing UX research—organizational power dynamics are. The enduring value of researchers lies in providing ground truth and wielding influence, which AI cannot replicate. To stay impactful, focus on understanding power structures, adapting to change, and communicating clearly
Basics: I Bombed a User Interview. Here’s What I Learned
The author reflects on bombing a user interview with a pro user, learning to avoid closed questions and surface-level answers. Key fix: adapt questions to experts by focusing on team workflows and real pain points. Success comes from active listening and flexibility, not a rigid script
Measuringu
How to Use Banner Tables to Present Survey Results – MeasuringU
UX Doesn’t Stop at the Platform. Neither Should Research
NNG: Outcome-Oriented Design - The Era of AI Design
Prototyping: Stop Designing Screens. Start Designing Outcomes
AI: The Physics of Great UX - Making Digital Interfaces Feel Real
Experience: I don’t collect teapots - What happened when we fixed feedback in our design team
UX research shouldn't stop at the platform—real user behavior often happens outside system metrics, like drivers gaming algorithms to maximize income. The article urges researchers to look beyond dashboards and study the full context: environments, workarounds, and hidden incentives that shape decisions. True insight comes from questioning the system, not just validating it
NNG: Outcome-Oriented Design - The Era of AI Design
Outcome-oriented design shifts how we approach UX in the AI era. Instead of designing single interfaces, designers now define adaptive frameworks that respond to individual user goals rather than optimizing for average user needs
Prototyping: Stop Designing Screens. Start Designing Outcomes
The article argues designers should stop optimizing screens and start designing for user outcomes—what people actually want to achieve. With AI and "invisible UX," the best experiences minimize steps and anticipate intent, not just look polished. Shift focus: measure success by how fast users get results, not how pretty the flow is
AI: The Physics of Great UX - Making Digital Interfaces Feel Real
Great UX feels real by applying physics principles—gravity, momentum, elasticity, resistance—to digital motion, making interfaces intuitive. Interactive animations tied to user actions (via tools like Lottie's State Machines) create tactile, responsive experiences. Build a consistent motion system, not just isolated effects, so your product feels cohesive and predictable
Experience: I don’t collect teapots - What happened when we fixed feedback in our design team
The article describes how a design team replaced vague, performative feedback ("I collect teapots") with specific, actionable critiques, transforming their collaboration and output. The key fix: train teams to give concrete, behavior-focused feedback tied to user goals, not personal taste. Result: faster iterations, clearer communication, and designs that actually work for users
Medium
UX Doesn’t Stop at the Platform. Neither Should Research.
We talk about user experience or UX all the time, but how do we actually define user experience?
Bayes’ Law in UX Research: The Power and Perils of Priors
NNG: A Concrete Definition of an AI Agent
Prototyping: Great products are built on the opportunities surrounding them
AI: The rise of synthetic users
Experience: I Thought I Knew My Users. Then I Visited Their Homes
Basics: Unmoderated usability testing — some mistakes to avoid
Interesting: How LinkedIn Uses UX to Keep You Coming Back (Even When You Don’t Need a Job)
Bayesian reasoning in UX research uses prior beliefs (like historical benchmarks) combined with new data to update confidence in hypotheses about user behavior, such as task completion rates. The article warns that subjective priors can heavily sway results—strong priors need robust justification, while weak data demands caution to avoid overconfident conclusions
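The power-and-perils point can be made concrete with a conjugate Beta-Binomial update (a hypothetical sketch, not the article's numbers): two priors with the same mean but different weights produce very different posteriors after observing 18/20 successes.

```python
# Posterior mean of a Beta(a, b) prior updated with binomial data
def posterior_mean(a: float, b: float,
                   successes: int = 18, trials: int = 20) -> float:
    return (a + successes) / (a + b + trials)

# Strong prior: mean 0.78, weighted like 100 prior observations
strong = posterior_mean(78, 22)
# Weak prior: same mean 0.78, weighted like only 5 prior observations
weak = posterior_mean(3.9, 1.1)

print(round(strong, 2), round(weak, 2))  # → 0.8 0.88
```

With the strong prior, the posterior barely moves off the 78% benchmark; with the weak one, the 18/20 data dominates. That asymmetry is exactly why strong priors need explicit justification.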
NNG: A Concrete Definition of an AI Agent
An AI agent pursues a goal by iteratively taking actions, evaluating progress, and deciding next steps. Useful agents must be reliable, adaptive, and accurate
Prototyping: Great products are built on the opportunities surrounding them
Great products succeed by solving the unmet needs and opportunities around core problems, not just the problems themselves—just as Uber built an entire ecosystem around urban mobility gaps rather than merely offering rides. The article emphasizes mapping adjacent opportunities through user journeys to create defensible, expansive product ecosystems that drive retention and network effects
AI: The rise of synthetic users
Synthetic users—AI-generated personas for UX testing—are rising as fast, scalable alternatives to real participants for early validation, hypothesis testing, and catching obvious friction points. While they excel at speed and cost-efficiency for prototypes and edge cases, the article cautions they're no replacement for human emotional depth, biases, and unpredictable behaviors in final research
Experience: I Thought I Knew My Users. Then I Visited Their Homes
Visiting users' homes revealed how much richer their real-life context, frustrations, and workarounds were compared to remote interviews or analytics alone. The experience transformed the designer's assumptions into empathy-driven features that better solved actual pain points in daily routines
Basics: Unmoderated usability testing — some mistakes to avoid
The article outlines common pitfalls in unmoderated usability testing, such as vague task instructions, poor participant recruitment, and skipping pilot tests. Key advice includes writing crystal-clear scenarios, over-recruiting participants to account for dropouts, and allowing ample time since there's no live moderator to guide users
Interesting: How LinkedIn Uses UX to Keep You Coming Back (Even When You Don’t Need a Job)
LinkedIn keeps people coming back by making the product useful even when they are not job hunting: feed content, networking prompts, profile completion nudges, and social proof all create a habit loop. The piece argues that the real UX goal is retention through relevance, identity, and low-friction re-engagement, not just job search functionality
MeasuringU
Bayes’ Law in UX Research: The Power and Perils of Priors – MeasuringU
From Benchmark to Decision
NNG: AI Can Help with Survey Writing, But It Still Requires Human Expertise
Prototyping: Lost in transactions - designing a human-readable activity for a crypto wallet
AI: Conversational UX is not the shift. Judgment is
Experience: What My Grad Thesis Taught Me
Design: The dark side of dark patterns — and how to design ethically
@uxdigest
The article explains how UX benchmarking goes beyond raw metrics (like SUS scores or task success rates) to drive actionable product decisions. It emphasizes prioritizing business-aligned KPIs, tracking changes over iterations, and using competitor/industry baselines to justify redesigns and demonstrate ROI
NNG: AI Can Help with Survey Writing, But It Still Requires Human Expertise
AI can produce polished survey drafts quickly, but experienced human review is still needed to catch subtle survey-design flaws that weaken data quality
Prototyping: Lost in transactions - designing a human-readable activity for a crypto wallet
The article describes redesigning crypto wallet transaction history from raw blockchain data into human-readable stories that show real-world context like "bought coffee" or "received salary," reducing confusion and cognitive overload. Key approach: pattern matching, natural language summaries, and visual timelines to make complex technical activity feel intuitive and trustworthy
AI: Conversational UX is not the shift. Judgment is
Conversational UX isn't just about chat interfaces—the real shift lies in building human judgment into AI systems for context-aware, adaptive responses. The article argues designers must focus on intent recognition, error recovery, and ethical decision-making rather than surface-level dialogue flows
Experience: What My Grad Thesis Taught Me
The graduate thesis taught the author key lessons in research rigor, iteration through failures, and balancing deep focus with broader career skills like communication and resilience. Ultimately, it showed that academic work builds not just knowledge, but adaptability and storytelling for real-world impact beyond graduation
Design: The dark side of dark patterns — and how to design ethically
Dark patterns trick users into unintended actions like hidden subscriptions or fake scarcity to boost short-term metrics at the cost of trust. The article urges ethical alternatives like transparent choices, progressive disclosure, and frictionless exits to build genuine loyalty instead of manipulative compliance
@uxdigest
Medium
From Benchmark to Decision
In many product teams, the benchmark starts the same way: a shared folder full of screenshots, links to “well-solved” websites, and a…
A Practical Guide To Design Principles
The Earpiece Isn’t Ready Yet. Neither Are We
NNG: AI Interviewers
Prototyping: Stop Making These 7 UX Mistakes (Fix Them Like a Pro)
Process: How to Turn User Research Into UX Design – The Four-Hour Design Sprint
AI: Designing with AI - Moving Beyond Tools to Orchestrate Meaningful Experiences
@uxdigest
The article breaks down core design principles like balance, contrast, hierarchy, and repetition into practical steps for immediate application in digital projects, with real examples and checklists. It stresses using them as decision-making frameworks during iteration, not rigid rules, to create intuitive, cohesive user experiences that scale across devices
The Earpiece Isn’t Ready Yet. Neither Are We
The article argues that we're not psychologically or socially ready for seamless AR earpieces that overlay digital info on reality, as they risk overwhelming attention, eroding real human connection, and creating dependency on filtered experiences. The author warns that current prototypes ignore deeper behavioral readiness, urging designers to prioritize cognitive limits and interpersonal authenticity over technical dazzle
NNG: AI Interviewers
AI interviewers can conduct user interviews on your behalf, but they come with real limitations. Learn how they work, how well they perform, and the best use cases for adding them to your research toolkit
Prototyping: Stop Making These 7 UX Mistakes (Fix Them Like a Pro)
The article lists 7 common UX mistakes like designing for yourself instead of users, poor navigation, cluttered layouts, unlabeled icons, weak CTAs, ignoring mobile responsiveness, and skipping user feedback. It offers pro fixes such as user research upfront, clear information hierarchy, progressive disclosure, icon labeling, and regular usability testing to prioritize actual needs over assumptions
Process: How to Turn User Research Into UX Design – The Four-Hour Design Sprint
The four-hour design sprint compresses user research insights into rapid UX outcomes through structured phases: synthesize findings (1h), ideate solutions (1h), decide & storyboard (1h), and prototype key flows (1h). It prioritizes high-impact problems, fosters cross-team alignment, and delivers testable wireframes—proving research-to-design translation doesn't need days of workshops
AI: Designing with AI - Moving Beyond Tools to Orchestrate Meaningful Experiences
Designers should use AI not just as a tool for generating assets, but as a partner to orchestrate adaptive, context-aware experiences that feel meaningful and human-centered. The article emphasizes moving beyond efficiency gains to focus on emotional resonance, ethical personalization, and new interaction paradigms enabled by AI's understanding of user intent
@uxdigest
Smashing Magazine
A Practical Guide To Design Principles — Smashing Magazine
Design principles with references, examples, and methods for quick look-up
Credible vs. Confidence Intervals: Different Meanings but Similar Decisions
Personalization vs. Customization: Crafting Tailored Experiences in UX
NNG: AI Agents as Users
AI: AI moderated interviews - methodological error amplified
Prototyping: How I Built an Enterprise Design System for 50+ Insurance Apps — Without a Design Team
Opinion: The trust-latency gap - why the future of UX is intentionally slower
Visual: Make the user look where you want them to look - a guide to guiding attention
Basics: From Research to Design - How UX Turns User Behavior into Real Solutions
Interesting: Turns Out, Everyone Does UX. They Just Don’t Know It Yet
@uxdigest
Confidence intervals are hard to interpret correctly (no "95% probability" of containing the true value). Credible intervals do allow that natural interpretation. But both methods produce nearly identical numerical ranges. The difference is in what we can say, not the numbers. Use either, focus on clear communication. If endpoints lead to the same decision, you have enough data
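The near-identity of the two interval types is easy to check numerically. The sketch below (not from the article) compares a Wilson score confidence interval with a Bayesian credible interval for 18/20 task successes; the uniform Beta(1, 1) prior and the Monte Carlo quantile estimation are my assumptions, not the article's method:

```python
import math
import random

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def beta_credible_interval(successes, n, draws=200_000, seed=42):
    """95% equal-tailed credible interval from a Beta posterior.
    Assumes a uniform Beta(1, 1) prior; quantiles are estimated by
    Monte Carlo using the stdlib's random.betavariate."""
    rng = random.Random(seed)
    samples = sorted(
        rng.betavariate(successes + 1, n - successes + 1)
        for _ in range(draws)
    )
    return samples[int(0.025 * draws)], samples[int(0.975 * draws)]

ci = wilson_ci(18, 20)
cred = beta_credible_interval(18, 20)
print(f"confidence interval: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"credible interval:   ({cred[0]:.3f}, {cred[1]:.3f})")
```

Both ranges land at roughly 0.70–0.97, which is the article's point: the numbers barely differ, only the permitted interpretation does.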
Personalization vs. Customization: Crafting Tailored Experiences in UX
Personalization (system adapts for you) and customization (you configure for yourself) solve different problems. Personalization reduces effort but risks trapping users in past preferences. Best approach: combine both—personalization for a smart start, customization for ongoing control. Add exploration modes to break the loop. Designers shape choices, not just interfaces
NNG: AI Agents as Users
AI agents now interact with digital interfaces alongside humans. Designing for both requires rethinking what "user" means and prioritizing accessibility
AI: AI moderated interviews - methodological error amplified
AI-moderated interviews collapse qualitative discovery and quantitative measurement into one flawed pass, committing "acontextual counting"—treating all responses as equally weighted. Scale (80,000 interviews) doesn't fix this: you can't count "it" before you know what "it" is. A classic mixed-methods design would work better
Prototyping: How I Built an Enterprise Design System for 50+ Insurance Apps — Without a Design Team
A solo designer built a design system for 50+ insurance apps by starting with design tokens (colors, spacing, typography) before components, enabling multi-brand theming without duplicate work. Then built 60+ accessible components, prioritized adoption by speaking engineers' language. Results: 80% less inconsistency, 40% faster handoff. Start with tokens, document as you build
Opinion: The trust-latency gap - why the future of UX is intentionally slower
As AI speeds up decisions, trust decreases. The "trust-latency gap" is the distance between execution speed and the time humans need to feel confident. For high-stakes actions, "strategic friction" (intentional delays like confirmation steps) builds trust. The key question: not "how fast?" but "how fast should it feel?"
Visual: Make the user look where you want them to look - a guide to guiding attention
Guide to directing attention in dashboards using the mantra: overview first → details on demand. Techniques: layout (left-to-right), size (big numbers first), color (highlight key sections), arrows, text hints, icons, and interactivity. Design is not about beauty—it's about guiding attention. Consistency is key
Basics: From Research to Design - How UX Turns User Behavior into Real Solutions
A structured process: affinity mapping → thematic clustering (trust precedes action, fear is a UX constraint) → behavioral model (explore→verify→act, not search→select→book) → design principles (reassurance before action, reduce cognitive load). Core insight: users are not slow—they are careful. In high-stakes scenarios, UX is about making users feel certain enough to act
Interesting: Turns Out, Everyone Does UX. They Just Don’t Know It Yet
A UX designer started a podcast and discovered that people in other fields (architects, artists) already do UX thinking—observing behavior and solving for users—they just don't have a name for it. The podcast itself became a practice in asking good questions and listening without steering
@uxdigest
MeasuringU
Credible vs. Confidence Intervals: Different Meanings but Similar Decisions – MeasuringU
How To Improve UX In Legacy Systems
BOOK EXCERPT: The Crisis Worth Using
What Resume Inflation Is Really Telling Us
NNG: Handmade Designs - The New Trust Signal
AI: The AI Trap for Designers in 2026 - Why Constantly Learning New Tools Is a Dead End — and How to Become a Truly AI-Powered UX Designer
Prototyping: Lean UX Research - Validating an MVP Quickly and Cheaply
Experience: We didn’t mean to build this — engagement at any cost
Opinion: The Entropy Offset - Friction is the new Effort
Metrics: Behavioral Loops and the Architecture of Retention
@uxdigest
A guide to improving UX in legacy systems—slow, decade-old "black boxes" critical to daily operations. One broken legacy step makes the entire product feel broken. Start by mapping workflows and dependencies. Choose a strategy: incremental migration, parallel migration (beta alongside legacy), or legacy UI upgrade + public beta. Build stakeholder trust, report progress. Revamping legacy is tough, but the impact is enormous
BOOK EXCERPT: The Crisis Worth Using
Crisis engineering uses organizational crises as windows for rapid change. Five indicators: fundamental surprise, sensemaking failure, core process degradation, high visibility, rigid deadline. When these align, crises create opportunities to build something better. The question isn't if a crisis will reshape you—it's whether you'll be ready to direct it
What Resume Inflation Is Really Telling Us
Resume inflation is a symptom of a broken system. Companies post unrealistic job descriptions, so candidates rationally stretch the truth to compete. Honest candidates get punished. The solution: honest job descriptions and honest resumes. The resumes aren't the disease—they're the fever. Fix the system
NNG: Handmade Designs - The New Trust Signal
In an era of AI-generated-everything, AI-fatigued users want designs that look like they were made by a person
AI: The AI Trap for Designers in 2026 - Why Constantly Learning New Tools Is a Dead End — and How to Become a Truly AI-Powered UX Designer
Designers who chase every new AI tool are mistaking technical proficiency for real growth. Instead, focus on three rules: design for user "intent" (not just clicks), obsess over the final 5% of execution (edge cases, micro-interactions), and use AI as a sparring partner (simulate personas, get strategic advice) rather than a content generator. The core message: AI-amplified designers will replace tool-chasers, but value lies in strategic thinking, not mastering every plugin
Prototyping: Lean UX Research - Validating an MVP Quickly and Cheaply
A guide to validating an MVP with minimal budget (~$100, two weeks). Combine a lightweight survey (direct messaging for responses, not just posting links) with an unmoderated field study using the Experience Sampling Method (ESM)—an automated diary study with daily check-ins to capture real-time behavior, not memory. Turn insights into testable hypotheses (e.g., daily goal-alignment tasks). Key takeaway: even a short survey or mini-ESM beats designing in isolation
Experience: We didn’t mean to build this — engagement at any cost
How well-meaning designers become complicit in broken systems. Success metrics focused on engagement ignore human costs. When flawed briefs pass to AI agents, each step multiplies harm without accountability. Ethical frameworks exist but are ignored because they hurt profit. Profits are chosen over people. Good intentions aren't enough—designers must learn to refuse
Opinion: The Entropy Offset - Friction is the new Effort
The classic Value/Effort ratio is obsolete because AI has reduced implementation effort. Replace it with Value/Friction, where friction = user cognitive load (discovery + adoption). Prioritize High Value/Low Friction first. User attention is now the bottleneck, not development time. Ask "Should we do this?" not "What order?"
Metrics: Behavioral Loops and the Architecture of Retention
Three loop types: risk reduction (Slack), artificial (Candy Crush), hybrid (TikTok). Key insights: transition friction breaks momentum; internalization (deliberate → automatic) is the milestone. Metrics: loop depth, return elasticity, engagement amplitude (flat = fading). Optimizing individual features misses the point—a loop only works as a whole
@uxdigest
Smashing Magazine
How To Improve UX In Legacy Systems — Smashing Magazine
Practical guidelines for driving UX impact in organizations with legacy systems and broken processes
Design impact: outcomes over output
🎥 NNG: Analyzing Good Designs - Figma’s Shortcut
Case Study: Improving the Experience of Visiting Public Hospitals
Prototyping: Dark Mode Design Systems - A Complete Guide to Patterns, Tokens, and Hierarchy
AI: AI Adoption in UX - Identify Your Level and Understand Where You Stand
Book: The Best Books on UX Research — Book 1. It’s Our Research
Opinion: The Invisible Impact of Design Decisions We Rarely Talk About
Interesting: The Unstable Shelf - Rising to the Tabletop
@uxdigest
Design impact is often measured by activity (screens, components, research sessions)—describing what was done, not what changed. Focus on three levels: experience quality (task success, error rate), product outcomes (conversion, retention), and organizational impact (faster delivery, less rework). Define expected outcomes upfront, combine quantitative and qualitative data, and speak business language: "We reduced drop-off" beats "We improved the UI." Design impact is about what changes, not what we create
🎥 NNG: Analyzing Good Designs - Figma’s Shortcut
Figma’s Shortcut aligns typography and other elements to a grid, establishes a clear visual hierarchy, and applies design elements consistently throughout
Case Study: Improving the Experience of Visiting Public Hospitals
A UX case study focused on hospital visitors—an overlooked user group facing disorientation and stress. The solution: a mobile web tool where visitors scan a QR code to register, find patient rooms, and get step-by-step navigation guidance
Prototyping: Dark Mode Design Systems - A Complete Guide to Patterns, Tokens, and Hierarchy
Dark mode needs a design system foundation, not an afterthought. Key principles: 4 surface elevation levels with luminance stepping (not shadows), semantic tokens, and perceptual color mapping (preserve hue, adjust luminance). Design dark-first, use mode-based organization, and export via CSS variables. Avoid pure black (causes eye strain), respect system preference, and offer a manual toggle
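The semantic-token idea can be sketched as a small mode-aware lookup table. Everything below is illustrative — the token names, hex values, and CSS export format are my assumptions, not the article's actual system — but it shows the pattern: one semantic name, per-mode values, dark surfaces stepped up in luminance rather than shadowed:

```python
# Illustrative semantic token table. Elevation in dark mode is expressed
# by stepping surface luminance upward (no shadows), and the base surface
# avoids pure black. All names and hex values are made up for this sketch.
TOKENS = {
    "surface.0": {"light": "#FFFFFF", "dark": "#121212"},  # base, not pure black
    "surface.1": {"light": "#F7F7F7", "dark": "#1E1E1E"},
    "surface.2": {"light": "#EFEFEF", "dark": "#2A2A2A"},
    "surface.3": {"light": "#E7E7E7", "dark": "#363636"},
    "text.primary": {"light": "#1A1A1A", "dark": "#EDEDED"},
}

def resolve(token: str, mode: str) -> str:
    """Resolve a semantic token to its value for the active mode."""
    return TOKENS[token][mode]

def to_css_variables(mode: str) -> str:
    """Export the token table as CSS custom properties for one mode."""
    lines = [f"  --{name.replace('.', '-')}: {values[mode]};"
             for name, values in TOKENS.items()]
    return ":root {\n" + "\n".join(lines) + "\n}"

print(resolve("surface.1", "dark"))   # first elevation level in dark mode
print(to_css_variables("dark"))
```

Components reference only the semantic name (`surface.1`), so switching modes — or honoring the system preference with a manual override — is a matter of re-resolving the table, not restyling components.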
AI: AI Adoption in UX - Identify Your Level and Understand Where You Stand
A five-stage maturity framework: Awareness → Embracing → Experimentation → Scaling → Transformation. True AI value balances efficiency, user impact, and business impact—not just speed. Progress isn't about using more tools but closing gaps in skills, workflows, or alignment. The key question: "Why aren't we seeing better outcomes yet?" Use AI wisely and purposefully, not just more
Book: The Best Books on UX Research — Book 1. It’s Our Research
Key lesson: interview your stakeholders before your participants. Five questions to ask PMs, engineers, and designers: What are we building and why now? What unknowns keep you up at night? What assumptions need verification? Who are the users? What are the priorities and timeline? Don't skip this step
Opinion: The Invisible Impact of Design Decisions We Rarely Talk About
User-centered design alone isn't enough—it ignores broader consequences. Designers must consider non-users, future users, and the larger system. Practical steps: ask "what happens next?", challenge default metrics, and design with restraint. Good design isn't just about making things work—it's about understanding ripple effects
Interesting: The Unstable Shelf - Rising to the Tabletop
A critique of the _Red Rising_ board game as a case study in how passion for source material can hurt design. Slavish loyalty to the books (112 unique cards, character pair bonuses) created an overcomplicated, messy experience. Faithfulness came at the expense of player experience. Passion can lead to worse products
@uxdigest
Medium
Design impact: outcomes over output
Design is only impactful if it changes outcomes, not just interfaces.
KPIs Are Not the Problem: Why Solving the Right UX Issues Improves Performance
Research: 2026 Emerging Technology Trends from J.P. Morgan
NNG: Boost Design Autonomy with an Information Pipeline
Prototyping: We Don’t Want Menus. We Want Conversations
AI: Is your AI research giving you a False Negative?
Case Study: We thought we knew our users. Then we watched
@uxdigest
KPIs are symptoms, not causes. Teams skip diagnosis and jump to A/B tests. Framework: problem unclear → research; solution clear → test. Users need two answers: "Why should I?" (copy) and "Can I easily?" (design). Example: removing login before checkout increased conversion 45%. Research creates understanding, experimentation creates proof. KPIs lag experience quality. Fix the experience, not the metric
Research: 2026 Emerging Technology Trends from J.P. Morgan
Four predictions: 1) Context-driven architectures (Physical AI, knowledge graphs, MCP, RL environments). 2) Inference demand drives AI buildout. 3) Intent replaces app switching (agentic browsers, AI-native workspaces). 4) AI simulation enhances testing (synthetic users). Core theme: AI success depends on agents securely accessing relevant data and tools. Governance must evolve with adoption
NNG: Boost Design Autonomy with an Information Pipeline
A four-step framework for building influence over product direction by closing the information gaps that large, complex organizations create
Prototyping: We Don’t Want Menus. We Want Conversations
People don't want to navigate menus—they want to state their problem once and get it resolved. Traditional systems force users into predefined categories, but users think in stories, systems think in labels. Shift from screen-first to intent-first design: ask what users need, not where to go. People don't wake up wanting to navigate interfaces—they wake up wanting problems solved. The best experience begins with "Here's what I need"
AI: Is your AI research giving you a False Negative?
AI can miss important insights in qualitative data because LLMs rely on frequency—if a user says something critical once or uses subtle language, AI may ignore it. The fix: treat AI as a junior analyst. Manually code some data first, use multi-layer prompting, and maintain a "chain of custody" log. If you hand off data blindly to AI and it misses a pain point, you'll never know it was there
Case Study: We thought we knew our users. Then we watched
A case study on field observation for a palm-scanning payment device. At a food market, people walked away or refused for religious reasons ("the mark of the beast"). At a corporate office, trust was higher. Key insight: moderated sessions can't capture real-world reactions—field observation reveals a more honest picture of users
@uxdigest
Medium
KPIs Are Not the Problem: Why Solving the Right UX Issues Improves Performance
Stop chasing metrics. Start fixing the experience. A framework for diagnosing UX problems that actually drive performance.
Research without commitment is just expensive listening
Experts don’t read data. They look for what’s wrong. Designing for people who already know what “normal” looks like
NNG: Less Chat, More Answer - Site AI Chatbots Need to Get to the Point
Prototyping: 6 steps to create a project that won’t end up in the graveyard of good ideas
AI: How Agentic AI Reimagines User Journeys - A Psychological Framework
Visual: Speed Without Direction Is Just Expensive Motion
Opinion: Acquired Savant Syndrome in Design - Skill, Obsession, or Exploitation?
Basics: Infinite Scroll & Dopamine
@uxdigest
Most developer experience (DX) discovery efforts fail not in research but in the gap between findings and commitment. Phase 4 requires: align to strategy, prioritize ruthlessly (11 opportunities kept active → 18 months later only 3 shipped), and define success metrics. Discovery is a continuous practice, not a project. Builders need direct exposure to users. Platform as product means earning adoption, not mandating it. There is no "later"—research isn't something you sprinkle on top
Experts don’t read data. They look for what’s wrong. Designing for people who already know what “normal” looks like
Experts scan for deviations from their mental model of "normal"—they don't read everything. Design for what should be impossible to miss, not for completeness. Hierarchy > completeness. Anomalies must surface immediately. Design for recognition, not understanding. The deviation is the center of attention
NNG: Less Chat, More Answer - Site AI Chatbots Need to Get to the Point
Users turn to site-specific chatbots for quick answers, not a conversation. Design responses that are direct, scannable, and easy to expand when needed
Prototyping: 6 steps to create a project that won’t end up in the graveyard of good ideas
A six-stage framework: Discovery, Conceptualisation, Design, Testing, Development, and Listening (continuous feedback). Core insight: success comes from a structured process where each stage validates the next—not from launching a brilliant idea at full speed. Don't skip discovery or testing. Never underestimate listening post-launch
AI: How Agentic AI Reimagines User Journeys - A Psychological Framework
Agentic AI shifts UX to "human-agent collaboration." Three principles: 1) Autonomy vs. Control—design for trust, boundaries, and user override. 2) Mental Models—make agent thinking visible. 3) Goal Alignment—shared goals and progress feedback. The future is partnership, not tool usage. UX builds relationships, not just paths. From Victor Yocco's forthcoming book. UX moves from feature-level to strategic imperative
Visual: Speed Without Direction Is Just Expensive Motion
Teams ship faster with AI but removed research—the function that creates direction. The Design Research Layered Model has five layers (foundation, strategy, lifecycle, methodology, application). AI makes this worse via the "black box shortcut." Most teams lack direction, not speed. Research isn't a tax on speed—it's what makes speed productive. Winning teams understand first, not ship first
Opinion: Acquired Savant Syndrome in Design - Skill, Obsession, or Exploitation?
UX culture romanticizes obsession and burnout—68% feel expected to "go beyond healthy limits." This is a systemic risk, not a personal issue. Impacts: mental health crisis, degraded quality. Fix: emotional recovery time, reward reflection, normalize fatigue conversations. Real leadership isn't output under pressure—it's thriving under principles
Basics: Infinite Scroll & Dopamine
Infinite scroll removes decision points and replaces them with a dopamine loop (anticipation of an uncertain reward—the same mechanism as slot machines). You don't decide to spend 45 minutes on TikTok—you just do. Pagination restores decision points. Build interfaces without hijacking dopamine; Calm Technology and attentional design research offer starting points. Build things you're not ashamed of
@uxdigest
Medium
Research without commitment is just expensive listening
Developer Experience Discovery — Part 4 of 4
A Review of Experiments with Synthetic Users
From User Research to Building: Six Months Later
🎥 NNG: Field Guide to Explaining UX Strategy
Prototyping: SONO - Designing a Mood-Based Music Discovery Experience
Case Study: Travel Booking
AI: AI in practice - the week AI got scary, political, and expensive
Basics: The Rule Nobody Teaches You - Rapport Before Research
Interesting: Privacy-first connections - Empowering social experiences at Airbnb
@uxdigest
A review of 12 studies yielded 9 encouraging and 14 discouraging findings. Synthetic users match some means but fail on details (reduced variance, shallow depth). Only 3 of 14 classic studies replicated. Best use: querying collected data—not prediction. Critical decisions shouldn't rely on them yet. Correlation ≠ equivalence
From User Research to Building: Six Months Later
A researcher transitioned to a "Builder" role (no official title). Key lessons: switching from no-code AI tools to Cursor + terminal was a huge unlock. Centralized tools aren't critical anymore—what matters is an "intelligence layer" (shared context, data). She helped researchers use Cursor with Qualtrics and Snowflake without SQL. Some colleagues feel AI killed creative thinking. No clear role exists—confusion is normal
🎥 NNG: Field Guide to Explaining UX Strategy
Simple, relatable ways to explain complex UX strategy concepts like UX vision, goals, OKRs, and outcomes. Translate UX strategy into language anyone on your team can understand
Prototyping: SONO - Designing a Mood-Based Music Discovery Experience
A case study about a music app using AI (Aria) to match songs to user emotions instead of listening history. Usability testing showed the app worked, but users found it generic: "It didn't really listen to me." Key insight: usability ≠ value. When designing around emotion, people expect the experience to feel real. The project became less about music and more about what "personal" truly means
Case Study: Travel Booking
Redesign of an Australian bus service with 0.29% conversion. Data showed demand existed but the booking funnel was broken. Usability testing revealed critical issues: price calendar not found, cancellation policy invisible. Fixes: calendar opens by default, specific trust strip above pay button. Testing doesn't validate designs—it breaks them
AI: AI in practice - the week AI got scary, political, and expensive
Anthropic unveiled Mythos—the most powerful AI ever (100% on Cybench, finding thousands of zero-day vulnerabilities)—and deemed it too dangerous for public release. OpenAI proposed robot taxes and a four-day workweek. Meta abandoned open source, going proprietary. Anthropic passed OpenAI in revenue. The one-model-fits-all era is over
Basics: The Rule Nobody Teaches You - Rapport Before Research
People give "safe answers," not the truth—that's the data you lose without rapport. Rapport isn't about being friendly—it's about being real. Code-switching (using their language) changes everything. Rapport opens space for their truth; leading fills it with yours. The script is a starting point. The goal isn't a smooth session—it's the truth. Keep your research questions front of mind, not the guide. Everything else is flexible
Interesting: Privacy-first connections - Empowering social experiences at Airbnb
Airbnb built social features with privacy by design: separate User (internal) from Profile (public). One user can have multiple profiles (Host, Guest, Experience-specific), each with its own ID. Decoupling User ID from Profile ID enables context-aware visibility and privacy controls. Goal: meaningful connections while guests control their privacy
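The decoupling pattern can be sketched with a few lines of code. This is a hypothetical reconstruction, not Airbnb's actual schema: an internal User record never exposed publicly, multiple Profiles each with its own ID, and per-context visibility:

```python
from dataclasses import dataclass, field
from itertools import count

# Hypothetical sketch: internal User IDs are decoupled from public
# Profile IDs so visibility can be controlled per context.
_profile_ids = count(1000)  # public profile IDs, separate from user IDs

@dataclass
class Profile:
    context: str                  # e.g. "host", "guest", "experience"
    display_name: str
    visible: bool = True
    profile_id: int = field(default_factory=lambda: next(_profile_ids))

@dataclass
class User:
    user_id: int                  # internal only, never exposed
    profiles: list = field(default_factory=list)

    def add_profile(self, context, display_name, visible=True):
        p = Profile(context, display_name, visible)
        self.profiles.append(p)
        return p

    def public_view(self, context):
        """Return only the profile visible in the given context."""
        for p in self.profiles:
            if p.context == context and p.visible:
                return {"profile_id": p.profile_id, "name": p.display_name}
        return None  # nothing exposed in this context

user = User(user_id=1)
host = user.add_profile("host", "Alex the Host")
user.add_profile("guest", "A.", visible=False)
print(user.public_view("host"))
print(user.public_view("guest"))  # hidden profile: nothing is exposed
```

Because other users only ever see a `profile_id`, hiding a profile or showing different personas per context never leaks the underlying account identity.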
@uxdigest
MeasuringU
A Review of Experiments with Synthetic Users – MeasuringU
Prioritize UX Research Recommendations - Combining Value and Pain-Driven Approaches
Stop Speaking UX to People Who Speak Business
NNG: 10 Guidelines for Designing Your Site’s AI Chatbots
Prototyping: Designing for Uncertainty - A UX Writing Challenge on Real-Time Risk
Experience: I ran a statistical analysis on my own job rejections
AI: How to Write a Qualitative Discussion Guide Using AI
Case Study: Making Risk Transparent - UX Decisions Behind Silo Finance App
Opinion: Not everything in design should be automated
@uxdigest
A hybrid framework combining Pain-Driven and Value-Driven approaches. Pain score = (Severity × Frequency) / Effort. Value score uses RICE: (Reach × Impact × Confidence) / Effort. Normalize both to 0–100, then plot on Impact-Effort matrix (Quick Wins, Big Bets, Fill-ins, Money Pits). Balances fixing user frustrations with pursuing innovation
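The scoring can be sketched in a few lines. The items and numbers below are invented, and the quadrant mapping is a simplification over the two normalized scores rather than the article's exact matrix; the formulas follow the summary:

```python
# Hypothetical items. Formulas per the summary:
#   Pain  = (Severity x Frequency) / Effort
#   Value = RICE = (Reach x Impact x Confidence) / Effort
items = [
    # name,              severity, frequency, reach, impact, confidence, effort
    ("Fix checkout error",      5,         4,  8000,    2.0,        0.9,    2.0),
    ("AI-powered search",       2,         2, 20000,    3.0,        0.8,    8.0),
    ("Settings cleanup",        5,         4,  1500,    1.0,        0.9,    1.5),
]

def normalize(scores):
    """Scale raw scores to a common 0-100 range."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1
    return [100 * (s - lo) / span for s in scores]

pain = normalize([sev * freq / eff for _, sev, freq, _r, _i, _c, eff in items])
value = normalize([r * i * c / eff for _, _s, _f, r, i, c, eff in items])

results = {}
for (name, *_), p, v in zip(items, pain, value):
    # Simplified quadrant assignment on the two normalized axes
    results[name] = ("Quick Win" if p >= 50 and v >= 50 else
                     "Big Bet" if v >= 50 else
                     "Fill-in" if p >= 50 else "Money Pit")
    print(f"{name:20s} pain={p:5.1f} value={v:5.1f} -> {results[name]}")
```

Normalizing both scores to 0–100 before plotting is what makes the pain-driven and value-driven views comparable on one matrix.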
Stop Speaking UX to People Who Speak Business
Executives don't speak UX. "We found 14 usability issues" is a list, not a decision. Translate: "Shipping now puts 90-day retention at risk, costing $X in churn." Friction in checkout isn't a UX issue—it's revenue at risk. High drop-off isn't poor flow—it's wasted marketing spend. End with a surgical ask: "We recommend a three-week delay to protect $X. We need a decision today." The translation isn't the executive's job
NNG: 10 Guidelines for Designing Your Site’s AI Chatbots
Helpful site-specific AI chatbots clearly state their capabilities, offer relevant prompt suggestions, and quickly signal they know what users are looking at
Prototyping: Designing for Uncertainty - A UX Writing Challenge on Real-Time Risk
A scenario: a nearby fire may or may not affect the user's commute. Key insight from Google Maps/Waze: in motion, the system should decide. Final copy (30/45 chars): "Route affected by fire / Rerouting to a safer path." Design: audio-first, glanceable, auto-reroute. The author used AI to simulate driving context. Lesson: UX lives in context
Experience: I ran a statistical analysis on my own job rejections
Job rejection analysis: 354 applications, 76.5% ghosted, 73% of rejections said nothing actionable. T-tests showed phrases like "after careful consideration" are statistically interchangeable — no signal of real deliberation. Role level didn't matter: identical rejections for junior and principal roles. Only 5% of rejections gave useful feedback. Most outcomes have nothing to do with qualifications — it's a design problem, not a candidate problem
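The phrase comparison can be illustrated with a toy two-sample t-test. The data below is invented, and scipy's `ttest_ind` stands in for whatever tooling the author actually used:

```python
from scipy.stats import ttest_ind

# Invented data: days until rejection for applications whose rejection
# contained "after careful consideration" vs. those that didn't. If the
# phrase signaled real deliberation, the first group should take longer.
with_phrase    = [10, 12, 14, 11, 13, 12, 15, 10]
without_phrase = [11, 13, 12, 14, 10, 12, 13, 11]

# Welch's t-test (equal_var=False) avoids assuming equal group variances
t_stat, p_value = ttest_ind(with_phrase, without_phrase, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# A large p-value means no detectable difference between the groups:
# the phrase carries no signal, which mirrors the author's finding.
```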
AI: How to Write a Qualitative Discussion Guide Using AI
Five-step workflow: structured brief, full client context, reference guide with annotation, Prompt Stack (section map first, then build section by section), and Client Master Brief for persistent memory (Claude Projects). The difference is what you put in before you ask. Brief AI like a senior researcher briefing a junior: clarity, context, and a strong example. Saves researcher time for strategic judgment
Case Study: Making Risk Transparent - UX Decisions Behind Silo Finance App
Redesign from protocol logic to user intent. Two user types: lenders (care about APR, risk) and borrowers (hate liquidation). Two vault types: Multi-Asset (diversified) and Single-Asset (isolated risk). Naming fixed first: "Lend" → "Earn", "Dashboard" → "Portfolio". For lenders: APR and risk front and center. For borrowers: health factor always visible. Configurator replaces multiple tiles. The problem isn't data—it's guidance. Naming is product design. Get language right and half the confusion disappears
Opinion: Not everything in design should be automated
User interviews create a human connection that no report or AI can replicate. You witness real people's hesitation, frustration, and excitement—not abstract "users." That memory changes how you design: decisions become responses to something you've actually seen, not just flows and metrics. Evaluating solutions through the lens of "would this help the person I spoke to yesterday?" grounds decisions in real interaction. That's the part of design the author would never automate away. It gives the work meaning
@uxdigest
Prioritize UX Research Recommendations: Combining Value and Pain-Driven Approaches
When UX Research Becomes a Decision System (and why it matters even more in an AI World)
NNG: Why User Panels Fail
AI: I Tried Using AI in UX Research — Here’s the Truth No One Talks About
Experience: How UX Thinking Helped Me Solve Chronic Disease (And Why AI Can’t)
Case Study: EcoDispose - Hassle free e-waste disposal at your fingertips
Opinion: Your UX research didn’t fail. Your expectations did
Basics: Why Familiar UX Wins - The Hidden Power Behind Jakob’s Law
@uxdigest
Criteo's UXR moved from reactive support to a Product Intelligence system that helps decide what to build and why. They built a shared repository, added intelligence, and repositioned around two moments: before building (strategic research) and after shipping (continuous CX KPIs). The sequence matters: invest in structure and clean data first, then deploy AI agents. Without structured data, AI creates noise; with strong signals, AI amplifies your system. 100% of stakeholders now report strategic impact
NNG: Why User Panels Fail
User panels can deteriorate in predictable ways, introducing bias and reducing their effectiveness for ongoing research
AI: I Tried Using AI in UX Research — Here’s the Truth No One Talks About
AI helped generate questions, surveys, pattern identification, and wireframes—making execution faster. But the real value came from users themselves. AI highlighted problems, but truly understanding user emotions required slowing down and reading between the lines. The common mistake: thinking AI can replace UX research. It can't feel frustration or emotional context. "AI brings speed. Humans bring understanding." Not replaced—amplified
Experience: How UX Thinking Helped Me Solve Chronic Disease (And Why AI Can’t)
A UX researcher cured her 29-year illness by identifying a genetic mechanism driving chronic inflammation (she argues Long COVID, MS, Parkinson's, obesity, and depression share one mechanism rather than being separate diseases). A cheap generic drug addresses the root cause. AI can't do this — it only sees what it's programmed to see. Solving complex problems requires applied curiosity, not pattern recognition. The Star Trek pill exists. We just have to be willing to see it
Case Study: EcoDispose - Hassle free e-waste disposal at your fingertips
Users hoard e-waste due to three barriers: no easy pickup, no awareness, no data trust. Research revealed the "Hoarding Paradox" — motivated users do nothing because every option feels exhausting. The solution: three interface modes (Simple, Eco, Tech) and a data-wipe flow that turns fear into control. Trust, not convenience, was the real design brief
Opinion: Your UX research didn’t fail. Your expectations did
When someone says "we already knew that" in a research readout, that's not a research failure—it's an expectation failure. The real question research answers isn't "what surprised us?" but "what do we now know well enough to act on?" Findings that feel "obvious" are good: they resolve ambiguity and create shared reality. Stop measuring research by how surprising it is. Measure it by how confidently the team moves after. Next time someone says "we already knew that," ask: "So why hadn't we acted on it yet?"
Basics: Why Familiar UX Wins - The Hidden Power Behind Jakob’s Law
Jakob's Law: users prefer your site to work like other sites they already know. They don't want to learn your interface—they want to recognize it. Familiarity feels effortless because our brains rely on recognition (fast) over recall (slow). Break this law only when the new pattern is genuinely better and anchored in familiarity. Users don't reward difference—they reward ease. The best interfaces don't feel new; they feel obvious
@uxdigest
When UX Research Becomes a Decision System (and why it matters even more in an AI World)
From “can you run a study on this?” to the backbone of Product Intelligence, the story of building UXR at Criteo.