UX Digest ⭕️
3.84K subscribers
15 photos
599 links
A regular selection of the best UX posts from English-language resources.

Not just fresh articles with the author's comments, but also a library of useful materials!

Russian-language materials are collected at @uxhorn

Contact for both channels: @lightmaker
Credible vs. Confidence Intervals: Different Meanings but Similar Decisions
Confidence intervals are hard to interpret correctly (no "95% probability" of containing the true value). Credible intervals do allow that natural interpretation. But both methods produce nearly identical numerical ranges. The difference is in what we can say, not the numbers. Use either, focus on clear communication. If endpoints lead to the same decision, you have enough data


Personalization vs. Customization: Crafting Tailored Experiences in UX
Personalization (system adapts for you) and customization (you configure for yourself) solve different problems. Personalization reduces effort but risks trapping users in past preferences. Best approach: combine both—personalization for a smart start, customization for ongoing control. Add exploration modes to break the loop. Designers shape choices, not just interfaces


NNG: AI Agents as Users
AI agents now interact with digital interfaces alongside humans. Designing for both requires rethinking what "user" means and prioritizing accessibility


AI: AI-moderated interviews - methodological error amplified
AI-moderated interviews collapse qualitative discovery and quantitative measurement into one flawed pass, committing "acontextual counting"—treating all responses as equally weighted. Scale (80,000 interviews) doesn't fix this: you can't count "it" before you know what "it" is. A classic mixed-methods design would work better


Prototyping: How I Built an Enterprise Design System for 50+ Insurance Apps — Without a Design Team
A solo designer built a design system for 50+ insurance apps by starting with design tokens (colors, spacing, typography) before components, enabling multi-brand theming without duplicate work. Then built 60+ accessible components, prioritized adoption by speaking engineers' language. Results: 80% less inconsistency, 40% faster handoff. Start with tokens, document as you build


Opinion: The trust-latency gap - why the future of UX is intentionally slower
As AI speeds up decisions, trust decreases. The "trust-latency gap" is the distance between execution speed and the time humans need to feel confident. For high-stakes actions, "strategic friction" (intentional delays like confirmation steps) builds trust. The key question: not "how fast?" but "how fast should it feel?"


Visual: Make the user look where you want them to look - a guide to guiding attention
Guide to directing attention in dashboards using the mantra: overview first → details on demand. Techniques: layout (left-to-right), size (big numbers first), color (highlight key sections), arrows, text hints, icons, and interactivity. Design is not about beauty—it's about guiding attention. Consistency is key


Basics: From Research to Design - How UX Turns User Behavior into Real Solutions
A structured process: affinity mapping → thematic clustering (trust precedes action, fear is a UX constraint) → behavioral model (explore→verify→act, not search→select→book) → design principles (reassurance before action, reduce cognitive load). Core insight: users are not slow—they are careful. In high-stakes scenarios, UX is about making users feel certain enough to act


Interesting: Turns Out, Everyone Does UX. They Just Don’t Know It Yet
A UX designer started a podcast and discovered that people in other fields (architects, artists) already do UX thinking—observing behavior and solving for users—they just don't have a name for it. The podcast itself became a practice in asking good questions and listening without steering


@uxdigest
How To Improve UX In Legacy Systems
A guide to improving UX in legacy systems—slow, decade-old "black boxes" critical to daily operations. One broken legacy step makes the entire product feel broken. Start by mapping workflows and dependencies. Choose a strategy: incremental migration, parallel migration (beta alongside legacy), or legacy UI upgrade + public beta. Build stakeholder trust, report progress. Revamping legacy is tough, but the impact is enormous


BOOK EXCERPT: The Crisis Worth Using
Crisis engineering uses organizational crises as windows for rapid change. Five indicators: fundamental surprise, sensemaking failure, core process degradation, high visibility, rigid deadline. When these align, crises create opportunities to build something better. The question isn't if a crisis will reshape you—it's whether you'll be ready to direct it


What Resume Inflation Is Really Telling Us
Resume inflation is a symptom of a broken system. Companies post unrealistic job descriptions, so candidates rationally stretch the truth to compete. Honest candidates get punished. The solution: honest job descriptions and honest resumes. The resumes aren't the disease—they're the fever. Fix the system


NNG: Handmade Designs - The New Trust Signal
In an era of AI-generated-everything, AI-fatigued users want designs that look like they were made by a person


AI: The AI Trap for Designers in 2026 - Why Constantly Learning New Tools Is a Dead End — and How to Become a Truly AI-Powered UX Designer
Designers who chase every new AI tool are mistaking technical proficiency for real growth. Instead, focus on three rules: design for user "intent" (not just clicks), obsess over the final 5% of execution (edge cases, micro-interactions), and use AI as a sparring partner (simulate personas, get strategic advice) rather than a content generator. The core message: AI-amplified designers will replace tool-chasers, but value lies in strategic thinking, not mastering every plugin


Prototyping: Lean UX Research - Validating an MVP Quickly and Cheaply
A guide to validating an MVP with minimal budget (~$100, two weeks). Combine a lightweight survey (direct messaging for responses, not just posting links) with an unmoderated field study using the Experience Sampling Method (ESM)—an automated diary study with daily check-ins to capture real-time behavior, not memory. Turn insights into testable hypotheses (e.g., daily goal-alignment tasks). Key takeaway: even a short survey or mini-ESM beats designing in isolation


Experience: We didn’t mean to build this - engagement at any cost
How well-meaning designers become complicit in broken systems. Success metrics focused on engagement ignore human costs. When flawed briefs pass to AI agents, each step multiplies harm without accountability. Ethical frameworks exist but are ignored because they hurt profit. Profits are chosen over people. Good intentions aren't enough—designers must learn to refuse


Opinion: The Entropy Offset - Friction is the new Effort
The classic Value/Effort ratio is obsolete because AI has reduced implementation effort. Replace it with Value/Friction, where friction = user cognitive load (discovery + adoption). Prioritize High Value/Low Friction first. User attention is now the bottleneck, not development time. Ask "Should we do this?" not "What order?"


Metrics: Behavioral Loops and the Architecture of Retention
Three loop types: risk reduction (Slack), artificial (Candy Crush), hybrid (TikTok). Key insights: transition friction breaks momentum; internalization (deliberate → automatic) is the milestone. Metrics: loop depth, return elasticity, engagement amplitude (flat = fading). Optimizing individual features misses the point—a loop only works as a whole


Design impact: outcomes over output
Design impact is often measured by activity (screens, components, research sessions)—describing what was done, not what changed. Focus on three levels: experience quality (task success, error rate), product outcomes (conversion, retention), and organizational impact (faster delivery, less rework). Define expected outcomes upfront, combine quantitative and qualitative data, and speak business language: "We reduced drop-off" beats "We improved the UI." Design impact is about what changes, not what we create


🎥 NNG: Analyzing Good Designs - Figma’s Shortcut
In Figma’s Shortcut, typography and other elements are aligned to a grid, a clear visual hierarchy is established, and design elements are used consistently


Case Study: Improving the Experience of Visiting Public Hospitals
A UX case study focused on hospital visitors—an overlooked user group facing disorientation and stress. The solution: a mobile web tool where visitors scan a QR code to register, find patient rooms, and get step-by-step navigation guidance


Prototyping: Dark Mode Design Systems - A Complete Guide to Patterns, Tokens, and Hierarchy
Dark mode needs a design system foundation, not an afterthought. Key principles: 4 surface elevation levels with luminance stepping (not shadows), semantic tokens, and perceptual color mapping (preserve hue, adjust luminance). Design dark-first, use mode-based organization, and export via CSS variables. Avoid pure black (causes eye strain), respect system preference, and offer a manual toggle
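The luminance-stepping idea can be sketched as a tiny token generator. Note this is an illustrative sketch only: the hue/saturation values, the 3% step size, and the token names are invented assumptions, not taken from the guide.

```python
BASE_LIGHTNESS = 7   # % lightness of the lowest surface; avoids pure black
STEP = 3             # lightness added per elevation level, instead of shadows

def surface_token(level):
    # Semantic surface token: hue and saturation are preserved,
    # only luminance steps up with elevation (hypothetical values)
    return f"--surface-{level}: hsl(220 15% {BASE_LIGHTNESS + STEP * level}%);"

def dark_theme_css(levels=4):
    # Export the four elevation surfaces as CSS custom properties
    body = "\n  ".join(surface_token(i) for i in range(levels))
    return ':root[data-theme="dark"] {\n  ' + body + "\n}"
```

Generating tokens like this keeps dark mode a system concern (one source of truth, exported as CSS variables) rather than per-screen color picking.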


AI: AI Adoption in UX - Identify Your Level and Understand Where You Stand
A five-stage maturity framework: Awareness → Embracing → Experimentation → Scaling → Transformation. True AI value balances efficiency, user impact, and business impact—not just speed. Progress isn't about using more tools but closing gaps in skills, workflows, or alignment. The key question: "Why aren't we seeing better outcomes yet?" Use AI wisely and purposefully, not just more


Book: The Best Books on UX Research — Book 1. It’s Our Research
Key lesson: interview your stakeholders before your participants. Five questions to ask PMs, engineers, and designers: What are we building and why now? What unknowns keep you up at night? What assumptions need verification? Who are the users? What are the priorities and timeline? Don't skip this step


Opinion: The Invisible Impact of Design Decisions We Rarely Talk About
User-centered design alone isn't enough—it ignores broader consequences. Designers must consider non-users, future users, and the larger system. Practical steps: ask "what happens next?", challenge default metrics, and design with restraint. Good design isn't just about making things work—it's about understanding ripple effects


Interesting: The Unstable Shelf - Rising to the Tabletop
A critique of the Red Rising board game as a case study in how passion for source material can hurt design. Slavish loyalty to the books (112 unique cards, character pair bonuses) created an overcomplicated, messy experience. Faithfulness came at the expense of player experience. Passion can lead to worse products


KPIs Are Not the Problem: Why Solving the Right UX Issues Improves Performance
KPIs are symptoms, not causes. Teams skip diagnosis and jump to A/B tests. Framework: problem unclear → research; solution clear → test. Users need two answers: "Why should I?" (copy) and "Can I easily?" (design). Example: removing login before checkout increased conversion 45%. Research creates understanding, experimentation creates proof. KPIs lag experience quality. Fix the experience, not the metric


Research: 2026 Emerging Technology Trends from J.P. Morgan
Four predictions: 1) Context-driven architectures (Physical AI, knowledge graphs, MCP, RL environments). 2) Inference demand drives AI buildout. 3) Intent replaces app switching (agentic browsers, AI-native workspaces). 4) AI simulation enhances testing (synthetic users). Core theme: AI success depends on agents securely accessing relevant data and tools. Governance must evolve with adoption


NNG: Boost Design Autonomy with an Information Pipeline
A four-step framework for building influence over product direction by closing the information gaps that large, complex organizations create


Prototyping: We Don’t Want Menus. We Want Conversations
People don't want to navigate menus—they want to state their problem once and get it resolved. Traditional systems force users into predefined categories, but users think in stories, systems think in labels. Shift from screen-first to intent-first design: ask what users need, not where to go. People don't wake up wanting to navigate interfaces—they wake up wanting problems solved. The best experience begins with "Here's what I need"


AI: Is your AI research giving you a False Negative?
AI can miss important insights in qualitative data because LLMs rely on frequency—if a user says something critical once or uses subtle language, AI may ignore it. The fix: treat AI as a junior analyst. Manually code some data first, use multi-layer prompting, and maintain a "chain of custody" log. If you hand off data blindly to AI and it misses a pain point, you'll never know it was there


Case Study: We thought we knew our users. Then we watched
A case study on field observation for a palm-scanning payment device. At a food market, people walked away or refused for religious reasons ("the mark of the beast"). At a corporate office, trust was higher. Key insight: moderated sessions can't capture real-world reactions—field observation reveals a more honest picture of users


Research without commitment is just expensive listening
Most DX discoveries fail not in research but in the gap between findings and commitment. Phase 4 requires: align to strategy, prioritize ruthlessly (11 opportunities kept active → 18 months later only 3 shipped), and define success metrics. Discovery is a continuous practice, not a project. Builders need direct exposure to users. Platform as product means earning adoption, not mandating it. There is no "later"—research isn't something you sprinkle on top


Experts don’t read data. They look for what’s wrong. Designing for people who already know what “normal” looks like
Experts scan for deviations from their mental model of "normal"—they don't read everything. Design for what should be impossible to miss, not for completeness. Hierarchy > completeness. Anomalies must surface immediately. Design for recognition, not understanding. The deviation is the center of attention


NNG: Less Chat, More Answer - Site AI Chatbots Need to Get to the Point
Users turn to site-specific chatbots for quick answers, not a conversation. Design responses that are direct, scannable, and easy to expand when needed


Prototyping: 6 steps to create a project that won’t end up in the graveyard of good ideas
A six-stage framework: Discovery, Conceptualisation, Design, Testing, Development, and Listening (continuous feedback). Core insight: success comes from a structured process where each stage validates the next—not from launching a brilliant idea at full speed. Don't skip discovery or testing. Never underestimate listening post-launch


AI: How Agentic AI Reimagines User Journeys - A Psychological Framework
Agentic AI shifts UX to "human-agent collaboration." Three principles: 1) Autonomy vs. Control—design for trust, boundaries, and user override. 2) Mental Models—make agent thinking visible. 3) Goal Alignment—shared goals and progress feedback. The future is partnership, not tool usage. UX builds relationships, not just paths. From Victor Yocco's forthcoming book. UX moves from feature-level to strategic imperative


Visual: Speed Without Direction Is Just Expensive Motion
Teams ship faster with AI but removed research—the function that creates direction. The Design Research Layered Model has five layers (foundation, strategy, lifecycle, methodology, application). AI makes this worse via the "black box shortcut." Most teams lack direction, not speed. Research isn't a tax on speed—it's what makes speed productive. Winning teams understand first, not ship first


Opinion: Acquired Savant Syndrome in Design - Skill, Obsession, or Exploitation?
UX culture romanticizes obsession and burnout—68% feel expected to "go beyond healthy limits." This is a systemic risk, not a personal issue. Impacts: mental health crisis, degraded quality. Fix: emotional recovery time, reward reflection, normalize fatigue conversations. Real leadership isn't output under pressure—it's thriving under principles


Basics: Infinite Scroll & Dopamine
Infinite scroll removes decision points and replaces them with a dopamine loop (anticipation of uncertain reward—same as slot machines). You don't decide to spend 45 minutes on TikTok—you just do. Pagination restores decision points. Build interfaces without hijacking dopamine. Calm Technology offers a starting point. Build things you're not ashamed of. A call for attentional design research


A Review of Experiments with Synthetic Users
A review of 12 studies yielded 9 encouraging and 14 discouraging findings. Synthetic users match some means but fail on details (reduced variance, shallow depth). Only 3 of 14 classic studies replicated. Best use: querying collected data—not prediction. Critical decisions shouldn't rely on them yet. Correlation ≠ equivalence


From User Research to Building: Six Months Later
A researcher transitioned to a "Builder" role (no official title). Key lessons: switching from no-code AI tools to Cursor + terminal was a huge unlock. Centralized tools aren't critical anymore—what matters is an "intelligence layer" (shared context, data). She helped researchers use Cursor with Qualtrics and Snowflake without SQL. Some colleagues feel AI killed creative thinking. No clear role exists—confusion is normal


🎥 NNG: Field Guide to Explaining UX Strategy
Simple, relatable ways to explain complex UX strategy concepts like UX vision, goals, OKRs, and outcomes. Translate UX strategy into language anyone on your team can understand


Prototyping: SONO - Designing a Mood-Based Music Discovery Experience
A case study about a music app using AI (Aria) to match songs to user emotions instead of listening history. Usability testing showed the app worked, but users found it generic: "It didn't really listen to me." Key insight: usability ≠ value. When designing around emotion, people expect the experience to feel real. The project became less about music and more about what "personal" truly means


Case Study: Travel Booking
Redesign of an Australian bus service with 0.29% conversion. Data showed demand existed but the booking funnel was broken. Usability testing revealed critical issues: price calendar not found, cancellation policy invisible. Fixes: calendar opens by default, specific trust strip above pay button. Testing doesn't validate designs—it breaks them


AI: AI in practice - the week AI got scary, political, and expensive
Anthropic unveiled Mythos—the most powerful AI ever (100% on Cybench, finding thousands of zero-day vulnerabilities)—and deemed it too dangerous for public release. OpenAI proposed robot taxes and a four-day workweek. Meta abandoned open source, going proprietary. Anthropic passed OpenAI in revenue. The one-model-fits-all era is over


Basics: The Rule Nobody Teaches You - Rapport Before Research
People give "safe answers," not the truth—that's the data you lose without rapport. Rapport isn't about being friendly—it's about being real. Code-switching (using their language) changes everything. Rapport opens space for their truth; leading fills it with yours. The script is a starting point. The goal isn't a smooth session—it's the truth. Keep your research questions front of mind, not the guide. Everything else is flexible


Interesting: Privacy-first connections - Empowering social experiences at Airbnb
Airbnb built social features with privacy by design: separate User (internal) from Profile (public). One user can have multiple profiles (Host, Guest, Experience-specific), each with its own ID. Decoupling User ID from Profile ID enables context-aware visibility and privacy controls. Goal: meaningful connections while guests control their privacy


Prioritize UX Research Recommendations - Combining Value and Pain-Driven Approaches
A hybrid framework combining Pain-Driven and Value-Driven approaches. Pain score = (Severity × Frequency) / Effort. Value score uses RICE: (Reach × Impact × Confidence) / Effort. Normalize both to 0–100, then plot on Impact-Effort matrix (Quick Wins, Big Bets, Fill-ins, Money Pits). Balances fixing user frustrations with pursuing innovation
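The scoring above can be sketched in a few lines. The function names and the 50-point quadrant cutoff are illustrative assumptions; the formulas themselves are the ones the framework states.

```python
def pain_score(severity, frequency, effort):
    # Pain-Driven score: (Severity x Frequency) / Effort
    return (severity * frequency) / effort

def rice_score(reach, impact, confidence, effort):
    # Value-Driven score via RICE: (Reach x Impact x Confidence) / Effort
    return (reach * impact * confidence) / effort

def normalize(scores):
    # Rescale a list of raw scores to a common 0-100 range
    lo, hi = min(scores), max(scores)
    return [100 * (s - lo) / (hi - lo) if hi > lo else 50.0 for s in scores]

def quadrant(value, effort, cut=50):
    # Place a normalized item on the Impact-Effort matrix
    if value >= cut:
        return "Quick Win" if effort < cut else "Big Bet"
    return "Fill-in" if effort < cut else "Money Pit"
```

Normalizing both scores to the same 0-100 scale is what makes pain-driven fixes and value-driven bets comparable on one matrix.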


Stop Speaking UX to People Who Speak Business
Executives don't speak UX. "We found 14 usability issues" is a list, not a decision. Translate: "Shipping now puts 90-day retention at risk, costing $X in churn." Friction in checkout isn't a UX issue—it's revenue at risk. High drop-off isn't poor flow—it's wasted marketing spend. End with a surgical ask: "We recommend a three-week delay to protect $X. We need a decision today." The translation isn't the executive's job


NNG: 10 Guidelines for Designing Your Site’s AI Chatbots
Helpful site-specific AI chatbots clearly state their capabilities, offer relevant prompt suggestions, and quickly signal they know what users are looking at


Prototyping: Designing for Uncertainty - A UX Writing Challenge on Real-Time Risk
A scenario: a nearby fire may or may not affect the user's commute. Key insight from Google Maps/Waze: in motion, the system should decide. Final copy (30/45 chars): "Route affected by fire / Rerouting to a safer path." Design: audio-first, glanceable, auto-reroute. The author used AI to simulate driving context. Lesson: UX lives in context


Experience: I ran a statistical analysis on my own job rejections
Job rejection analysis: 354 applications, 76.5% ghosted, 73% of rejections said nothing actionable. T-tests proved phrases like "after careful consideration" are interchangeable — no signal of real deliberation. Role level didn't matter: identical rejections for junior and principal roles. Only 5% of rejections gave useful feedback. Most outcomes have nothing to do with qualifications — it's a design problem, not a candidate problem
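As a rough illustration of the kind of comparison the author describes, here is a minimal Welch's t-statistic on two made-up samples. The data, the feature (email word count), and everything else here are hypothetical; the article's actual features and results are not reproduced.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    # Welch's t-statistic for two independent samples with unequal variances
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical word counts of rejection emails for junior vs. principal roles
junior = [42, 39, 45, 41, 40]
principal = [41, 43, 40, 44, 39]
t = welch_t(junior, principal)  # |t| near 0 suggests no real difference
```

A statistic like this is how one would check whether rejection wording actually varies with role level, or is interchangeable boilerplate.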


AI: How to Write a Qualitative Discussion Guide Using AI
Five-step workflow: structured brief, full client context, reference guide with annotation, Prompt Stack (section map first, then build section by section), and Client Master Brief for persistent memory (Claude Projects). The difference is what you put in before you ask. Brief AI like a senior researcher briefing a junior: clarity, context, and a strong example. Saves researcher time for strategic judgment


Case Study: Making Risk Transparent - UX Decisions Behind Silo Finance App
Redesign from protocol logic to user intent. Two user types: lenders (care about APR, risk) and borrowers (hate liquidation). Two vault types: Multi-Asset (diversified) and Single-Asset (isolated risk). Naming fixed first: "Lend" → "Earn", "Dashboard" → "Portfolio". For lenders: APR and risk front and center. For borrowers: health factor always visible. Configurator replaces multiple tiles. The problem isn't data—it's guidance. Naming is product design. Get language right and half the confusion disappears


Opinion: Not everything in design should be automated
User interviews create a human connection that no report or AI can replicate. You witness real people's hesitation, frustration, and excitement—not abstract "users." That memory changes how you design: decisions become responses to something you've actually seen, not just flows and metrics. Evaluating solutions through the lens of "would this help the person I spoke to yesterday?" grounds decisions in real interaction. That's the part of design the author would never automate away. It gives the work meaning


When UX Research Becomes a Decision System (and why it matters even more in an AI World)
Criteo's UXR moved from reactive support to a Product Intelligence system that helps decide what to build and why. They built a shared repository, added intelligence, and repositioned around two moments: before building (strategic research) and after shipping (continuous CX KPIs). The sequence matters: invest in structure and clean data first, then deploy AI agents. Without structured data, AI creates noise; with strong signals, AI amplifies your system. 100% of stakeholders now report strategic impact


NNG: Why User Panels Fail
User panels can deteriorate in predictable ways, introducing bias and reducing their effectiveness for ongoing research


AI: I Tried Using AI in UX Research — Here’s the Truth No One Talks About
AI helped generate questions, surveys, pattern identification, and wireframes—making execution faster. But the real value came from users themselves. AI highlighted problems, but truly understanding user emotions required slowing down and reading between the lines. The common mistake: thinking AI can replace UX research. It can't feel frustration or emotional context. "AI brings speed. Humans bring understanding." Not replaced—amplified


Experience: How UX Thinking Helped Me Solve Chronic Disease (And Why AI Can’t)
A UX researcher cured her 29-year illness by finding a genetic mechanism driving chronic inflammation (Long COVID, MS, Parkinson's, obesity, depression are one mechanism, not separate diseases). A cheap generic drug addresses the root cause. AI can't do this — it only sees what it's programmed to see. Solving complex problems requires applied curiosity, not pattern recognition. The Star Trek pill exists. We just have to be willing to see it


Case Study: EcoDispose - Hassle free e-waste disposal at your fingertips
Users hoard e-waste due to three barriers: no easy pickup, no awareness, no data trust. Research revealed the "Hoarding Paradox" — motivated users do nothing because every option feels exhausting. The solution: three interface modes (Simple, Eco, Tech) and a data-wipe flow that turns fear into control. Trust, not convenience, was the real design brief


Opinion: Your UX research didn’t fail. Your expectations did
When someone says "we already knew that" in a research readout, that's not a research failure—it's an expectation failure. The real question research answers isn't "what surprised us?" but "what do we now know well enough to act on?" Findings that feel "obvious" are good: they resolve ambiguity and create shared reality. Stop measuring research by how surprising it is. Measure it by how confidently the team moves after. Next time someone says "we already knew that," ask: "So why hadn't we acted on it yet?"


Basics: Why Familiar UX Wins - The Hidden Power Behind Jakob’s Law
Jakob's Law: users prefer your site to work like other sites they already know. They don't want to learn your interface—they want to recognize it. Familiarity feels effortless because our brains rely on recognition (fast) over recall (slow). Break this law only when the new pattern is genuinely better and anchored in familiarity. Users don't reward difference—they reward ease. The best interfaces don't feel new; they feel obvious


Where UX Meets Cybersecurity: Designing Systems People Actually Use Safely
Security and UX aren't opposites. Security introduces friction; UX reduces it. Poor balance makes users bypass protections. Most breaches come from human error—UX prevents this with clear flows and feedback. Design better experiences around security constraints (risk-based authentication). Users don't see encryption; they experience interfaces. A secure system no one can use fails. A usable system without security fails. Goal: safe and easy to use


Everyone Says ‘Just Look at Competitors.’ Most People Look at the Wrong Things
Most competitive analysis is just inventory (screenshots, feature lists) without asking why. Every design decision is a bet on who the user is. Instead of "what do they have?", ask: what question am I trying to answer? what job does this do for whom? does that user sound like mine? The habit of asking separates a feature list from a point of view. The goal isn't certainty—it's asking a better question than "do they have this feature?"


🎥 NNG: Use AI Responsibly in Analysis
AI can assist your UX research analysis — but shouldn't lead it. Discover four responsible ways to use AI as a thought partner while keeping critical thinking and interpretation in your hands


Prototyping: Session Timeouts - The Overlooked Accessibility Barrier In Authentication Design
Session timeouts disproportionately affect users with disabilities (motor, cognitive, visual). Common failures: silent timeouts, no extension, data loss. WCAG requires adjustable time limits. Fix: advance warnings, extend functionality, auto-save. Simple fixes


AI: Can AI Detect Usability Problems?
AI "watches" videos by sampling a few frames per second and generating plausible descriptions—like "autocorrect on steroids." It misses subtle behaviors and can hallucinate. When asked to analyze a usability test, ChatGPT generated 7 plausible problems, but key questions remain: which are real vs hallucinations? How reliable and valid is it compared to humans? AI outputs need validation


Case Study: Understanding how children interact with digital devices in rural libraries of Karnataka
A field study in rural libraries (Kolar) found that sharing one computer means only one child participates at a time—physical activities work better for groups. Children who struggled with a mouse used smartphones easily (audio search, visual YouTube UI). YouTube removes friction, guides visually, and is FUN—no barrier. Librarians worry about trust and AI slop. The library is an informal space—learning can't be forced, must be fun. Designing for shared settings and Kannada-first readers


Basics: You Are Not Your User - The Mindset That Changes Everything About How You Design
Designers suffer from the curse of knowledge: they can't imagine what it's like not to know their own interface. When users struggle, designers think "but it's right there." The fix: stop asking "is this clear?" and ask "clear to whom, starting from what prior knowledge?" Most usability problems are mental model gaps, not information gaps. Tooltips don't fix this. Shift from "user isn't seeing it" to "interface isn't showing it properly."


De-bugging the Soul: Navigating the ‘Upside Down’ of UX and Mental Health
Six years bridging UX research and mental health advocacy. Growth lives in friction—healing is messy, not seamless. As AI offers "frictionless connection" (agreeable, no conflict), we risk losing what makes us human. Your rhythm is the only one that matters. You don't have to match the world's pace to move forward. Being able to say "I'm still here" is the ultimate success


🎥 NNG: 6 Common Stakeholder Obstacles
Stakeholder obstacles aren't character flaws; they're structural problems with practical fixes. Learn strategies to increase UX maturity through direct user observation, streamline stakeholder involvement, manage difficult personalities with intention, align competing goals, navigate cultural communication styles, and establish a working process


AI: When Your Agent Has All the Data and Still Gets It Wrong - A Lesson from Hans-Georg Gadamer
AI agents fail when they answer the typed question, not the meant one. The agent's "horizon" never meets the user's actual context (Gadamer). Fix: surface the user's intent, treat retrieval as horizon-building, and design for clarification. Ask: "has the agent deeply met the user's horizon?"


Opinion: Decision Fatigue and Interface Design
Every decision depletes mental energy. When depleted, users become impulsive and easier to exploit—cookie banners make refusal harder, upsells appear after users are already tired. Solutions: progressive disclosure, fewer options, and defaults that serve users (not businesses)


Basics: What Startups Got Right — By Listening to Their Users Early
Listening to users early saves startups from costly mistakes. Case studies: a fintech uncovered cultural saving behaviors; a founder's target users were completely wrong ("saved me money and precious years"); a zero-to-one product identified key segments before launch; a diagnostic company mapped barriers pre-entry. Build with users, not just for them


@uxdigest
Speed is not a strategy
Taking a beat before building leads to products that last. Without friction, we risk moving faster in the wrong direction. Step-change innovation comes from carving out space to think—diagnosing root problems, diverging before converging. When everyone moves at lightning speed, those who slow down first to figure out what to build will end up moving fastest toward a solution. The pause isn't lost time—it's the work


Risk Intelligence Dashboard Design – A Guide for Product Teams
Start with workflow, not data. Build KRIs (measurable, predictive, tied to impact) with clear thresholds. Design for exploration (heat maps, trajectory charts), not just display. Reduce cognitive load via progressive disclosure. Integrate AI only where it adds genuine depth. If analysts export data into spreadsheets, the dashboard isn't doing its job


NNG: Selection Criteria - How to Pick Your Participants
Rigorous selection criteria protect study validity. Learn how to define inclusion, exclusion, and diversity criteria to avoid costly misrecruits


Prototyping: The Psychology of Nudges - Why the Smallest Design Element Can Shift the Biggest Outcomes
The ethical line: who benefits—user or platform? Defaults increase acceptance 60%+. All dark patterns are nudges, but not all nudges are dark patterns. The line is crossed when informed consent is removed or the platform benefits at the user's expense. Ethical checklist: benefit the user first, easy to undo, intent clear. Nudges reflect who wields them


AI: Thoughtful AI implementation for UXR leaders
AI should support, not replace, research quality. Don't use AI for research questions (output is shallow). Use it to clean survey data (but review after). Label AI-generated content. Ask: good output? saves time? cost-effective? Most answers are no. Speed can kill quality


@uxdigest
European airline apps: state of UX 2026
Public ratings hide reality: recent reviews average 2.3 stars (inflated by bot-like reviews and historical averaging). Legacy carriers outperform budget carriers. Chatbots fail on complex requests ("capability cliff")—users now share tactics to reach humans. Public ratings are not a meaningful UX measure


The Real Reason Your Design Team Burns Out (And How to Fix It)
Design teams burn out from friction (missing files, changing briefs, unclear decisions), not hard work. Fix: clarify direction first, document decisions, maintain one source of truth, build mentorship into daily work. Start a Friction Log—note every slowdown for one week. Every system is perfectly designed to get the results it gets


NNG: Information Seeking in China - A Different Ecosystem, Familiar Behavior
Information seeking in China is driven by mobile social-media apps. But how users prompt and engage with genAI mirrors what we've seen in the West


Prototyping: Designing Stable Interfaces For Streaming Content
Streaming content causes scroll pull, layout shift, and costly DOM updates. Fix: track user scroll intent, write into live text nodes (don't rebuild DOM), and batch updates per frame. Handle interrupted streams: clear buffer, mark incomplete, add retry


AI: The right touch - mapping AI presence to user intent
Framework levels: shoulder tap (nudge), back-and-forth (conversational), let me help (generates), level 0 (avoid unnecessary generation). Confidence mapping: high → act directly, moderate → clarify, low → ask before generating, very low → nudge. The key decision isn't which model—it's knowing when the system should step back
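The confidence mapping above is essentially a threshold-to-action lookup. A hedged TypeScript sketch; the numeric thresholds are illustrative assumptions, not from the article:

```typescript
type AgentAction = "act" | "clarify" | "ask" | "nudge";

// Map model confidence (0..1) to an interaction posture.
// Threshold values here are placeholders for illustration only.
function mapConfidence(confidence: number): AgentAction {
  if (confidence >= 0.9) return "act";     // high: act directly
  if (confidence >= 0.7) return "clarify"; // moderate: clarify first
  if (confidence >= 0.4) return "ask";     // low: ask before generating
  return "nudge";                          // very low: just nudge
}
```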


@uxdigest
What building UX Research practices taught me about scaling culture
The real challenge isn't logistics—it's helping the organization learn to listen. The Three Cs: Credibility (win trust through measurable impact), Connection (make research contagious via shared rituals), Continuity (build infrastructure to outlast you). Key lesson: visibility isn't influence. The most effective researchers are translators, not just method experts. That's the growth that lasts


How to Interpret a Rating Scale Without Historical Data
UX rating scales are negatively skewed (midpoint isn't "average"). Using SUS distribution as reference: Good = 80% of scale (4.2/5, 5.8/7), Average = 70% (3.8/5, 5.2/7), Poor = 50% (midpoint). Formula: Target / (100 / (MaxRating−1)) + 1. Best guesses until you collect your own data
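The formula can be verified with a tiny helper — a sketch only; `targetRating` is an assumed name, not from the article:

```typescript
// Convert a percentage-of-scale target into a raw rating:
// rating = target% / (100 / (maxRating - 1)) + 1
function targetRating(percentOfScale: number, maxRating: number): number {
  return percentOfScale / (100 / (maxRating - 1)) + 1;
}
```

For example, the "Good" target of 80% gives 4.2 on a 5-point scale and 5.8 on a 7-point scale, matching the benchmarks above, and 50% lands on the scale midpoint.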


The Dunning-Kruger Effect in User Research: Why Users Don’t Know What They Want
Users confidently state preferences that don't match actual behavior. Four biases distort self-reports. Behavioral data is the gold standard. Experts underestimate themselves; confident voices are often wrong. Don't ask users to be experts on themselves—observe them instead


NNG: UX Writing - FAQs from Practitioners
Get answers to frequently asked questions about UX writing from attendees of NN/G’s Writing Compelling Digital Copy course


AI: Discovery is the work AI gives back
94% of organizations use AI but see no significant value—not an adoption problem, but a framing problem. Most use AI to do existing work faster. Durable returns require different work: asking which problems, customers, and offerings are still worth building. AI doesn't answer these questions—it makes them more urgent. AI is not a productivity revolution—it's a competitive reset


Experience: UX Isn’t Universal - What I Learned After Leaving the U.S. Job Market for Taiwan
After hundreds of US applications with no offers, the author moved to Taiwan and quickly found work. Cultural context shapes research—even bilingual interviews felt different. Stakeholder alignment replaced problem discovery; clients preferred traditional methods. UX isn't universal. She left not because Taiwan's culture is worse, but because it didn't fit her practice


Case Study: Beyond A/B Testing, Building a Real-Time Research Engine for a Live Platform Redesign
A 6-month e-commerce redesign used continuous research (surveys + usability testing). Key findings: hidden delivery window (63% switched), discount code leaving checkout (23% abandoned), poor category naming (43% struggled). Results: engagement +35%, conversion +21%. No major decision moved without behavioral evidence. Optimisation is the architecture for sustainable growth


Opinion: Steve Jobs was right. And so is user research
People misquote Steve Jobs to dismiss user research. He wasn't against understanding users—he was an obsessive observer of friction and workarounds. Discovery produces innovation: unexpected workarounds, contradicted mental models, the unasked question. Jobs's genius isn't replicable, but process is. Great ideas come from discovery, and discovery comes from process


@uxdigest