Amazon Echo Show Usability Study

Designed and executed usability studies to evaluate product discovery and feature comprehension of Echo Show devices on the Amazon mobile app. Synthesized qualitative and quantitative data to uncover user mental model misalignments, providing actionable recommendations on UI architecture and badge systems to enhance feature discoverability for the new Alexa+ service.

TIMELINE

Jan - March 2026

INSTITUTION

University of Washington, HCDE

Amazon, User Experience Research Team

TEAM

UW HCDE – 3 Designers, 1 Product Manager

Amazon Consumer Devices – User Research & Marketing partners

MY ROLE

UX Researcher, Moderator

OVERVIEW

In Winter 2026, as part of the Usability Studies course at UW HCDE, we partnered with Amazon's UX Research team to evaluate their mobile shopping experience for Echo Show devices, identifying friction points in product differentiation and feature discoverability. When browsing the Amazon app, users struggle to differentiate between visually similar Echo Show models and frequently overlook or misunderstand the newly introduced "Alexa+" feature. Our goal was to uncover these critical usability barriers and provide actionable solutions to improve the shopping journey.

KEY CONTRIBUTION

Interaction Map Definition

Usability Task/Post-Task Questionnaire/Screener Design

Session Moderation

Data Synthesis & Reporting

Project Timeline Management

STUDY OVERVIEW

When shopping on the Amazon mobile app, users typically begin their evaluation on the Search Results Page, relying on limited information to differentiate between visually similar Echo Show models. This study investigates whether the current mobile experience effectively helps users distinguish between these models and accurately comprehend the new Alexa+ feature prior to purchase. To capture an unbiased, first-time buyer perspective, we recruited active Amazon electronics shoppers who are the primary technology decision-makers for their households, specifically excluding current Echo Show owners and industry insiders.

STUDY GOAL

Primary Goal
The primary goal of this study is to evaluate how effectively Amazon's mobile experience supports clear and accurate early-stage understanding of the Echo Show product line during initial browsing.

Key Areas of Evaluation
- How imagery, titles, and badges communicate device differences
- Whether visual UI cues support correct feature inference
- Whether Search Results provide enough information for comparison
- How clearly Alexa+ is explained above the fold

RESEARCH QUESTION

RQ1 — Model Differentiation
How accurately can users differentiate between Echo Show models (5, 5 Kids, 8, 11, 15, and 21) based on search results and initial product detail page exposure?

RQ2 — Alexa+ Discoverability
How effectively do users identify and understand Alexa+ as a feature of Echo Show devices when viewing search results page elements, including images, titles, badges, and brief descriptions?

METHODOLOGY

Dates: Feb 19 – Feb 23, 2026
Sessions: 8
Format: Remote moderated sessions via UserTesting
Test Artifact: High-fidelity prototype of the Amazon app Search Results Page & Product Detail Page
Methods:
Moderated Probing – giving tasks and asking follow-up questions to clarify confusion points and expectations.
Think-Aloud Protocol – capturing real-time cognitive processes and decision-making.
Goal: Assess the usability of the Echo Show shopping experience using the methods above.

PARTICIPANTS

Demographics: Ages 26–50, diverse professions (QA Engineer, Sales, Students, etc.)
Tech Familiarity: Ranged from "Somewhat Familiar" to "Extremely Familiar".

We recruited 8 diverse participants, ranging in age from 26 to 50, with a mix of professions and tech familiarity from "somewhat familiar" to "extremely familiar." We specifically screened for the primary tech decision-makers in their households, and intentionally excluded current smart display owners and industry insiders to capture an unbiased, first-time buyer experience. A minimal sketch of this screening logic appears below.
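For illustration, here is how these screening criteria could be encoded as a filter over screener responses; the field names and sample respondents are hypothetical, not our actual screener instrument.

```python
# Hypothetical screener filter: field names and sample respondents are
# illustrative, not the study's actual screener data.

def qualifies(respondent: dict) -> bool:
    """Return True if a screener respondent matches the target profile."""
    return (
        26 <= respondent["age"] <= 50
        and respondent["household_tech_decision_maker"]
        and respondent["shops_amazon_electronics"]
        and not respondent["owns_smart_display"]      # exclude current owners
        and not respondent["tech_industry_insider"]   # exclude industry insiders
    )

candidates = [
    {"age": 34, "household_tech_decision_maker": True,
     "shops_amazon_electronics": True, "owns_smart_display": False,
     "tech_industry_insider": False},
    {"age": 29, "household_tech_decision_maker": True,
     "shops_amazon_electronics": True, "owns_smart_display": True,
     "tech_industry_insider": False},
]
recruits = [c for c in candidates if qualifies(c)]  # keeps only the first candidate
```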

TASK

Our 60-minute sessions were structured around 5 key stages.

Stage #1

Baseline Knowledge

Before showing them the prototype, we simply asked what they already knew about Echo Show and Alexa.

Goal
Establish a baseline to determine if users have legacy mental models or are pure novices.

“What do you know about the Echo Show products?”


“What do you know about Alexa?”

[Stage 2 task flow: A/B Test (Images-Only) vs. A/B Test (Text-Only), followed by Product Grouping]

Stage #2

Discoverability & Mental Models (Tasks 1 & 2)

This is where we implemented our A/B test. We showed users either an 'Images-Only' or 'Text-Only' version of the search results page, and then asked them to group the products.

Goal
Identify which cues (visual vs. textual) drive product differentiation and how mental models evolve with more information.
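The write-up does not document the exact assignment procedure, but a counterbalanced split of the 8 participants between the two conditions might look like this sketch (participant IDs and the random seed are illustrative assumptions):

```python
import random

PARTICIPANTS = [f"P{i}" for i in range(1, 9)]   # 8 session participants
CONDITIONS = ["Images-Only", "Text-Only"]

def assign_conditions(participants, conditions, seed=7):
    """Shuffle participants, then alternate conditions for an even split."""
    rng = random.Random(seed)   # fixed seed keeps the assignment reproducible
    shuffled = rng.sample(participants, len(participants))
    return {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}

assignment = assign_conditions(PARTICIPANTS, CONDITIONS)
print(assignment)  # 4 participants per condition, order randomized
```

Alternating conditions over a shuffled list guarantees a 4/4 split while still randomizing which participant sees which version.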

Stage #3

Scenario-Based Shopping (Task 4)

We gave them a specific task: find a smart display for the kitchen with 'Fire TV built-in.'

Goal
Test feature validation and information hierarchy.

Stage #4

Brand & Value Comprehension (Tasks 3 & 5)

We observed whether users noticed the new 'Alexa+' branding as they shopped. Once they did, we asked them to explain it.

Goal
Evaluate the clarity and comprehension of Alexa+ functionality, pricing plans, and subscription conditions.

Stage #5

Post-Test Questionnaire

We used a mix of formats, including Likert-scale questions to measure perceived clarity and difficulty, and multiple-choice questions to identify specific confusion points.

Goal
Quantify user perception of brand clarity, product distinctness, and overall shopping confidence.
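As an illustration of how the Likert portion of this questionnaire could be scored, here is a minimal sketch; the question labels and response values are made-up placeholders, not study data.

```python
from statistics import mean, stdev

# Placeholder 5-point Likert responses (1 = strongly disagree, 5 = strongly
# agree); labels and values are illustrative, not the study's actual data.
responses = {
    "The Alexa+ branding was clear":       [2, 3, 2, 4, 1, 3, 2, 2],
    "The models were easy to tell apart":  [3, 2, 4, 3, 2, 3, 3, 2],
}

for question, scores in responses.items():
    print(f"{question}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}, n={len(scores)}")
```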

DATA COLLECTION

We collected three categories of data across the sessions (a minimal analysis sketch follows this list):

1. Quantitative Performance Metrics

2. Qualitative Behavioral Insights

3. Self-Reported Perceptions (Post-Test)
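Here is one way the quantitative performance metrics could be aggregated from session logs; the record structure, task name, and values are hypothetical placeholders, not our results.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    participant: str
    task: str
    completed: bool
    seconds: float   # time on task

# Hypothetical session records for one task.
results = [
    TaskResult("P1", "find_fire_tv_model", True, 94.0),
    TaskResult("P2", "find_fire_tv_model", False, 210.0),
    TaskResult("P3", "find_fire_tv_model", True, 131.5),
]

def completion_rate(rows, task):
    """Fraction of participants who completed the given task."""
    matching = [r for r in rows if r.task == task]
    return sum(r.completed for r in matching) / len(matching)

def mean_time(rows, task):
    """Average time on task across all attempts, completed or not."""
    times = [r.seconds for r in rows if r.task == task]
    return sum(times) / len(times)

print(f"completion: {completion_rate(results, 'find_fire_tv_model'):.0%}")
print(f"mean time:  {mean_time(results, 'find_fire_tv_model'):.1f}s")
```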

CLASSIFICATION OF USABILITY ISSUES

Severity Level 1 – Prevents completion of a task

Severity Level 2 – Creates significant delay and frustration

Severity Level 3 – Has a minor effect on usability

Severity Level 4 – Subtle problem; points to a future enhancement
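To make the scale concrete, here is a small sketch of how logged issues might be tagged and triaged against these four levels; the issue entries are placeholders, not the study's actual findings.

```python
# Severity scale from the classification above.
SEVERITY = {
    1: "prevents completion of a task",
    2: "creates significant delay and frustration",
    3: "has a minor effect on usability",
    4: "subtle problem, points to a future enhancement",
}

# Placeholder issue log; real entries would come from session notes.
issues = [
    {"id": "ISSUE-03", "severity": 3, "summary": "placeholder"},
    {"id": "ISSUE-01", "severity": 1, "summary": "placeholder"},
    {"id": "ISSUE-02", "severity": 2, "summary": "placeholder"},
]

# Triage: most severe (level 1) first.
for issue in sorted(issues, key=lambda i: i["severity"]):
    print(f"{issue['id']} [level {issue['severity']}: {SEVERITY[issue['severity']]}]")
```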

FINDINGS – RQ1

Model Differentiation

How accurately can users differentiate between Echo Show models (5, 5 Kids, 8, 11, 15, and 21) based on search results and initial product detail page exposure?

Usability Issue #1

Usability Issue #2

FINDINGS – RQ2

Alexa+ Discoverability

How effectively do users identify and understand Alexa+ as a feature of Echo Show devices when viewing search results page elements, including images, titles, badges, and brief descriptions?

Usability Issue #3

Usability Issue #4

REFLECTION

  1. Accommodating Diverse Information Perception

    Every user perceives and processes information differently. Some individuals primarily scan data tables to comprehend product specifications, while others rely heavily on images to imagine usage scenarios and grasp product functions. Because a platform like Amazon serves millions of diverse users, the interface must present information through multiple formats to accommodate these varying cognitive styles and ensure all users can easily understand the product.

  2. Adapting Moderation Styles for Participant Comfort

    Just as users process information differently, they also respond differently to the testing environment. Moderation techniques must be highly adaptable. While some participants are naturally vocal, others may struggle with the think-aloud protocol and require gentle, strategic prompting to articulate their thoughts. The most critical responsibility of the moderator is to establish a relaxed environment where participants feel completely comfortable, consistently reinforcing the core principle that we are evaluating the product, not the user.

  3. The Limits of Remote Testing and Behavioral Precision

    Remote testing inherently restricted our ability to observe precise user behaviors across all tasks. We lacked the behavioral precision to track exactly where users were looking on the screen at any given moment. Additionally, we could not see where their fingers were pointing or tapping, making it difficult to accurately calculate swipe times and fully understand their physical interactions with the interface. In the future, supplementing remote studies with targeted in-person sessions or utilizing eye-tracking and interaction-mapping tools would allow us to capture this essential granular data.

  4. Omnichannel and Cross-Device Validation

    We strictly scoped this study to the Amazon Mobile App, where limited screen real estate forces an aggressive information hierarchy. However, the modern shopping journey is rarely confined to a single device. A critical next step would be validating these findings in desktop environments. Testing how user behavior, visual comparison strategies, and feature discoverability shift across different screen sizes and interaction modes (touch vs. click) is essential for a holistic omnichannel UX strategy.

Welcome to connect with me!

    We strictly scoped this study to the Amazon Mobile App, where limited screen real estate forces an aggressive information hierarchy. However, the modern shopping journey is rarely confined to a single device. A critical next step would be validating these findings in desktop environments. Testing how user behavior, visual comparison strategies, and feature discoverability shift across different screen sizes and interaction modes (touch vs. click) is essential for a holistic omnichannel UX strategy