
AI/ML    B2B    ENTERPRISE    MANUFACTURING

Refining AI for Manufacturing Excellence with User Feedback

An AI tool for anomaly detection faced challenges with accuracy and user trust. I was responsible for designing an AI-to-human feedback loop experience to improve user trust and engagement.

*  Due to NDA, some details have been simplified or omitted.

[Image: Anomaly Trends]
OVERVIEW

In the fast-paced world of manufacturing, quality issues can cost millions in warranties and recalls.


Our team developed an AI/ML tool to catch anomalies during production, but early iterations struggled to align with real-world outcomes. Users were skeptical of its predictions and relied on inefficient manual workflows. A lack of trust in AI jeopardized its adoption and impact.

 

I was responsible for designing an AI-to-human feedback loop experience that would improve user trust and engagement while also increasing AI accuracy.

TEAM

Data scientists, PO, PM, tech anchor, software engineers, data engineers

MY ROLE

Lead Product Designer

TIMELINE

2022-2023

OUTCOMES

Increased AI trust and engagement

The Problem

  • AI Accuracy Issues: Early models produced many false positives (flagging normal parts) and false negatives (missing actual defects).
  • Inefficient Feedback Loop: Feedback was manual, involving emails, spreadsheets, and guesswork, with no way to track outcomes in the AI tool.

The feedback loop happened outside the AI tool, via manual email chains and Excel and Word files.

The Solution

  • Integrated Feedback: Users could now log outcomes directly in the AI tool, streamlining anomaly investigations.
  • Automation: Emails were replaced with auto-notifications triggered by system updates, reducing reliance on manual processes.
  • Transparent Metrics: A dashboard showcased how user feedback influenced AI accuracy, reinforcing trust.

In-app feedback integration between users and AI
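To make the integrated feedback idea concrete, here is a minimal sketch of what an in-app feedback record might look like. The names (AnomalyFeedback, submit_feedback, the field set) are hypothetical illustrations under simplified assumptions, not the actual product API.

```python
# A minimal sketch of an in-app feedback record. All names here
# (AnomalyFeedback, submit_feedback, field names) are hypothetical
# illustrations, not the production API.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AnomalyFeedback:
    anomaly_id: str      # the AI-flagged anomaly under investigation
    engineer_id: str     # who investigated it
    outcome: str         # "true_positive" or "false_positive"
    notes: str           # free-text investigation findings
    submitted_at: str    # ISO-8601 timestamp


def submit_feedback(anomaly_id: str, engineer_id: str,
                    outcome: str, notes: str) -> dict:
    """Package an investigation outcome so it can be logged in the tool
    and later joined back to the model's training data."""
    record = AnomalyFeedback(
        anomaly_id=anomaly_id,
        engineer_id=engineer_id,
        outcome=outcome,
        notes=notes,
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)  # e.g., persisted via the tool's backend


# Example: an engineer confirms a flagged part really was defective.
print(submit_feedback("ANOM-1042", "eng-a", "true_positive",
                      "Confirmed surface crack on inspection."))
```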

Design Approach

DESIGN SOLUTION

The design solution needed to integrate feedback seamlessly into the user workflow while building trust.

COLLABORATION

I worked closely with developers, product owners, and data scientists to determine the best ways user feedback could refine the ML model.

DESIGN PRINCIPLES

My goal was to design for transparency (towards users) and accountability (towards the AI).

SUCCESS METRICS
  • Increase User Feedback Participation Rate (how many engineers provide feedback on AI predictions)

  • Adoption Rate of Feedback Workflow (how many plants/engineers are using the new feedback system compared to the old manual process)

  • AI Performance Metrics (how well the AI system is performing and improving over time with in-app user feedback loop)

  • Business Value of Reduction in Costly Quality Issues
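To illustrate the first three metrics, here is a rough sketch of how they could be computed from logged feedback records. The field names and sample numbers are invented for illustration; the real definitions lived in the product's analytics.

```python
# A rough sketch of the success metrics above, computed from logged
# feedback records. Sample data and field names are invented.
feedback_log = [
    {"engineer": "eng-a", "outcome": "true_positive"},
    {"engineer": "eng-b", "outcome": "false_positive"},
    {"engineer": "eng-a", "outcome": "true_positive"},
]
flagged_anomalies = 10   # total AI flags shown in the same period
active_engineers = 5     # engineers who saw at least one flag

# Participation rate: share of flags that received human feedback.
participation_rate = len(feedback_log) / flagged_anomalies

# Adoption rate: share of engineers using the in-app workflow.
adoption_rate = len({f["engineer"] for f in feedback_log}) / active_engineers

# AI precision on human-reviewed flags: confirmed defects / reviewed flags.
true_pos = sum(f["outcome"] == "true_positive" for f in feedback_log)
precision = true_pos / len(feedback_log)

print(f"participation={participation_rate:.0%}, "
      f"adoption={adoption_rate:.0%}, precision={precision:.0%}")
```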

Research Insights

Engineers flagged anomalies but had no way to track investigation outcomes in the AI tool.

Old experience

Research found that engineers wanted:

  • Ease of Use: A way to record outcomes without disrupting workflows.

  • AI Value: Proof that the tool improved with their feedback.

Solution

Streamlined anomaly management by making investigations faster, clearer, and more centralized.

Engineers can now:

  • Flag anomalies and request part holds directly in the AI tool.

  • Track investigation status through the "Issue Management" dashboard.

  • Automatically notify colleagues with system-triggered emails.

 

Feedback forms allow users to indicate whether flagged anomalies were true or false, refining AI predictions.


New experience
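A simplified sketch of that refinement loop follows, assuming reviewed anomalies are paired with binary labels for the next model iteration. The function build_retraining_set and the feature fields are hypothetical stand-ins, not the team's actual pipeline.

```python
# A simplified sketch of the feedback loop: true/false labels from the
# forms become supervised examples for the next model iteration. The
# pipeline shown here is a stand-in, not the team's actual one.
def build_retraining_set(anomalies: list[dict],
                         feedback: dict[str, str]) -> list[tuple[dict, int]]:
    """Pair each human-reviewed anomaly with a binary label:
    1 = confirmed defect, 0 = false alarm. Unreviewed flags are skipped."""
    labeled = []
    for anomaly in anomalies:
        outcome = feedback.get(anomaly["id"])
        if outcome == "true_positive":
            labeled.append((anomaly["features"], 1))
        elif outcome == "false_positive":
            labeled.append((anomaly["features"], 0))
    return labeled


anomalies = [
    {"id": "ANOM-1", "features": {"vibration": 0.9}},
    {"id": "ANOM-2", "features": {"vibration": 0.4}},
    {"id": "ANOM-3", "features": {"vibration": 0.7}},  # not yet reviewed
]
feedback = {"ANOM-1": "true_positive", "ANOM-2": "false_positive"}
print(build_retraining_set(anomalies, feedback))
# -> [({'vibration': 0.9}, 1), ({'vibration': 0.4}, 0)]
```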

Key features:

  • Feedback Forms: Engineers provide input on flagged anomalies.
  • Metrics Dashboard: Tracks AI accuracy and user feedback impact.
  • Automated Notifications: Keeps teams informed without manual emails.
  • Status Controls: Users manage anomalies through "Requested," "Open," and "Closed" states.
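As a rough sketch of how the status controls and automated notifications could fit together, here is a minimal state machine over the three statuses. The transition rules and the notify_team hook are assumptions for illustration, standing in for the system-triggered emails.

```python
# A minimal sketch of the "Requested" / "Open" / "Closed" status controls,
# with a notification hook standing in for system-triggered emails.
# Transition rules and notify_team are assumptions for illustration.
from enum import Enum


class Status(Enum):
    REQUESTED = "Requested"  # engineer asks for a part hold / investigation
    OPEN = "Open"            # investigation in progress
    CLOSED = "Closed"        # outcome recorded via the feedback form


ALLOWED = {
    Status.REQUESTED: {Status.OPEN},
    Status.OPEN: {Status.CLOSED},
    Status.CLOSED: set(),
}


def notify_team(message: str) -> None:
    print(f"[auto-notification] {message}")  # stand-in for email trigger


def transition(current: Status, new: Status) -> Status:
    """Move an anomaly to a new status and notify the team automatically."""
    if new not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.value} to {new.value}")
    notify_team(f"Anomaly status changed: {current.value} -> {new.value}")
    return new


status = transition(Status.REQUESTED, Status.OPEN)
status = transition(status, Status.CLOSED)
```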

Impact

  • Reduced false positives, improving investigation efficiency.

  • Increased user engagement as engineers saw their feedback drive AI refinement.

  • Demonstrated ROI with metrics linking AI flags to warranty outcomes.

  • Automated processes saved time and minimized manual errors.

Lessons Learned - Designing for AI

  • Explainability is Key: Users need to understand how AI decisions are made.
  • Continuous Feedback: AI must evolve with real-world input to remain effective.
  • Empathy Drives Design: Listening to users uncovers the true barriers to adoption.
