AI/ML B2B ENTERPRISE MANUFACTURING
Refining AI Anomaly Detection with User Feedback
Brief: Designed a feedback system for an AI anomaly detection tool to empower users and improve model accuracy.
Company: Major US automaker
Product: AI/ML anomaly detection tool
Role: Sole Product Designer
Goal: Refine AI model accuracy, increase user trust, and improve anomaly detection.
Users: Global plant engineers.
Impact: Increased AI trust and engagement
Intro - The Challenge in Automotive Manufacturing
An AI tool for anomaly detection aimed to save millions but faced challenges with accuracy and user trust.
-
In the fast-paced world of automotive manufacturing, quality issues can cost millions in warranty claims. Early detection is essential to avoid costly recalls.
-
The team developed an AI/ML tool to catch anomalies during production, but early ML model iterations struggled to align with real-world outcomes.
-
Engineers were skeptical of its predictions and relied on inefficient manual workflows.
A lack of trust in AI jeopardized its adoption and impact.
Visual: Manufacturing plant with annotations showing where the AI tool is used.

The Problem - Challenges with Early AI Models
Early ML models produced unreliable flags, and fragmented workflows made it impossible to learn from real-world outcomes.
AI Technology Problems:
-
AI Accuracy Issues: As an early ML model, it produced many false positives (flagging normal parts as bad) and false negatives (missing actual defects). Capturing user feedback on each flag would help refine the model and reduce both kinds of error.
-
AI Black Box Problem: The model could identify where an anomaly occurred but not why, so engineers spent additional time investigating whether each flag was a real problem. Only the engineer knew the outcome of that investigation, and there was no way for them to feed it back.
-
Inefficient Feedback Loop: Closing the loop required multiple tools and manual communication, with no way to track investigation outcomes inside the AI tool.
Without a feedback mechanism for refining AI accuracy, and with inefficient workflows and limited user trust, the tool was seen as a burden rather than a solution.
Visual: Side-by-side visual of old workflows: manual email chains, Excel sheets, and disconnected systems.
The Solution - Feedback Loop
Designed a feedback mechanism that empowered users, streamlined workflows, and refined AI predictions.
-
Integrated Feedback: Users could now log investigation outcomes directly in the AI tool, streamlining anomaly investigations (a minimal data sketch follows this section).
-
Automation: Emails were replaced with auto-notifications triggered by system updates, reducing reliance on manual processes.
-
Transparent Metrics: A dashboard showcased how user feedback influenced AI accuracy, reinforcing trust.
Users and AI are now collaborators in quality control.
Visual: New workflow diagram showing seamless feedback integration and transparency.
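To make the integrated feedback concrete, here is a minimal Python sketch of the kind of record the tool could capture when an engineer closes an investigation. The class, field, and function names are illustrative assumptions, not the actual system's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class Verdict(Enum):
    """Engineer's conclusion after investigating a flagged anomaly."""
    TRUE_POSITIVE = "true_positive"    # the flag pointed at a real defect
    FALSE_POSITIVE = "false_positive"  # the flag was raised on a normal part


@dataclass
class AnomalyFeedback:
    """One piece of engineer feedback logged against an AI flag."""
    anomaly_id: str      # the flag the model raised
    engineer_id: str     # who investigated it
    verdict: Verdict     # outcome of the investigation
    notes: str           # free-text context for the data science team
    logged_at: datetime


def to_training_label(feedback: AnomalyFeedback) -> dict:
    """Turn logged feedback into a labeled example for the next model refresh."""
    return {
        "anomaly_id": feedback.anomaly_id,
        "label": 1 if feedback.verdict is Verdict.TRUE_POSITIVE else 0,
    }
```

The design intent is that labeled data accrues as a byproduct of normal investigation work, so the data science team can refine the model without asking engineers for anything extra.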
Approach
My approach focused on integrating feedback seamlessly into the user workflow while building trust:
-
Research Insights: Spoke with plant engineers to understand pain points and existing workflows.
-
Collaboration: Partnered with data scientists and engineers to ensure feedback could refine the ML model.
-
Design Principles: Transparency, empowerment, accountability, feedback integration.
My goal was to design for user transparency and empowerment.
Visual: Diagram of design principles with examples of how they were applied.
Research Insights
Spoke with plant engineers to uncover pain points:
-
Before Production: Engineers flagged anomalies but had no way to track investigation outcomes in the AI tool.
-
After Warranties: No process connected warranty outcomes back to AI predictions, missing valuable feedback.
Found that engineers wanted:
-
Ease of Use: A way to record outcomes without disrupting workflows.
-
AI Value: Proof that the tool improved with their feedback.
Trust comes from listening and responding to user needs.
Visual: Personas for Engineer A (investigation) and Engineer B (testing).
User Goals
What users needed:
-
Accurate predictions with fewer false flags.
-
Seamless feedback within their workflow.
-
Metrics showing the value of AI in real-world outcomes.
Engineers wanted AI they could trust to help them catch real issues, not create more work.
Visual: Personas for Engineer A (investigation) and Engineer B (testing).
Workflow 1 Redesign – Before Issues Escalate
Streamlining Anomaly Management
Engineers can now:
-
Flag anomalies and request part holds directly in the AI tool.
-
Track investigation status through the "Issue Management" dashboard.
-
Automatically notify colleagues with system-triggered emails (see the workflow sketch after this section).
Key feature: Feedback forms let users mark whether flagged anomalies were true or false positives, refining the AI's predictions.
Anomaly investigations are now faster, clearer, and centralized.
Visual: Annotated screenshots of the new anomaly management dashboard, plus a simple old vs. new workflow comparison emphasizing the integrated feedback loop and improved transparency.
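As a rough illustration of Workflow 1, the sketch below shows an anomaly flag turning into a hold request whose status change triggers a notification automatically. The notify_watchers helper, the field names, and the example IDs are hypothetical placeholders, not the tool's actual API.

```python
def notify_watchers(hold: dict) -> None:
    """Stand-in for the notification service triggered by a status change."""
    print(f"[notify] part {hold['part_id']} hold is now {hold['status']}")


def request_hold(anomaly_id: str, part_id: str, requested_by: str) -> dict:
    """Flag an anomaly's part for a hold and let the system notify colleagues."""
    hold = {
        "anomaly_id": anomaly_id,
        "part_id": part_id,
        "requested_by": requested_by,
        "status": "Requested",  # first of the tool's status states
    }
    notify_watchers(hold)  # system-triggered notification replaces the manual email chain
    return hold


# Example with made-up identifiers:
request_hold("AN-102", "P-5589", "engineer_a")
```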
Workflow 2 Redesign – After Warranty Issues
Closing the Loop with Retrospective Feedback
Engineers validate whether AI could have predicted warranty issues:
-
Log warranty outcomes in the AI tool.
-
Link flagged anomalies to specific warranty claims for retrospective analysis.
-
Metrics track correlations between AI predictions and real-world outcomes (see the sketch after this section).
-
Engineers build confidence in the AI as they see predictions align with actual issues.
Users trust what they see: AI predictions improving over time.
Visual: Feedback loop diagram connecting warranty claims back to AI flags, highlighting the retrospective feedback mechanism and its impact.
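Here is a minimal sketch of the retrospective metric described above, assuming warranty claims and AI flags can be joined on a part identifier; the field names and sample data are hypothetical, not the tool's real schema.

```python
def warranty_coverage(flags: list[dict], claims: list[dict]) -> float:
    """Fraction of warranty claims whose part had a prior AI flag."""
    flagged_parts = {f["part_id"] for f in flags}
    if not claims:
        return 0.0
    covered = sum(1 for c in claims if c["part_id"] in flagged_parts)
    return covered / len(claims)


flags = [{"part_id": "A1"}, {"part_id": "B2"}]
claims = [{"part_id": "A1"}, {"part_id": "C3"}]
print(warranty_coverage(flags, claims))  # 0.5: one of two claims had a prior flag
```

Surfacing a number like this on the dashboard is what lets engineers see predictions aligning with actual warranty issues over time.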
Key Features – Empowering Engineers
Features That Drive Trust and Efficiency
-
Feedback Forms: Engineers provide input on flagged anomalies.
-
Metrics Dashboard: Tracks AI accuracy and user feedback impact.
-
Automated Notifications: Keeps teams informed without manual emails.
-
Status Controls: Users manage anomalies through "Requested," "Open," and "Closed" states (sketched after this section).
Empowering users to improve AI, one anomaly at a time.
Visual: Key UI screens with callouts for each feature.
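For the status controls specifically, here is a minimal sketch of how the Requested, Open, and Closed states could be enforced; the allowed transitions are an assumption for illustration, not a specification of the tool.

```python
# Allowed moves between the tool's anomaly states (assumed for illustration).
ALLOWED_TRANSITIONS = {
    "Requested": {"Open"},   # a requested hold gets picked up for investigation
    "Open": {"Closed"},      # the investigation ends with logged feedback
    "Closed": set(),         # closed anomalies stay closed
}


def change_status(current: str, new: str) -> str:
    """Validate a status change; invalid moves raise instead of silently applying."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move an anomaly from {current!r} to {new!r}")
    return new
```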
Impact – Results That Matter
Measurable Improvements Across the Board
-
Integrated an automatic hold system into plant production: parts are now held automatically based on the AI tool's predictions, removing the human-to-human step of requesting holds and increasing plant buy-in and trust (a minimal sketch of the rule follows this section).
-
Reduced false positives, improving investigation efficiency.
-
Increased user engagement as engineers saw their feedback drive AI refinement.
-
Demonstrated ROI with metrics linking AI flags to warranty outcomes.
-
Automated processes saved time and minimized manual errors.
AI adoption soared as trust and value became evident.
Visual: Metrics dashboard showing improved outcomes (e.g., reduced false positives, increased anomaly-to-warranty correlation).
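As a rough sketch of the automatic hold rule mentioned above, a part is held when the model's anomaly score clears a threshold; the score field and cutoff value are illustrative assumptions, not the plant's actual policy.

```python
HOLD_THRESHOLD = 0.9  # hypothetical confidence cutoff, not the plant's real policy


def should_auto_hold(prediction: dict) -> bool:
    """Hold a part automatically when the model's anomaly score is high enough."""
    return prediction["anomaly_score"] >= HOLD_THRESHOLD


print(should_auto_hold({"part_id": "P-5589", "anomaly_score": 0.95}))  # True
```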
Lessons Learned – Designing for AI
What This Project Taught Me
-
Explainability is Key: Users need to understand how AI decisions are made.
-
Continuous Feedback: AI must evolve with real-world input to remain effective.
-
Empathy Drives Design: Listening to users uncovers the true barriers to adoption.
-
Balancing Needs: Reconciling technical constraints with user needs was a constant challenge.
Great AI design is about more than tech—it’s about trust.
Quote or insight from an engineer involved in the project.
Future Vision – Scaling for Success
A Foundation for Scalable, User-Centric AI
-
Enhance feedback visualization to deepen user understanding.
-
Expand automation to further streamline workflows.
-
Roll out the solution to more plants, increasing adoption and ROI.
This is just the beginning of an AI transformation.
Visual: Roadmap for future improvements.