AI Cell Detection Tool

Let AI detect — you decide what stays.
A digital tool for life science researchers to detect cells and tissues in microscopy images.
I followed a simple loop (Research → Analysis → Pain Points → Interaction Design → Alignment → Solution) that improved clarity and built trust at each step.
The Challenge
Researchers spend hours manually identifying cells—time better spent interpreting data.
User Problem
  • Manual detection is slow, tedious, and error-prone.
  • Complex cell images make accuracy challenging.
  • Valuable research time is lost to repetitive labeling.
Business Problem
  • Need to streamline actions into single‑click interactions
  • Maintain consistency with branding while improving usability
  • Faster turnaround to deliver insights
Role: Interaction & Product Design
Time: 3 months
Tools: Adobe XD, Miro

Solution

Main features

AI Detection
Split View
Comparison
Image Library
Unified Review
AI cell detection interface
AI highlights detected cells with confidence scores. Draw to refine, pick or reassign labels inline, and review by cell type.
AI detection with bounding boxes and confidence labels
Split view
Compare raw input and AI-detected output side by side — empowering users to spot missed or over-detected regions.
Comparison view
Toggle between raw and processed views — see how filters like denoise or contrast impact visibility of cells.
Image library
Collapsible folder tree with status dots; instantly see which images have AI detection applied.
Unified review
Detailed results for the selected image—cell type, confidence, and size—with sort and export for deeper review.

Process

Understanding Users

Interviews and lab shadowing across three teams; notes condensed into a task-frequency persona and journey.

Persona

Persona showing roles, tasks, tools, and pains.
Roles + task frequency → we prioritized legible overlays, inline edits, and low eye-travel layouts.

Journey map

Journey from import to export with pain points highlighted.
We mapped the flow (import → detect → refine → compare → export) and surfaced friction at the comparison step, where results sat away from the image.
This shaped Split View, Unified Review, and the decision to keep results anchored beside the image.

Our Design Approach

From the persona and journey insights, we mapped real lab tasks into screens and states—minimizing context switches and keeping actions next to the image.

Interaction flow

We confirmed the core path—import → detect → refine → compare → export—and removed extra hops by keeping edits and review in the same place.

Heuristic guidelines

On-image edits
Compare in place
Results beside image
Low eye-travel
Palette decisions
Fast recovery

Task-Feature mapping

Prepare & Upload
  • Tasks: Upload folder, select images, delete unwanted
  • Action: Drag & drop → View status → Confirm upload
  • Features: Upload area, image selector, file manager
Check Quality
  • Tasks: Pre-process slides, adjust contrast, clean background
  • Action: Run filters → Preview → Save corrections
  • Features: Adjustment panel, preview window, cleanup tools
Process Slides
  • Tasks: Run AI detection, track progress, view detected results
  • Action: Start detection → Analyze → View overlay
  • Features: Detection engine, progress bar, results viewer
Verify & Correct
  • Tasks: Review AI output, mark missed cells, assign cell type
  • Action: Compare → Edit marks → Confirm labels
  • Features: Markup tools, compare view, label selector
Save & Share
  • Tasks: Save the study, export reports, send corrected cells to AI
  • Action: Save → Export → Train model
  • Features: Save manager, export module, AI training pipeline
By linking user actions with features, we uncovered the core interactions that shaped our initial sketches — ensuring every screen directly reflected real workflow needs.

Sketching the structure

Paper sketches exploring the idea for the navigation
Paper sketches exploring the idea for manually adding a new cell
Paper sketches exploring the idea for the AI detection view
Paper sketches exploring the idea for the cell detection screen's toolbar view
Paper sketches exploring the zoom concept
Paper sketches exploring the idea for compare feature

Mid-fidelity wireframes

Search bar with border contrast increased from 1.71:1 to 3.03:1

Smart UI Decisions

Patterns chosen to reduce eye-travel, clicks, and uncertainty during review.
On-image dropdown

Define the cell type the moment a box is drawn.

What we derived: Labels were assigned sooner with fewer context switches.

Context-aware toolbar

Toolbars adapt to the task—preprocess, detect, or compare.

What we derived: Lower eye-travel and faster task switching.

Global vs Local Zoom

Link zoom across panes or adjust each independently.

What we derived: Flexible overview vs deep inspection without losing the anchor.


Accessible overlay (Palette)

We validated five cross-contrast colors on tough images to keep boxes readable without manual tweaks.
Microscopy image with red cytoplasm background and blue nuclei. AI bounding boxes use Palette A (cyan, lime, amber, violet, orange).
Palette A — Red-dominant
Microscopy image with blue nuclei and red tissue. AI bounding boxes use Palette B (lime, magenta, amber, cyan, orange-red)
Palette B — Red+Blue
Microscopy image with mixed green/red tissue and blue nuclei; AI boxes use Palette C (magenta #FF00AA, pink #FF4081, cyan #00E5FF, violet #8E24AA, blue #1E90FF) for clear contrast.
Palette C — Multicolor
Global palette (15 colors): Blues, Greens, Yellows/Oranges, Pinks/Violets
How we chose these
  • Pre-tested on red, blue, multicolor slides for legibility.
  • High-saturation hues only; neutrals avoided.
  • No white → reserved for the draw tool; avoids glare.
  • Spread across cool/warm so adjacent boxes don’t blend.
How the app picks 5 colors
Edge sample → Score contrast → Space hues → Fill 5 → +1 px solid boost if contrast < 3:1
How this works
  • We start with a vivid global palette (no white) that stays legible across diverse microscopy backgrounds and staining schemes.
  • For each box, we sample the edge (a thin 2–3 px ring) to capture the actual background next to the detected cell.
  • We rank colors by WCAG contrast against that ring and pick the best (sketched in code after this list).
    • Formula: CR = (L₁ + 0.05) / (L₂ + 0.05), where L = 0.2126R + 0.7152G + 0.0722B on linearized sRGB channels; target ≥ 3:1 for thin lines.
  • We keep hues apart (≈40° spacing) so neighboring classes don't look alike; the mapping is deterministic, so colors don't shuffle.
  • If a region still runs dark or noisy, we fail gracefully:
    • +1 px solid boost, keeping total stroke < 5 px; if still < 3:1, add a 0.5 px inner stroke (auto black/white).
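To make the scoring concrete, here is a minimal sketch of the selection logic in Python. It assumes the edge ring has already been averaged into a single 8-bit sRGB color, and the palette subset and the pick_box_colors helper are hypothetical names for illustration; the solid-boost and inner-stroke fallback is only flagged here, not rendered.

```python
import colorsys

def _linear(channel: float) -> float:
    """Linearize one sRGB channel in [0, 1] (WCAG 2.x definition)."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance L = 0.2126R + 0.7152G + 0.0722B on linearized sRGB."""
    r, g, b = (_linear(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1, c2):
    """CR = (L1 + 0.05) / (L2 + 0.05), lighter luminance over darker."""
    l1, l2 = relative_luminance(c1), relative_luminance(c2)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

def hue(rgb):
    """Hue in degrees, used only to keep neighboring box colors apart."""
    h, _, _ = colorsys.rgb_to_hsv(*(v / 255 for v in rgb))
    return h * 360

def hue_gap(a, b):
    d = abs(hue(a) - hue(b))
    return min(d, 360 - d)

def pick_box_colors(edge_rgb, palette, n=5, min_gap=40, min_cr=3.0):
    """Rank palette colors by contrast against the sampled edge ring, keep hues
    roughly min_gap degrees apart, and return the chosen colors plus any that
    still fall under min_cr (candidates for the solid-boost / inner-stroke
    fallback). Deterministic for a fixed palette order."""
    ranked = sorted(palette, key=lambda c: contrast_ratio(c, edge_rgb), reverse=True)
    chosen = []
    for color in ranked:                      # first pass: enforce hue spacing
        if len(chosen) == n:
            break
        if all(hue_gap(color, c) >= min_gap for c in chosen):
            chosen.append(color)
    for color in ranked:                      # second pass: fill remaining slots
        if len(chosen) == n:
            break
        if color not in chosen:
            chosen.append(color)
    low_contrast = [c for c in chosen if contrast_ratio(c, edge_rgb) < min_cr]
    return chosen, low_contrast

# Example (hypothetical values): edge ring averaged from a red-dominant slide.
edge = (150, 30, 40)
palette = [(0, 229, 255), (255, 0, 170), (142, 36, 170),    # cyan, magenta, violet
           (30, 144, 255), (255, 64, 129), (118, 255, 3),   # blue, pink, lime
           (255, 193, 7)]                                    # amber (subset of the 15)
colors, needs_boost = pick_box_colors(edge, palette)
```

Because the palette order and the scoring are fixed, the same image always produces the same color assignment, which is what keeps class colors from shuffling between runs.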
Technical appendix - linearized sRGB, edge-sampling details and examples (Coming Soon)

Conclusion

From

Fragmented workflows and manual verification

To

A unified, intelligent flow where AI speeds validation and color cues clarify every state

Where

Small, well-timed interactions quietly guide focus and build trust with every click

What's next

  • Extend to multi-user and high-volume datasets
  • Add detection analytics for ongoing performance insights
  • Continue refining interface rhythm for complex review sessions