The AI Brain Behind Ingredient Transparency

 

TL;DR:

We’re continuing to build an AI-powered ingredient interpreter for skincare, designed to help users decode formulations with scientific precision and user-first language. This update covers our latest design decisions, schema work, and GenAI behavior prototypes.

Read the original post.


 

What’s New

  • Structured a skincare AI assistant using tagged ingredient data and strict, no-guess prompt rules.

  • Tested with 50 real-world queries to validate accuracy, logic handling, and tone compliance.

  • Achieved 86% accuracy and 96% strict compliance, with clear fallback behavior on missing or risky data.

  • Failures exposed edge cases such as missing fields, unclear logic filters, and tone drift, which are now feeding the next dataset upgrade.
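The no-guess rule above boils down to a simple guard: when a vetted field is missing, the assistant declines rather than speculates. Here is a minimal Go sketch of that behavior; the struct fields and fallback wording are illustrative, not the project's actual schema.

```go
package main

import "fmt"

// Ingredient is a hypothetical slice of the tagged dataset; field names
// are illustrative, not the project's actual schema.
type Ingredient struct {
	Name       string
	Function   string
	SafetyNote string // empty when no vetted note exists
}

// answer enforces the no-guess rule: when a vetted field is missing,
// return an explicit fallback instead of letting the model speculate.
func answer(ing Ingredient) string {
	if ing.SafetyNote == "" {
		return fmt.Sprintf("No vetted safety data is available for %s yet.", ing.Name)
	}
	return fmt.Sprintf("%s (%s): %s", ing.Name, ing.Function, ing.SafetyNote)
}

func main() {
	fmt.Println(answer(Ingredient{Name: "Niacinamide", Function: "brightening", SafetyNote: "Generally well tolerated."}))
	fmt.Println(answer(Ingredient{Name: "Unknownium", Function: "unknown"}))
}
```

In practice the same guard runs at the prompt level, but encoding it in the serving layer keeps fallback behavior testable independent of the model.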

View a deeper dive into the latest updates →


Defining the Output Format

To make the system usable for both consumers and professionals, we defined a structured schema for ingredient insights that balances technical depth with readability.


MVP Architecture: Keon’s Local Prototype

Keon spun up a local dev stack using Ollama to prototype and benchmark LLM output across different prompt styles. We’re using:

  • Frontend: Lightweight React app for rapid input + result loop.

  • API Layer: Gin-powered REST API connecting to Ollama.

  • LLM Runtime: Ollama serving locally fine-tuned models for privacy and iteration speed.

Request Flow

React Frontend → Gin REST API → Ollama LLM Engine


Why It Works: Fast iterations, low-latency prototyping, and versionable LLM behavior that adapts to both consumer questions and professional use cases.
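At its core, the API layer's job is to wrap a user question in the JSON body that Ollama's /api/generate endpoint expects and relay the reply. A minimal stdlib sketch of that wrapping step (Gin routing omitted; the model name is a placeholder):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// genRequest mirrors the request body of Ollama's /api/generate endpoint.
type genRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// buildPayload wraps a user question in the JSON Ollama expects.
// "llama3" is a placeholder; swap in whichever model is served locally.
func buildPayload(question string) ([]byte, error) {
	return json.Marshal(genRequest{Model: "llama3", Prompt: question, Stream: false})
}

func main() {
	payload, err := buildPayload("What does niacinamide do?")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload))
	// A Gin handler would POST this to http://localhost:11434/api/generate
	// and relay the "response" field back to the React frontend.
}
```

Because the payload builder is a pure function, prompt-style variants can be benchmarked by swapping the prompt template without touching the HTTP plumbing.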


Sourcing Data

While Keon stress-tests prompt consistency, I’m curating a vetted data pipeline, focusing on medically reviewed, user-readable sources. Our ingestion process prioritizes:

  • Evidence-based summaries

  • INCI-standard nomenclature

  • Contextual safety notes (e.g., photo-sensitivity, pregnancy concerns)
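Normalizing incoming names to INCI-standard nomenclature can start as a simple alias lookup at ingestion time. A hypothetical sketch, with a deliberately tiny alias table standing in for full INCI reference data:

```go
package main

import (
	"fmt"
	"strings"
)

// inciAliases is a tiny illustrative lookup table; the real ingestion
// pipeline would draw on full INCI reference data.
var inciAliases = map[string]string{
	"vitamin b3": "Niacinamide",
	"vitamin e":  "Tocopherol",
	"vitamin c":  "Ascorbic Acid",
}

// normalize maps a common or marketing name to its INCI-standard form,
// passing unknown names through unchanged (trimmed) for manual review.
func normalize(name string) string {
	trimmed := strings.TrimSpace(name)
	if inci, ok := inciAliases[strings.ToLower(trimmed)]; ok {
		return inci
	}
	return trimmed
}

func main() {
	fmt.Println(normalize("Vitamin B3")) // Niacinamide
	fmt.Println(normalize("Squalane"))   // Squalane (no alias; kept as-is)
}
```

Passing unknown names through rather than guessing mirrors the assistant's no-guess rule: unmapped entries surface for human review instead of being silently renamed.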

 

The long-term vision: ingredient insights with traceability, explainability, and trust flags baked in.

 

Here’s a screenshot of the early frontend where it all comes together.


Phases:

Phase 1: Testing + Iteration (Current Phase)

  • Integrate with an LLM or retrieval-augmented model

  • Conduct usability testing on language clarity and risk phrasing

  • Align outputs with accessibility and enterprise design systems

Phase 2: User Testing + Feedback

  • Begin user testing with real ingredient lists

  • Test upload and paste workflows

  • Build a trust-feedback loop to guide future improvements

Future Plans

  • Layer in user profiles (e.g., acne-prone, sensitive skin, rosacea)

  • Enable “smart” questions (e.g., “Is this pregnancy safe?”)

  • Develop a Chrome extension for ingredient popovers while browsing

Previous

Can AI Explain Your Moisturizer? Ours Can, Almost…

Next

Building a Science-First Ingredient Interpreter