The AI Brain Behind Ingredient Transparency

Overview:

This is a continuation of my ingredient interpreter project, sharing recent updates, refinements, and UX-focused design shifts. Read the original post.

Since publishing that post, we’ve focused on:

  • Reusable Design System: Built out Figma components to help deliver ingredient insights consistently across different platforms.

  • Structured Data Pipeline: Ingested cosmetic ingredient data from multiple sources and transformed it into a reliable schema, designed to support prompt engineering and future model training.

  • Sharper UX Copy: Tweaked the language to strike the right balance between scientific precision and everyday clarity.


What’s New

  • Prompting ChatGPT with our ingredient dataset to simulate future model behavior and refine tone early

  • Testing prompts for real-world questions around actives, safety, and product context

  • Structuring 80+ ingredients into clean JSON with functions, aliases, and safety notes

  • Designing chat-focused UX flows with fallback strategies for unknown or debated ingredients (see the sketch after this list)
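Because fallback behavior is easy to get wrong, here’s a minimal sketch of the idea in Go (the dataset contents and function names are illustrative, not our production code):

    package main

    import "fmt"

    // Insight is a pared-down stand-in for our structured ingredient data.
    type Insight struct {
        Name    string
        Summary string
        Debated bool // true when the science is genuinely contested
    }

    // dataset stands in for the 80+ structured ingredients mentioned above.
    var dataset = map[string]Insight{
        "niacinamide": {Name: "niacinamide", Summary: "A form of vitamin B3 commonly used to support the skin barrier."},
    }

    // lookup returns a cautious default instead of letting the model guess.
    func lookup(name string) Insight {
        ins, ok := dataset[name]
        if !ok {
            return Insight{Name: name, Summary: "We don't have vetted data on this ingredient yet."}
        }
        if ins.Debated {
            return Insight{Name: name, Summary: "Research on this ingredient is mixed; we present both sides rather than a verdict."}
        }
        return ins
    }

    func main() {
        fmt.Println(lookup("niacinamide").Summary)
        fmt.Println(lookup("mystery-extract").Summary) // falls back gracefully
    }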

View a deeper dive into the latest updates here →


Defining the Output Format

To make the system usable for both consumers and professionals, we defined a structured schema for ingredient insights, focusing on a format that balances technical depth with readability.
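As a rough illustration (the field names are ours for this sketch, not a final spec), each insight might serialize along these lines:

    package schema

    // IngredientInsight is an illustrative shape for one insight; the real
    // schema carries the functions, aliases, and safety notes described above.
    type IngredientInsight struct {
        Name       string   `json:"name"`
        Aliases    []string `json:"aliases"`     // e.g. INCI name vs. marketing name
        Functions  []string `json:"functions"`   // e.g. "humectant", "emollient"
        Summary    string   `json:"summary"`     // plain-language explanation
        SafetyNote string   `json:"safety_note"` // hedged, source-backed guidance
        Sources    []string `json:"sources"`     // where each claim comes from
    }

Keeping safety notes as hedged, sourced text rather than a single score mirrors the copy balance we’re aiming for.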


MVP Architecture: Keon’s Local Prototype

Keon spun up a local dev stack using Ollama, which acts as the runtime for our language models and lets us prototype quickly and compare LLM output across prompt types.

On the frontend, we’re using a lightweight React app where users can input ingredient names. These queries hit a Gin-powered backend API, which connects to the Ollama LLM and returns a clean JSON response.

React Frontend → Gin REST API → Ollama LLM Engine

Example Output

[Screenshot: sample JSON response from the prototype]
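To make that flow concrete, here’s a minimal sketch of the Gin → Ollama hop. It assumes Ollama’s local /api/generate endpoint; the route, model name, and prompt are placeholders rather than Keon’s actual code:

    package main

    import (
        "bytes"
        "encoding/json"
        "net/http"

        "github.com/gin-gonic/gin"
    )

    // ollamaReq and ollamaResp mirror Ollama's /api/generate request/response.
    type ollamaReq struct {
        Model  string `json:"model"`
        Prompt string `json:"prompt"`
        Stream bool   `json:"stream"`
    }

    type ollamaResp struct {
        Response string `json:"response"`
    }

    func main() {
        r := gin.Default()
        r.GET("/api/ingredient", func(c *gin.Context) {
            name := c.Query("name") // e.g. /api/ingredient?name=niacinamide

            body, _ := json.Marshal(ollamaReq{
                Model:  "llama3", // placeholder model name
                Prompt: "Explain the cosmetic ingredient " + name + " as JSON.",
                Stream: false,
            })
            resp, err := http.Post("http://localhost:11434/api/generate",
                "application/json", bytes.NewReader(body))
            if err != nil {
                c.JSON(http.StatusBadGateway, gin.H{"error": err.Error()})
                return
            }
            defer resp.Body.Close()

            var out ollamaResp
            if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
                c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
                return
            }
            c.JSON(http.StatusOK, gin.H{"ingredient": name, "insight": out.Response})
        })
        r.Run(":8080") // the React dev server proxies requests here
    }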

Sourcing Data

While Keon benchmarks model performance on classification and clarity, I’ve been focused on the data pipeline: defining how data enters the system and which sources we trust.

We’ve prioritized sources that are both scientifically rigorous and user-readable. The bigger vision: combine strong UX with scientific rigor, and lay the groundwork for enterprise-grade features like versioning, observability, trust indicators, and contextual filtering.
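To hint at what that groundwork could look like in the data model, here’s one hypothetical shape for a versioned, source-tracked entry (every field name here is illustrative):

    package schema

    // SourceRef records where a claim came from, for future trust indicators.
    type SourceRef struct {
        Name      string `json:"name"`      // e.g. a peer-reviewed database
        URL       string `json:"url"`
        Retrieved string `json:"retrieved"` // ISO date the data was pulled
    }

    // VersionedEntry wraps an ingredient record so edits stay auditable.
    type VersionedEntry struct {
        IngredientID string      `json:"ingredient_id"`
        Version      int         `json:"version"`    // bumped on every data change
        TrustTier    string      `json:"trust_tier"` // e.g. "peer-reviewed", "industry"
        Sources      []SourceRef `json:"sources"`
    }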

Here’s a screenshot of the early frontend where it all comes together.


Phases:

Phase 1: Foundation + Setup (Current Phase)

  • Set up the system to work with language models and trusted data sources.

  • Tested the wording to make sure it’s easy for anyone to understand.

  • Kept the design clear, consistent, and easy to use for all users.

Phase 2: Testing + Iteration

  • Begin user testing with real ingredient lists

  • Test upload and paste workflows

  • Build a feedback loop for trust signals to guide future improvements

Future Plans

  • Layer in user profiles (e.g., acne-prone, sensitive skin, rosacea)

  • Enable “smart” questions (e.g., “Is this pregnancy-safe?”)

  • Develop a Chrome extension for ingredient popovers while browsing
