AI Experimentation · Design Systems · 2026

AI-Generated Design Systems: Building a Token-Driven Multi-Brand Architecture

This experiment explores how generative AI can accelerate design system development by automating token extraction, documentation, and component generation. Using Figma MCP and Claude Code, I built a token-driven multi-brand architecture capable of switching brand expressions using a single attribute. The prototype demonstrates how AI can reduce documentation overhead while maintaining a scalable design infrastructure.

Design Systems · AI Workflow · Figma MCP · React / TypeScript · CSS Token Architecture · Multi-Brand
Experiment Overview
01
My Role
Solo design systems architect. Led token strategy, component architecture, AI workflow design, and engineering handoff.
02
Scope
3-layer CSS token system · 3 brand themes · Button, ProductCard, and ProductDiscovery Module · Figma MCP integration.
03
Stack
Figma → JSON export · Claude Code (Figma MCP) · React + TypeScript · CSS Custom Properties · Vercel
04
Key Finding
Documentation time cut from 2–5 hours per component to under 20 minutes. Zero hardcoded values across all three brand implementations.
The Problem

When Sub-Brands Don't Share A System

The experiment started with a real problem: three distinct brand expressions sharing no design infrastructure. Each brand had its own visual language, typography, and color personality — but no shared token architecture to connect them. Designers hardcoded values, developers hardcoded colors, and every refresh meant manual find-and-replace across every file, codebase, and platform.

The question: could AI help reconstruct and scale a design system faster than building it by hand?

Brand 1

Clean American minimalism — Navy, Helvetica Neue, 2px sharp corners

Brand 2

Sophisticated refinement — Warm brown, Canela serif, zero border radius

Brand 3

Performance empowerment — Deep green, Nunito, soft 8px rounded edges

The Common Failure: Each Brand Becomes A Silo

Designers hardcode values → developers hardcode colors → brand refresh → manual find-and-replace across every file, every codebase, every platform

The Approach: Tokens As Data

Brands Share Structure, Not Values

The architecture separates what things are (primitive values) from what they mean (semantic tokens) from how they're used (components). Brands share the structure — they own the values.
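The three-layer split can be sketched in a few lines of TypeScript. This is a hypothetical resolver with illustrative values, not the project's actual code; its point is that components only ever ask for meaning, never for a raw value:

```typescript
// Layer 1: primitives. What things ARE. (Hex values are illustrative.)
const primitives: Record<string, string> = {
  "blue.700": "#1d4ed8",
  "brown.900": "#3f2a1d",
  "neutral.white": "#ffffff",
};

// Layer 2: semantics. What things MEAN, mapped per brand.
// Brands share the token names (structure) but own the primitive they point to.
const semantics: Record<string, Record<string, string>> = {
  "brand-1": { "color.action.primary": "blue.700", "color.surface.card": "neutral.white" },
  "brand-2": { "color.action.primary": "brown.900", "color.surface.card": "neutral.white" },
};

// Layer 3: components. How tokens are USED. A component resolves a meaning
// to a value through the chain; it never holds a raw value itself.
function resolve(brand: string, token: string): string {
  return primitives[semantics[brand][token]];
}
```

The same call, `resolve(brand, "color.action.primary")`, yields a different value per brand, which is exactly what lets one component serve three brand expressions.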

Layer 1: Primitive Values
Shared Structure
Shared Primitives
Space
Consistent 4pt/8pt modular scale
Space-1 through space-12
Grid / Breakpoint
Large Desktop / Desktop / Small Desktop / Tablet / Mobile
Typography Scale
Display / h1–h6 / body / caption / overline · Scale ratios
Motion & Duration
duration.fast / moderate / slow
ease-in / ease-out / ease-in-out
Opacity / Elevation
Opacity level
Drop shadow / blur / spread
Z-Index
Tooltip · Modal · Dropdown · Header · Page content
Brand-Owned Primitive Categories
Typography
Font family, weight, size, line-height, letter-spacing, text-case, type scale
Color
Primary Brand Color · Neutral · Black · Blue · Green…
Border Width
Width
Border Radius
Radius
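As a sketch, a shared primitive scale like the spacing set above could be generated from a single base unit. This assumes a linear 4px progression for space-1 through space-12, which is an illustration of the idea rather than the project's actual scale:

```typescript
// Generate shared spacing primitives from one base unit.
// A linear 4px scale is assumed here for illustration.
const BASE_PX = 4;

function spacingScale(steps: number): Record<string, string> {
  const scale: Record<string, string> = {};
  for (let i = 1; i <= steps; i++) {
    scale[`space-${i}`] = `${BASE_PX * i}px`;
  }
  return scale;
}

const spacing = spacingScale(12); // space-1 … space-12
```

Because the scale is shared structure, every brand consumes identical step names; only brand-owned categories (typography, color, border width, radius) diverge.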
Layer 2: Semantic Values
Semantic Tokens for Product Card Component
color.action.primary
The main CTA color. Named the same in code — valued differently per brand.
Brand 1 → Blue 700 → primitive.blue.700
Brand 2 → Brown 900 → primitive.brown.900
Brand 3 → Red 600 → primitive.red.600
color.surface.card
Card background color. Provides visual separation from page background.
Brand 1 → White → primitive.neutral.white
Brand 2 → Cream-50 → primitive.cream.50
Brand 3 → White-Warm → primitive.white.warm
color.text.primary
Main content text. Product titles, headings.
Brand 1 → Neutral-900 → primitive.neutral.900
Brand 2 → Brown-900 → primitive.brown.900
Brand 3 → Black → primitive.black
font.heading.family
Display typeface for product titles. Scale structure is shared — the voice is brand-owned.
Brand 1 → "Helvetica Neue"
Brand 2 → "Canela"
Brand 3 → "Nunito"
radius.button
Button corner radius. Sharp vs soft signals brand personality.
Brand 1 → 2px (sharp)
Brand 2 → 0px (none)
Brand 3 → 8px (rounded)
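One way to wire these mappings, sketched in TypeScript: emit the semantic layer as CSS custom properties scoped by a data-brand attribute. The token names, primitive aliases, and radius values come from the tables above; the generator itself and the variable-naming convention are assumptions:

```typescript
// Semantic layer per brand: token name → primitive alias or value.
// Names and values mirror the tables above; the shape is illustrative.
const semanticTokens: Record<string, Record<string, string>> = {
  "brand-1": { "color-action-primary": "var(--primitive-blue-700)", "radius-button": "2px" },
  "brand-2": { "color-action-primary": "var(--primitive-brown-900)", "radius-button": "0px" },
  "brand-3": { "color-action-primary": "var(--primitive-red-600)", "radius-button": "8px" },
};

// Emit one [data-brand="…"] rule per brand. Components read the same
// variable names everywhere; the attribute decides which values apply.
function emitSemanticCss(tokens: Record<string, Record<string, string>>): string {
  return Object.entries(tokens)
    .map(([brand, vars]) =>
      `[data-brand="${brand}"] {\n` +
      Object.entries(vars).map(([name, value]) => `  --${name}: ${value};`).join("\n") +
      `\n}`)
    .join("\n\n");
}
```

Switching brands then means changing one attribute on a wrapper element; no component styles are touched.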
Layer 3: Components

Same Component Structure — Three Brands

Brand 1
[Product Image]
Performance Short-Sleeve Shirt
Made from moisture-wicking, quick-drying fabric with plenty of stretch.
$89
Brand 2
[Product Image]
Performance Short-Sleeve Shirt
Made from moisture-wicking, quick-drying fabric with plenty of stretch.
$89
Brand 3
[Product Image]
Performance Short-Sleeve Shirt
Made from moisture-wicking, quick-drying fabric with plenty of stretch.
$89
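A minimal sketch of what "same structure, three brands" means in practice: one template with BEM classes, where the data-brand attribute is the only per-brand difference. The markup and the Product shape are hypothetical:

```typescript
interface Product {
  title: string;
  description: string;
  price: string;
}

// One template for every brand. The DOM never changes; only data-brand does.
// All visual differences resolve through the CSS token chain, so this
// function contains zero brand logic and zero hardcoded values.
function productCard(brand: "brand-1" | "brand-2" | "brand-3", p: Product): string {
  return [
    `<article class="product-card" data-brand="${brand}">`,
    `  <h3 class="product-card__title">${p.title}</h3>`,
    `  <p class="product-card__description">${p.description}</p>`,
    `  <span class="product-card__price">${p.price}</span>`,
    `</article>`,
  ].join("\n");
}
```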
Figma MCP & Claude Integration

How It Actually Works

The workflow treats Figma as the single source of truth, exposed to Claude Code through an MCP server. Tokens are extracted as structured JSON, then auto-generated into a three-layer CSS system. Components are built once — consuming CSS variables, never raw values.

Figma Design File
Single source of truth. Queryable by Claude via MCP connection (GUI + terminal).
Figma MCP Server
Primitives, semantic tokens, and component values extracted from Figma JSON per brand.
Claude Code
Processes extracted tokens and auto-generates structured output across all layers.
Design Token Files
Primitives.json + Semantic.json generated as structured data with resolved values per brand.
React Components
BEM class composition. Zero hardcoded values. All visual decisions resolved through the token chain.
Multi-Brand UI
Brand switching via single data-brand attribute across all 3 token layers.
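The "Design Token Files" step above can be sketched as a small generator: flatten nested primitives.json into --primitive-* custom properties at :root. The file names come from the workflow; the JSON shape, function names, and sample values are assumptions:

```typescript
// A primitives.json-style tree: nested groups with leaf values.
type TokenTree = { [key: string]: string | number | TokenTree };

// Walk the tree, joining group names into a flat CSS variable name.
function flatten(tree: TokenTree, prefix: string[] = []): [string, string][] {
  return Object.entries(tree).flatMap(([key, value]) =>
    typeof value === "object"
      ? flatten(value, [...prefix, key])
      : [[["primitive", ...prefix, key].join("-"), String(value)] as [string, string]]
  );
}

// Emit the primitive layer at :root, where every brand can alias it.
function emitRootCss(primitives: TokenTree): string {
  const lines = flatten(primitives).map(([name, v]) => `  --${name}: ${v};`);
  return `:root {\n${lines.join("\n")}\n}`;
}
```

For example, a tree like `{ blue: { "700": "#1d4ed8" } }` (value illustrative) becomes `--primitive-blue-700: #1d4ed8;` inside `:root`, ready for the semantic layer to reference.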
Components Built

Same Architecture. Three Brand Expressions.

Three components built as proof-of-concept — token-wired from Figma JSON, zero hardcoded values. The test: one component structure, three brand voices, switched by a single data-brand attribute. Live demos below, each loading directly from production.

Buttons · figma-claude-test-henna.vercel.app
Product Card · figma-claude-test-henna.vercel.app/product-card
Product Discovery · figma-claude-test-henna.vercel.app/product-discovery
Multi-Brand · figma-claude-test-henna.vercel.app/multibrand
Engineering Handoff

Design → Developer Handoff

What to share so the workflow is actually useful in production

Step 1: Design Tokens
Primitives
Raw color, spacing & type values. The foundation layer.
→ primitives.json
Semantic Tokens
Mapped meanings: button-bg, focus-ring, surface, text.
→ semantic.json
CSS Custom Properties
Ready-to-use variables for any framework or vanilla CSS.
→ globals.css
Spacing Scale
4px base, 15 steps.
Step 2: Component Specs
Props & API
Define variant, size, and state as the component contract.
variant / size / state
State Matrix
Default, hover, active, focus, disabled — for every variant.
Responsive Rules
Breakpoints and layout shifts.
1 col → 2 col → 4 col
Content Rules
Capitalization, truncation, min/max widths.
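The component contract in Step 2 might look like this in TypeScript. The specific variant and size names are hypothetical; the point is that props, states, and their full matrix are spelled out rather than implied:

```typescript
// Hypothetical Button contract: variant, size, and state as the API.
type ButtonVariant = "primary" | "secondary" | "ghost";
type ButtonSize = "sm" | "md" | "lg";
type ButtonState = "default" | "hover" | "active" | "focus" | "disabled";

interface ButtonProps {
  variant?: ButtonVariant;
  size?: ButtonSize;
  disabled?: boolean;
  children: string;
}

// The state matrix enumerates every (variant × state) pair,
// so no interaction state is missed in specs or QA.
function stateMatrix(): Array<[ButtonVariant, ButtonState]> {
  const variants: ButtonVariant[] = ["primary", "secondary", "ghost"];
  const states: ButtonState[] = ["default", "hover", "active", "focus", "disabled"];
  return variants.flatMap(v => states.map(s => [v, s] as [ButtonVariant, ButtonState]));
}
```

Three variants by five states yields fifteen rows to document and test, which is exactly the coverage the handoff checklist asks for.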
Step 3: Visual Reference
Hosted Prototype
Live HTML that devs can inspect in the browser. Not production code.
Figma Link
Source of truth for pixel specs, redlines, and measurements.
Interaction Notes
Animations, transitions, timing, and micro-interactions.
Label as Reference Only
Clearly mark HTML prototypes as non-production code.
Step 4: Context & Docs
README
How to consume the tokens and run the prototype locally.
Font & Asset Notes
Licenses, fallback stacks, CDN links.
Accessibility Requirements
Focus order, ARIA roles, contrast ratios.
Acceptance Criteria
What "done" looks like for QA.
Best Practices
  • Share tokens as JSON — framework-agnostic and durable
  • Document the component API: props, variants, states
  • Host prototypes as inspectable visual reference
  • Note font licenses and asset dependencies
Avoid
  • Expecting devs to copy-paste HTML into React or Swift
  • Sharing Tailwind classes as "the spec" — share values
  • Skipping "reference only" labels on prototypes
  • Omitting interaction states: hover, focus, disabled
Before vs. After

The Time Impact

Documentation time is the clearest measure of this experiment's value — the unglamorous but business-critical work of speccing components for engineering handoff.

Before → After · AI
Documentation: 2–5 hours per component → under 20 minutes ✓
Token references: written from memory by hand → sourced from Figma JSON ✓
Variant coverage: manual screenshots, easy to miss states → all states auto-documented ✓
Breakpoints: recreated one at a time in docs → all 3 rendered side-by-side ✓
Multi-brand: full repeat per brand → one doc, all brands compared ✓
Hardcoded values: scattered throughout codebase → 0, verified by token audit ✓
On component update: manual re-documentation → fully regeneratable ✓
Applying the Learnings

How This Applies to Real Work

Two areas where AI creates the most leverage in design systems work.

01 AI Component Creation

Instead of designers manually building every component from scratch, AI acts as a rapid scaffolding tool — generating layout, token wiring, and responsive variants in seconds.

Before: 1–2 days per component, manual spec handoff → After: minutes to scaffold, designer refines details
Example output:

.card-recommendation {
  background: var(--color-surface-card);
  padding: var(--space-4);
  border-radius: var(--radius-md);
}
02 AI Documentation

Documentation is usually the most time-consuming part of design systems work. AI generates comprehensive docs by reading the component code directly, so they stay accurate and in sync with the source.

Before: 2–5 hours per component, manual updates → After: auto-generated, fully regeneratable
AI-GENERATED DOCS INCLUDE
Prop tables with types & defaults
State matrix (hover, focus, disabled)
Usage examples with live code
Always in sync with source
The limitation I discovered

AI can generate visual documentation quickly, but the current toolchain still lacks full system integration. Through testing and research, I found that documentation is typically generated as SVG visuals rather than linked components. The visuals are accurate but not connected to the design system. As design tools evolve, documentation will likely become automatically synced with components and tokens.

Figma · Component linking
Storybook · Integration layer
Tokens · Pipeline setup
Workflow · AI draft → Designer links → Engineering connects