rebrand: Design System Extractor → Design System Automation
- Rename project across all 30+ source files, docs, and configs
- Update DTCG namespace: com.design-system-automation
- Update Gradio app title, heading, and footer
- Update token_schema generator field
- Remove internal docs from repo (CLAUDE.md, PROJECT_CONTEXT.md,
ARCHITECTURE.md, PLAN_W3C_DTCG_UPDATE.md, PART2_COMPONENT_GENERATION.md,
docs/CONTEXT.md, docs/FIGMA_SPECIMEN_IDEAS.md, content/*)
- Remove data files (sample JSON outputs, benchmark cache)
- Add .gitignore rules for internal docs and data files
- Add optimized Medium article v2 (9 min read)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- .gitignore +14 -0
- ARCHITECTURE.md +0 -466
- CLAUDE.md +0 -1468
- PART2_COMPONENT_GENERATION.md +0 -418
- PLAN_W3C_DTCG_UPDATE.md +0 -318
- PROJECT_CONTEXT.md +0 -170
- README.md +4 -4
- agents/__init__.py +1 -1
- agents/advisor.py +1 -1
- agents/crawler.py +1 -1
- agents/extractor.py +1 -1
- agents/firecrawl_extractor.py +1 -1
- agents/graph.py +1 -1
- agents/normalizer.py +1 -1
- agents/semantic_analyzer.py +1 -1
- agents/state.py +1 -1
- app.py +5 -5
- config/agents.yaml +1 -1
- config/settings.py +1 -1
- content/LINKEDIN_POST.md +0 -40
- content/MEDIUM_ARTICLE.md +0 -406
- core/__init__.py +1 -1
- core/color_classifier.py +1 -1
- core/color_utils.py +1 -1
- core/hf_inference.py +1 -1
- core/logging.py +1 -1
- core/token_schema.py +2 -2
- docs/CONTEXT.md +0 -190
- docs/FIGMA_SPECIMEN_IDEAS.md +0 -508
- docs/IMAGE_GUIDE_EPISODE_6.md +1 -1
- docs/LINKEDIN_POST_EPISODE_6.md +1 -1
- docs/MEDIUM_ARTICLE_EPISODE_6.md +1 -1
- docs/MEDIUM_ARTICLE_EPISODE_6_V2.md +264 -0
- output_json/file (16).json +0 -584
- output_json/file (18).json +0 -584
- requirements.txt +1 -1
- storage/benchmark_cache.json +0 -20
.gitignore
CHANGED
@@ -16,3 +16,17 @@ storage/cache/
 storage/exports/
 __MACOSX/
 .claude/
+
+# Internal project docs (not for public repos)
+CLAUDE.md
+PROJECT_CONTEXT.md
+ARCHITECTURE.md
+PLAN_W3C_DTCG_UPDATE.md
+PART2_COMPONENT_GENERATION.md
+docs/CONTEXT.md
+docs/FIGMA_SPECIMEN_IDEAS.md
+content/
+
+# Data files (sample outputs, caches)
+storage/benchmark_cache.json
+output_json/*.json
ARCHITECTURE.md
DELETED
@@ -1,466 +0,0 @@
# Design System Extractor v2 — Complete Architecture

## Overview

A **2-stage pipeline** that extracts, analyzes, and recommends improvements to any website's design system. Combines **deterministic rule-based analysis** (free, fast, reliable) with **4 specialized LLM agents** (context-aware reasoning) — each agent does one thing well.

```
┌─────────────────────────────────────────────────────────────────┐
│                       STAGE 1: EXTRACTION                       │
│                        (No LLM — $0.00)                         │
│                                                                 │
│  URL → Crawler → Extractor → Normalizer → Semantic Analyzer     │
│                            ↓                                    │
│                    [HUMAN REVIEW CHECKPOINT]                    │
│          Accept/reject tokens, Desktop ↔ Mobile toggle          │
├─────────────────────────────────────────────────────────────────┤
│                        STAGE 2: ANALYSIS                        │
│                                                                 │
│  Layer 1: Rule Engine ──────────────── FREE ($0.00)             │
│    ├─ WCAG Contrast (AA/AAA)                                    │
│    ├─ Type Scale Detection                                      │
│    ├─ Spacing Grid Alignment                                    │
│    └─ Color Statistics                                          │
│                                                                 │
│  Layer 2: Benchmark Research ──────── Semi-Free                 │
│    └─ Compare to Material 3, Polaris, Atlassian, etc.           │
│                                                                 │
│  Layer 3: LLM Agents ─────────────── ~$0.003/run                │
│    ├─ AURORA → Brand color identification                       │
│    ├─ ATLAS → Benchmark recommendation                          │
│    └─ SENTINEL → Best practices validation                      │
│                                                                 │
│  Layer 4: HEAD Synthesizer ────────── Final output              │
│    └─ NEXUS → Combines everything → User-facing results         │
│                                                                 │
│        [GRACEFUL DEGRADATION: Each layer has fallbacks]         │
└─────────────────────────────────────────────────────────────────┘
```

---

## Stage 1: Extraction & Normalization (No LLM)

### 1A. PageDiscoverer (Crawler)

| | |
|---|---|
| **File** | `agents/crawler.py` |
| **Model** | None |
| **Input** | Base URL |
| **Output** | List of discovered pages (title, URL, page type) |
| **How** | Playwright browser crawling + heuristic page type detection |
| **Why no LLM** | Pure URL discovery — deterministic crawling |

### 1B. TokenExtractor

| | |
|---|---|
| **File** | `agents/extractor.py` + `agents/firecrawl_extractor.py` |
| **Model** | None |
| **Input** | Confirmed page URLs + Viewport (1440px desktop / 375px mobile) |
| **Output** | `ExtractedTokens` — colors, typography, spacing, radius, shadows, FG/BG pairs, CSS variables |
| **How** | 7-source extraction via Playwright |
| **Why no LLM** | DOM parsing + regex — no reasoning needed |

**7 Extraction Sources:**
1. DOM computed styles (`getComputedStyle`)
2. CSS variables (`:root { --color: }`)
3. SVG colors (fill, stroke)
4. Inline styles (`style='color:'`)
5. Stylesheet rules (CSS files)
6. External CSS files (fetched via Firecrawl)
7. Page content scan (brute-force token search)

### 1C. TokenNormalizer

| | |
|---|---|
| **File** | `agents/normalizer.py` |
| **Model** | None |
| **Input** | Raw `ExtractedTokens` |
| **Output** | `NormalizedTokens` — deduplicated, named, confidence-tagged |
| **How** | Deduplication (exact hex + Delta-E merge), role inference from frequency, semantic naming |
| **Why no LLM** | Algorithmic deduplication — pure math |
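The dedup idea can be sketched in a few lines. This is an illustration, not the shipped normalizer: plain RGB Euclidean distance stands in for the Delta-E merge, and the threshold of 30 is an assumption.

```python
from collections import Counter


def _rgb(hex_color: str) -> tuple:
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))


def dedupe_colors(colors, threshold=30.0):
    """Collapse exact duplicates, then fold near-duplicates into the
    most frequent surviving color within `threshold` RGB distance."""
    freq = Counter(c.lower() for c in colors)
    kept = []
    for color, _ in freq.most_common():  # most frequent color wins a merge
        far_from_all_kept = all(
            sum((a - b) ** 2 for a, b in zip(_rgb(color), _rgb(k))) ** 0.5 > threshold
            for k in kept
        )
        if far_from_all_kept:
            kept.append(color)
    return kept
```

For example, `dedupe_colors(["#FFFFFF", "#ffffff", "#fefefe", "#000000"])` keeps `#ffffff` and `#000000`, absorbing the near-white.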

### 1D. SemanticColorAnalyzer

| | |
|---|---|
| **File** | `agents/semantic_analyzer.py` |
| **Model** | None |
| **Input** | Extracted colors with usage/frequency data |
| **Output** | Semantic mapping: `{brand, text, background, border, feedback}` |
| **How** | Rule-based: buttons → brand, `color` property → text, `background-color` → background, red → error, green → success |
| **Why no LLM** | CSS property analysis — pattern matching on property names |

### Human Review Checkpoint

After Stage 1, the user sees:
- Desktop vs Mobile token comparison (side-by-side)
- Accept/reject individual colors, typography, spacing tokens
- Viewport toggle to switch views
- All accepted tokens flow into Stage 2

---

## Stage 2: Analysis (Hybrid — Rule Engine + LLM)

### Layer 1: Rule Engine (FREE — No LLM)

**File:** `core/rule_engine.py`
**Cost:** $0.00
**Speed:** < 1 second

The rule engine handles everything that can be computed with math. No LLM reasoning needed.

#### What It Calculates:

**1. Typography Analysis (TypeScaleAnalysis)**
```
Input: [11, 12, 14, 16, 18, 22, 24, 32] (extracted font sizes)
Output:
├─ Detected Ratio: 1.167
├─ Closest Standard: Minor Third (1.2)
├─ Consistent: No (variance: 0.24)
└─ Recommendation: 1.25 (Major Third)
```
- Compares to standard ratios: 1.067, 1.125, 1.2, 1.25, 1.333, 1.414, 1.5
- Calculates variance to determine consistency
- 100% deterministic math
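The detection above can be reconstructed in a few lines. This is a simplified sketch, not the code in `core/rule_engine.py` — in particular, the spread of consecutive size ratios stands in for whatever variance measure the engine actually uses.

```python
STANDARD_RATIOS = {
    1.067: "Minor Second", 1.125: "Major Second", 1.2: "Minor Third",
    1.25: "Major Third", 1.333: "Perfect Fourth",
    1.414: "Augmented Fourth", 1.5: "Perfect Fifth",
}


def detect_type_scale(sizes):
    sizes = sorted(set(sizes))
    ratios = [b / a for a, b in zip(sizes, sizes[1:])]  # consecutive step ratios
    avg = sum(ratios) / len(ratios)
    spread = max(ratios) - min(ratios)  # crude consistency measure
    closest = min(STANDARD_RATIOS, key=lambda r: abs(r - avg))
    return {
        "detected_ratio": round(avg, 3),
        "closest_standard": f"{STANDARD_RATIOS[closest]} ({closest})",
        "consistent": spread < 0.1,
    }
```

Running it on the example input `[11, 12, 14, 16, 18, 22, 24, 32]` reproduces the sample output: average ratio 1.167, closest standard Minor Third (1.2), not consistent.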

**2. Color Accessibility (WCAG AA/AAA)**
```
Input: 210 colors + 220 FG/BG pairs
Output:
├─ AA Pass: 143
├─ AA Fail (real pairs): 67
└─ Fix suggestions: #06b2c4 → #048391 (4.5:1)
```
- WCAG 2.1 contrast ratio formula
- Tests actual FG/BG pairs found on page (not just color vs white)
- Algorithmically generates AA-compliant alternatives
- Pure math — no LLM
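The WCAG 2.1 math itself fits in a dozen lines — a sketch of the standard formula, written here independently of whatever `core/color_utils.py` actually contains:

```python
def _linearize(channel: int) -> float:
    # sRGB channel → linear-light value, per WCAG 2.1 relative luminance
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(hex_color: str) -> float:
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)


def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white gives the maximum 21:1; normal text needs at least 4.5:1 for AA, which is exactly why #06b2c4 on white fails and gets the darker #048391 suggested.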

**3. Spacing Grid Detection**
```
Input: [3, 8, 10, 16, 20, 24, 32, 40] (spacing values)
Output:
├─ Detected Base: 1px (GCD)
├─ Grid Aligned: 0%
└─ Recommendation: 8px grid
```
- GCD math + alignment percentage calculation
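The base detection really is just a GCD. A minimal sketch follows; note that the alignment metric here (share of values that are multiples of the grid) is an assumption — given the 0% in the sample output, the shipped metric is evidently stricter.

```python
from functools import reduce
from math import gcd


def analyze_spacing(values, grid=8):
    base = reduce(gcd, values)  # largest step dividing every spacing value
    aligned = sum(v % grid == 0 for v in values) / len(values)
    return {"detected_base_px": base, "pct_on_grid": round(100 * aligned)}
```

With the example input, a single 3px value drags the GCD down to 1px, which is what triggers the "8px grid" recommendation.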

**4. Color Statistics**
```
Input: 143 extracted colors
Output:
├─ Unique: 143
├─ Near-Duplicates: 351
├─ Grays: 68 | Saturated: 69
└─ Hue Distribution: {gray: 68, blue: 14, red: 11, ...}
```

**5. Overall Consistency Score (0–100)**
```
Weights:
├─ AA Compliance: 25 pts
├─ Type Scale Consistent: 15 pts
├─ Base Size (≥16px): 15 pts
├─ Spacing Grid Aligned: 15 pts
├─ Color Count (< 20): 10 pts
└─ No Near-Duplicates: 10 pts
```
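As listed, the weights total 90 points, so the remaining 10 presumably come from partial credit or a factor not shown. A pass/fail sketch with hypothetical input field names:

```python
def consistency_score(results: dict) -> int:
    """Sum the weights whose checks pass (field names are illustrative)."""
    checks = [
        (25, results["aa_failures"] == 0),
        (15, results["type_scale_consistent"]),
        (15, results["base_size_px"] >= 16),
        (15, results["spacing_grid_aligned"]),
        (10, results["unique_colors"] < 20),
        (10, results["near_duplicate_pairs"] == 0),
    ]
    return sum(points for points, passed in checks if passed)
```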

---

### Layer 2: Benchmark Research

**File:** `agents/benchmark_researcher.py`
**Cost:** Near-free (optional HF LLM for doc extraction, mostly cached)

**Available Benchmarks:**

| System | Short Name |
|--------|-----------|
| Material Design 3 | Material 3 |
| Apple HIG | Apple |
| Shopify Polaris | Polaris |
| Atlassian Design | Atlassian |
| IBM Carbon | Carbon |
| Tailwind CSS | Tailwind |
| Ant Design | Ant |
| Chakra UI | Chakra |

**Process:**
1. Check 24-hour cache per benchmark
2. If expired: Fetch docs via Firecrawl → Extract specs → Cache
3. Compare user's tokens to each benchmark:
   - Type ratio diff, base size diff, spacing grid diff
   - Weighted similarity score
4. Sort by similarity (closest match first)
**Fallback:** Hardcoded `FALLBACK_BENCHMARKS` dict — no external fetch needed
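A weighted similarity along those lines might look like this. The 40/30/30 weights and the field names are assumptions for illustration, not the project's actual formula:

```python
def benchmark_similarity(user: dict, bench: dict) -> int:
    """Score 0-100; higher means the user's tokens sit closer to the benchmark."""
    ratio_diff = abs(user["type_ratio"] - bench["type_ratio"]) / bench["type_ratio"]
    base_diff = abs(user["base_size"] - bench["base_size"]) / bench["base_size"]
    grid_match = 1.0 if user["spacing_grid"] == bench["spacing_grid"] else 0.0
    score = (0.4 * (1 - min(ratio_diff, 1.0))     # type scale closeness
             + 0.3 * (1 - min(base_diff, 1.0))    # base font size closeness
             + 0.3 * grid_match)                  # spacing grid agreement
    return round(score * 100)
```

An identical system scores 100; each mismatched dimension then pulls the score down in proportion to its weight.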

---

### Layer 3: LLM Agents (4 Specialized Agents)

**File:** `agents/llm_agents.py`

Each agent has a single responsibility. They run after the rule engine — they reason about patterns the rule engine can't detect.

---

#### Agent 1: AURORA — Brand Color Identifier

| | |
|---|---|
| **Persona** | Senior Brand Color Analyst |
| **Model** | Qwen 72B |
| **Temperature** | 0.4 (allows creative interpretation) |
| **Input** | Color tokens with usage counts + semantic CSS analysis |
| **Output** | `BrandIdentification` |

**Why LLM:** Requires context understanding — "33 button instances using #06b2c4 = likely brand primary." A rule engine can count colors, but can't reason about which one is the *brand* color based on where and how it's used.

**Sample Output:**
```
AURORA's Analysis:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Brand Primary: #06b2c4 (confidence: HIGH)
└─ 33 buttons, 12 CTAs, dominant accent

Brand Secondary: #373737 (confidence: HIGH)
└─ 89 text elements, consistent dark tone

Palette Strategy: Complementary
Cohesion Score: 7/10
└─ "Clear primary-secondary hierarchy,
    accent colors well-differentiated"

Self-Evaluation:
├─ Confidence: 8/10
├─ Data Quality: good
└─ Flags: []
```

---

#### Agent 2: ATLAS — Benchmark Advisor

| | |
|---|---|
| **Persona** | Senior Design System Benchmark Analyst |
| **Model** | Llama 3.3 70B (128K context) |
| **Temperature** | 0.25 (analytical, data-driven) |
| **Input** | User's type ratio, base size, spacing + benchmark comparison data |
| **Output** | `BenchmarkAdvice` |

**Why LLM:** Requires trade-off reasoning. The closest mathematical match (85%) might not be the best fit if alignment effort is high. ATLAS reasons about effort vs. value — "Polaris is 87% match and your spacing already aligns. Material 3 is 77% but would require restructuring your grid."

**Sample Output:**
```
ATLAS's Recommendation:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Recommended: Shopify Polaris (87% match)

Alignment Changes:
├─ Type scale: 1.17 → 1.25 (effort: medium)
├─ Spacing grid: mixed → 4px (effort: high)
└─ Base size: 16px → 16px (already aligned!)

Pros:
├─ Closest match to existing system
├─ E-commerce proven at scale
└─ Well-documented, community supported

Cons:
├─ Spacing migration is significant effort
└─ Type scale shift affects all components

Alternative: Material 3 (77% match)
└─ "Stronger mobile patterns, 8px grid"
```

---

#### Agent 3: SENTINEL — Best Practices Validator

| | |
|---|---|
| **Persona** | Design System Best Practices Auditor |
| **Model** | Qwen 72B |
| **Temperature** | 0.2 (strict, consistent evaluation) |
| **Input** | Rule Engine results (typography, accessibility, spacing, color stats) |
| **Output** | `BestPracticesResult` |

**Why LLM:** Requires impact assessment and prioritization. The rule engine says "67 colors fail AA." SENTINEL says "Brand primary failing AA affects 40% of interactive elements — fix this FIRST, it's 5 minutes of work with high impact."

**Sample Output:**
```
SENTINEL's Audit:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Overall Score: 68/100

Checks:
├─ ✅ Type Scale Standard (1.25 ratio)
├─ ⚠️ Type Scale Consistency (variance 0.18)
├─ ✅ Base Size Accessible (16px)
├─ ❌ AA Compliance (67 failures)
├─ ⚠️ Spacing Grid (0% aligned)
├─ ⚠️ Color Count (143 unique — too many)
└─ ❌ Near-Duplicates (351 pairs)

Priority Fixes:
#1 Fix brand color AA compliance
   Impact: HIGH | Effort: 5 min
   Action: #06b2c4 → #048391

#2 Consolidate near-duplicate colors
   Impact: MEDIUM | Effort: 2 hours
   Action: Merge 351 near-duplicate pairs

#3 Align spacing to 8px grid
   Impact: MEDIUM | Effort: 1 hour
   Action: Snap values to [8, 16, 24, 32, 40]
```

---

#### Agent 4: NEXUS — HEAD Synthesizer (Final Agent)

| | |
|---|---|
| **Persona** | Senior Design System Architect & Synthesizer |
| **Model** | Llama 3.3 70B (128K context) |
| **Temperature** | 0.3 (balanced synthesis) |
| **Input** | ALL Rule Engine results + AURORA + ATLAS + SENTINEL outputs |
| **Output** | `HeadSynthesis` — the final user-facing result |

**Why LLM:** Synthesis and contradiction resolution. If ATLAS says "close to Polaris" but SENTINEL says "spacing misaligned," NEXUS reconciles: "Align to Polaris type scale now (low effort) but defer spacing migration (high effort)."

**Sample Output:**
```
NEXUS Final Synthesis:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Executive Summary:
"Your design system scores 68/100. Critical issue:
 67 color pairs fail AA compliance. Top action:
 fix brand primary contrast (5 min, high impact)."

Scores:
├─ Overall: 68/100
├─ Accessibility: 45/100
├─ Consistency: 75/100
└─ Organization: 70/100

Benchmark Fit:
├─ Closest: Shopify Polaris (87%)
└─ Recommendation: Adopt Polaris type scale

Top 3 Actions:
1. Fix brand color AA → #06b2c4 → #048391
   Impact: HIGH | Effort: 5 min
2. Align type scale to 1.25
   Impact: MEDIUM | Effort: 1 hour
3. Consolidate 143 → ~20 semantic colors
   Impact: MEDIUM | Effort: 2 hours

Color Recommendations:
├─ ✅ brand.primary: #06b2c4 → #048391 (AA fix — auto-accept)
├─ ✅ text.secondary: #999999 → #757575 (AA fix — auto-accept)
└─ ❌ brand.accent: #FF6B35 → #E65100 (aesthetic — user decides)

Self-Evaluation:
├─ Confidence: 7/10
├─ Data Quality: good
└─ Flags: ["high near-duplicate count may indicate extraction noise"]
```

---

## Cost Model

| Component | LLM? | Cost per Run |
|-----------|-------|-------------|
| Stage 1 (Crawl + Extract + Normalize) | No | $0.00 |
| Rule Engine | No | $0.00 |
| Benchmark Research | Optional | ~$0.0005 |
| AURORA (Qwen 72B) | Yes | ~$0.0005 |
| ATLAS (Llama 3.3 70B) | Yes | ~$0.0005 |
| SENTINEL (Qwen 72B) | Yes | ~$0.0005 |
| NEXUS (Llama 3.3 70B) | Yes | ~$0.001 |
| **Total** | | **~$0.003** |

All LLM inference via HuggingFace Inference API (PRO subscription at $9/month includes a generous free tier for these models).

---

## Graceful Degradation

The system is designed to **always produce output**, even when components fail:

| If This Fails... | Fallback |
|-------------------|----------|
| Firecrawl (CSS fetch) | Use DOM-only extraction |
| Benchmark fetch | Use hardcoded `FALLBACK_BENCHMARKS` |
| AURORA (brand ID) | Skip brand analysis, use defaults |
| ATLAS (benchmark advice) | Skip recommendation, show raw comparisons |
| SENTINEL (practices) | Use rule engine score directly |
| NEXUS (synthesis) | `create_fallback_synthesis()` from rule engine data |
| Entire LLM layer | Full rule-engine-only analysis still works |

---

## Key Data Structures

```
ExtractedTokens (Stage 1 raw)
├─ colors: dict[ColorToken]
├─ typography: dict[TypographyToken]
├─ spacing: dict[SpacingToken]
├─ radius: dict[RadiusToken]
├─ shadows: dict[ShadowToken]
├─ fg_bg_pairs: list[dict]        ← for real AA checking
└─ css_variables: dict[str, str]  ← CSS var mappings

NormalizedTokens (Stage 1 clean)
├─ colors, typography, spacing, radius, shadows (deduplicated)
├─ font_families: dict[FontFamily]
├─ detected_spacing_base: int (4 or 8)
└─ detected_naming_convention: str

RuleEngineResults (Layer 1)
├─ typography: TypeScaleAnalysis
├─ accessibility: list[ColorAccessibility]
├─ spacing: SpacingGridAnalysis
├─ color_stats: ColorStatistics
├─ aa_failures: int
└─ consistency_score: int (0-100)

HeadSynthesis (Final output)
├─ executive_summary: str
├─ scores: {overall, accessibility, consistency, organization}
├─ benchmark_fit: {closest, similarity, recommendation}
├─ brand_analysis: {primary, secondary, cohesion}
├─ top_3_actions: [{action, impact, effort, details}]
├─ color_recommendations: [{role, current, suggested, reason, accept}]
├─ type_scale_recommendation: dict
├─ spacing_recommendation: dict
└─ self_evaluation: {confidence, reasoning, data_quality, flags}
```

---

## Tech Stack

| Component | Technology |
|-----------|-----------|
| Frontend | Gradio 4.x |
| Browser Automation | Playwright (Chromium) |
| Web Scraping | Firecrawl |
| LLM Inference | HuggingFace Inference API |
| Models | Qwen 72B, Llama 3.3 70B |
| Color Math | Custom WCAG implementation |
| Deployment | Docker → HuggingFace Spaces |
CLAUDE.md
DELETED
@@ -1,1468 +0,0 @@
# Design System Extractor v3.2 — Project Context

## Overview

A multi-agent system that extracts, analyzes, and recommends improvements for design systems from websites. The system operates in two stages, followed by an export step:

1. **Stage 1 (Deterministic)**: Extract CSS values → Normalize (colors, radius, shadows, typography, spacing) → Rule Engine analysis → **Rule-Based Color Classification** (free, no LLM)
2. **Stage 2 (LLM-powered)**: Brand identification (AURORA) → Benchmark comparison (ATLAS) → Best practices (SENTINEL) → Synthesis (NEXUS)
3. **Export**: W3C DTCG v1 compliant JSON → Figma Plugin (visual spec + styles/variables)

---

## CURRENT STATUS: v3.2 (Feb 2026)

### What's Working

| Component | Status | Notes |
|-----------|--------|-------|
| CSS Extraction (Playwright) | ✅ Working | Desktop + mobile viewports |
| Color normalization | ✅ Working | Single numeric shade system (50-900) |
| Color classification | ✅ Working | `core/color_classifier.py` (815 lines, 100% deterministic) |
| Radius normalization | ✅ Working | Parse, deduplicate, sort, name (none/sm/md/lg/xl/2xl/full) |
| Shadow normalization | ✅ Working | Parse, sort by blur, deduplicate, name (xs/sm/md/lg/xl) |
| Typography normalization | ✅ Working | Desktop/mobile split, weight suffix |
| Spacing normalization | ✅ Working | GCD-based grid detection, base-8 alignment |
| Rule engine | ✅ Working | Type scale, WCAG AA, spacing grid, color statistics |
| LLM agents (ReAct) | ✅ Working | AURORA, ATLAS, SENTINEL, NEXUS with critic/retry |
| W3C DTCG export | ✅ Working | $value, $type, $description, $extensions |
| Figma plugin - visual spec | ✅ Working | Separate frames, AA badges, horizontal layout |
| Figma plugin - styles/variables | ✅ Working | Paint, text, effect styles + variable collections |
| Shadow interpolation | ✅ Working | Always produces 5 levels (xs→xl), interpolates if fewer extracted |

### Architecture Decisions (v3.2)

#### Naming Authority Chain (RESOLVED)

The three-naming-system conflict from v2/v3.0 is resolved:

```
1. Color Classifier (PRIMARY) — deterministic, covers ALL colors
   └── Rule-based: CSS evidence → category → token name
   └── 100% reproducible, logged with evidence

2. AURORA LLM (SECONDARY) — semantic role enhancer ONLY
   └── Can promote "color.blue.500" → "color.brand.primary"
   └── CANNOT rename palette colors
   └── Only brand/text/bg/border/feedback roles accepted
   └── filter_aurora_naming_map() enforces this boundary

3. Normalizer (FALLBACK) — preliminary hue+shade names
   └── Only used if classifier hasn't run yet
   └── _generate_preliminary_name() → "color.blue.500"
```

**app.py `_get_semantic_color_overrides()`** implements this chain:
- PRIMARY: `state.color_classification.colors` (from color_classifier)
- SECONDARY: `state.brand_result.naming_map` (from AURORA, filtered to semantic roles only)

**`_generate_color_name_from_hex()`** is DEPRECATED — kept as a thin wrapper for edge cases.

#### W3C DTCG v1 Compliance (2025.10 Spec)
- `$type` values: `color`, `dimension`, `typography`, `shadow`
- `$value` for all token values
- `$description` for human-readable descriptions
- `$extensions` with namespaced metadata: `com.design-system-extractor`
  - Colors: `{frequency, confidence, category, evidence}`
  - Radius: `{frequency, fitsBase4, fitsBase8}`
  - Shadows: `{frequency, rawCSS, blurPx}`
- Nested structure (not flat)
- `_flat_key_to_nested()` prevents nesting inside DTCG leaf nodes
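Put together, a single exported color token plausibly looks like the following. The structure follows the bullets above; the concrete values are illustrative (they echo the #06b2c4 brand-primary example used elsewhere in this repo's docs), not real export output.

```python
# Hypothetical DTCG export fragment: one nested brand color with the
# com.design-system-extractor extension block described above.
token_tree = {
    "color": {
        "brand": {
            "primary": {
                "$type": "color",
                "$value": "#06b2c4",
                "$description": "Primary brand color (buttons, CTAs)",
                "$extensions": {
                    "com.design-system-extractor": {
                        "frequency": 33,
                        "confidence": "high",
                        "category": "brand",
                        "evidence": "background-color on <button>",
                    }
                },
            }
        }
    }
}
```

The `$`-prefixed keys are the DTCG reserved members; everything tool-specific lives under the reverse-domain namespace inside `$extensions`, which is what keeps the file portable across token tools.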

#### Deprecated Components
- `agents/semantic_analyzer.py` — superseded by color_classifier + normalizer._infer_role_hint()
- `agents/stage2_graph.py` — old LangGraph parallel system, replaced by direct async in app.py
- `app.py _generate_color_name_from_hex()` — third naming system, now a thin wrapper

---

## v3.1 FIX: RULE-BASED COLOR NAMING (Feb 2026)

### What Changed
- **KILLED LLM color naming entirely.** New `core/color_classifier.py` handles all color naming with 100% deterministic rules.
- **Aggressive deduplication**: Colors within RGB distance < 30 AND same category get merged (e.g., 13 text grays → 3)
- **Capped categories**: brand (max 3), text (max 3), bg (max 3), border (max 3), feedback (max 4), palette (remaining)
- **User-selectable naming convention**: semantic, tailwind, or material — chosen BEFORE export
- **Preview before export**: User sees classification + decision log before committing
- **Every decision logged**: `[DEDUP]`, `[CLASSIFY]`, `[CAP]`, `[NAME]` with evidence

### How Classification Works (No LLM)
```
CSS Evidence → Category:
  background-color on <button> + saturated + freq>5  → BRAND
  color on <p>/<span> + low saturation               → TEXT
  background-color on <div>/<body> + neutral         → BG
  border-color + low saturation                      → BORDER
  red hue + sat>0.6 + low freq                       → FEEDBACK (error)
  everything else                                    → PALETTE (named by hue.shade)
```
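Those rules translate almost mechanically into code. A sketch with hypothetical input fields and thresholds (e.g. 0.2 for "low saturation", 0.3 for "saturated") — the real classifier's cutoffs may differ:

```python
def classify_color(c: dict) -> str:
    """Map CSS usage evidence to a category, mirroring the rule table above."""
    prop, tag = c["property"], c["tag"]
    sat, hue, freq = c["saturation"], c["hue"], c["frequency"]
    if prop == "background-color" and tag == "button" and sat > 0.3 and freq > 5:
        return "BRAND"
    if prop == "color" and tag in ("p", "span") and sat < 0.2:
        return "TEXT"
    if prop == "background-color" and tag in ("div", "body") and sat < 0.2:
        return "BG"
    if prop == "border-color" and sat < 0.2:
        return "BORDER"
    if sat > 0.6 and (hue < 15 or hue > 345) and freq <= 5:
        return "FEEDBACK"  # rare, saturated red hue → likely an error color
    return "PALETTE"
```

Because every branch tests concrete CSS evidence, the same input always yields the same category — which is exactly the reproducibility the LLM-naming approach could not guarantee.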
|
| 98 |
-
|
| 99 |
-
### What AURORA Does Now
|
| 100 |
-
- Provides brand insights, palette strategy, cohesion score
|
| 101 |
-
- naming_map is filtered to semantic roles only (brand/text/bg/border/feedback)
|
| 102 |
-
- LLM reasoning is shown in logs
|
| 103 |
-
- `filter_aurora_naming_map()` in llm_agents.py enforces the boundary
|
| 104 |
-
|
| 105 |
-
### Files Changed in v3.1
|
| 106 |
-
- `core/color_classifier.py` — NEW: Rule-based classifier with dedup, caps, naming conventions
|
| 107 |
-
- `app.py` — Export functions use classifier instead of LLM naming; convention picker in UI
|
| 108 |
-
- `agents/llm_agents.py` — AURORA prompt updated to advisory-only
|
| 109 |
-
- `CLAUDE.md` — This documentation
|
| 110 |
-
|
| 111 |
-
---

## v3.2 FIX: DTCG COMPLIANCE + NAMING AUTHORITY (Feb 2026)

### What Changed

1. **W3C DTCG v1 strict compliance**: `_to_dtcg_token()` now supports `$extensions` with namespaced metadata
2. **Single naming authority resolved**: Color classifier is PRIMARY, AURORA is SECONDARY (semantic roles only)
3. **`_get_semantic_color_overrides()` rewritten**: Uses classifier as primary, AURORA filtered to role-only names
4. **`filter_aurora_naming_map()` added**: In `llm_agents.py`, strips non-semantic names from AURORA output
5. **`_generate_color_name_from_hex()` deprecated**: Thin wrapper using `categorize_color()` from `color_utils`
6. **`semantic_analyzer.py` deprecated**: Marked with deprecation notice, functionality absorbed elsewhere
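
The `$extensions` shape follows the DTCG rule that extension data is grouped under a reverse-domain namespace. A rough sketch of what a `_to_dtcg_token()`-style helper emits (the metadata field names under the namespace are assumptions):

```python
# Illustrative DTCG v1 token with namespaced $extensions metadata.
# usageCount/source are assumed field names, not the exact schema.
def to_dtcg_token(value: str, usage_count: int, source: str) -> dict:
    return {
        "$type": "color",
        "$value": value,
        "$extensions": {
            "com.design-system-automation": {
                "usageCount": usage_count,
                "source": source,
            }
        },
    }
```

Strict DTCG tools ignore unknown `$extensions` namespaces, so the extra metadata survives round-trips without breaking compliance.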

### Files Changed in v3.2

- `app.py` — DTCG helpers enhanced, `_get_semantic_color_overrides()` rewritten, hex-name function deprecated
- `agents/llm_agents.py` — Added `filter_aurora_naming_map()` function
- `agents/semantic_analyzer.py` — Deprecated with notice
- `CLAUDE.md` — Updated to current status

---

## PREVIOUS STATUS (v3.0 and earlier): BROKEN — RETHINK COMPLETED

### What's Wrong (observed from real site tests)

**Tested sites**: sixflagsqiddiyacity.com, others

#### Problem 1: Color Naming is Inconsistent (CRITICAL)

Three competing naming systems produce mixed output:

| Source | Convention | Example |
|--------|-----------|---------|
| `normalizer.py` (lines 266-275) | Word-based: light/dark/base | `color.blue.light` |
| `app.py` `_generate_color_name_from_hex()` | Numeric: 50-900 | `color.blue.500` |
| AURORA LLM agent | Anything it wants | `brand.primary` |

**Result in Figma**: `blue.300`, `blue.dark`, `blue.light`, `blue.base` — ALL IN THE SAME EXPORT. Unusable.

#### Problem 2: Border Radius is Broken (CRITICAL)

- `md = 1616` (concatenated garbage)
- `full = 50` (should be 9999px)
- Nested structures: `radius.full.9999` and `radius.full.100` incorrectly inside `radius.full`
- Multi-value radii like `"0px 0px 16px 16px"` passed as-is — Figma can't use these
- **Root cause**: Normalizer doesn't process radius at all (lines 94-97 just store raw values)
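
Shorthand radii need decomposition before they can become usable tokens. A minimal sketch of the missing normalization step, following the CSS `border-radius` value-order rules (function name is hypothetical):

```python
# Decompose a border-radius shorthand into per-corner values, using the
# standard CSS expansion rules for 1, 2, 3, or 4 values.
def decompose_radius(value: str) -> dict[str, str]:
    parts = value.split()
    if len(parts) == 1:                       # all four corners
        parts = parts * 4
    elif len(parts) == 2:                     # tl+br, tr+bl
        parts = [parts[0], parts[1], parts[0], parts[1]]
    elif len(parts) == 3:                     # tl, tr+bl, br
        parts = [parts[0], parts[1], parts[2], parts[1]]
    corners = ("top-left", "top-right", "bottom-right", "bottom-left")
    return dict(zip(corners, parts))
```

With `"0px 0px 16px 16px"` decomposed per corner, the normalizer can then deduplicate and map corners to single-value tokens instead of emitting garbage like `md = 1616`.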

#### Problem 3: LLM Agents Are Single-Shot, No Reasoning (CRITICAL)

- AURORA does one LLM call → returns whatever it returns → no verification
- SENTINEL does one LLM call → scores and checks not validated against actual data
- NEXUS does one LLM call → synthesizes without checking if inputs make sense
- No ReAct/ToT/reflection loop. No self-correction. No critic.
- Models (Qwen 72B, Llama 3.3 70B via HF Inference) may not follow structured output reliably

#### Problem 4: AURORA Only Names ~10 Colors

- Prompt says "Suggest Semantic Names for top 10 most-used colors"
- Remaining 20+ colors keep their normalizer names (word-based)
- AURORA doesn't see existing names — it only receives hex + usage count
- No cleanup pass exists to unify naming after AURORA

#### Problem 5: Shadow Ordering Wrong

- xs has blur=25px, sm has blur=30px, md has blur=80px — non-progressive
- Shadow naming (xs/sm/md/lg/xl) doesn't match actual elevation hierarchy
- No validation that shadow progression makes physical sense
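
The missing validation is a one-liner once blur values are keyed by size name. A sketch (the input shape is assumed; the rule engine may store shadows differently):

```python
# Check that blur increases strictly with the size scale (xs → xl).
def shadow_progression_ok(blurs_by_size: dict[str, float]) -> bool:
    order = ("xs", "sm", "md", "lg", "xl")
    blurs = [blurs_by_size[s] for s in order if s in blurs_by_size]
    return all(a < b for a, b in zip(blurs, blurs[1:]))
```

The observed extraction (blur 25→30→80→80→90) fails this check because two adjacent sizes share the same blur.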

#### Problem 6: Font Family Detection

- All fonts showing as "sans-serif" (the fallback) instead of the actual font name
- Extraction gets the computed style, which resolves to the generic family

---

## ARCHITECTURE RETHINK PLAN

### Phase 1: Fix Stage 2 (LLM Agents) — ADD AGENTIC REASONING

Current Stage 2 is just 4 single-shot LLM calls. It needs a proper agentic framework.

#### Current (Broken):

```
Color Data ──→ [Single LLM Call] ──→ Output (hope for the best)
```

#### Target (With Reasoning):

```
Color Data ──→ [THINK] ──→ [ACT] ──→ [OBSERVE] ──→ [REFLECT] ──→ [VERIFY] ──→ Output
                  │           │           │             │             │
                  │           │           │             │        Does it pass
                  │           │           │        Is this       validation?
                  │           │      Check against consistent?   If no, loop
                  │        Generate   real data
                  │        initial
                  │        analysis
             Plan approach
```

#### Option A: ReAct Framework (Recommended for AURORA + SENTINEL)

```
Thought: I need to identify brand colors from 30 extracted colors
Action: Analyze usage frequency — #005aa3 used 47x in buttons/CTAs
Observation: #005aa3 is clearly the primary CTA color
Thought: Now check if a secondary color exists — look for headers/nav
Action: #ff0000 used 23x in headers → likely brand secondary
Observation: Red + Blue = complementary strategy
Thought: Now I need to name ALL colors consistently using numeric shades
Action: Generate full naming map using Tailwind convention (50-900)
Observation: 28 colors named, all using numeric shades
Thought: Let me verify — any naming conflicts? Any mixed conventions?
Action: Self-check naming consistency
Final Answer: {complete consistent output}
```

#### Option B: Tree of Thought (For NEXUS synthesis)

```
Branch 1: Weight accessibility heavily → overall score 45
Branch 2: Weight consistency heavily → overall score 68
Branch 3: Balanced weighting → overall score 55
Evaluate: Which scoring best reflects reality?
Select: Branch 3 with adjustments
```

#### Option C: Critic/Verifier Pattern (For ALL agents)

```
Agent Output ──→ [CRITIC LLM] ──→ Pass? ──→ Final Output
                      │              │
                      │         No: feedback
                      │              │
                      │              ▼
                      │      [RETRY with feedback]
                      │
                   Checks:
                   - Naming convention consistent?
                   - Scores match actual data?
                   - All required fields present?
                   - Values in valid ranges?
```
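
The critic loop is agent-agnostic, so it can be one generic wrapper. A minimal sketch, assuming the critic is a plain Python validator rather than a second LLM (function names are illustrative):

```python
# Generic critic/verifier loop: run the agent, validate deterministically,
# retry once with the validation errors appended as feedback.
from typing import Callable

def run_with_critic(
    agent: Callable[[str], dict],
    validate: Callable[[dict], tuple[bool, list[str]]],
    prompt: str,
    max_retries: int = 1,
) -> dict:
    feedback = ""
    for _ in range(max_retries + 1):
        output = agent(prompt + feedback)
        ok, errors = validate(output)
        if ok:
            return output
        feedback = "\n\nFix these problems:\n" + "\n".join(errors)
    return output  # caller falls back to rule-engine defaults
```

The retry budget is capped so a misbehaving model cannot loop forever; after the last attempt the caller applies deterministic defaults instead.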

### Proposed New Stage 2 Architecture:

```
┌─────────────────────────────────────────────────────────────────┐
│                   STAGE 2: AGENTIC ANALYSIS                     │
│                                                                 │
│  ┌───────────────────────────────────────────────────┐          │
│  │ STEP 1: AURORA (ReAct, 2-3 reasoning steps)       │          │
│  │   Think → Identify brand → Name ALL colors        │          │
│  │   → Self-verify naming consistency                │          │
│  │   → Critic check → Retry if needed                │          │
│  └───────────────────────────────────────────────────┘          │
│                          │                                      │
│      ┌───────────────────┼───────────────────┐                  │
│      │                   │                   │                  │
│      ▼                   ▼                   ▼                  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   ATLAS     │  │  SENTINEL   │  │  VALIDATOR  │              │
│  │  Benchmark  │  │  Best Prac  │  │  (Critic)   │              │
│  │  (ReAct)    │  │  (ReAct)    │  │  Checks ALL │              │
│  │             │  │             │  │  outputs    │              │
│  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘              │
│         │                │                │                     │
│         └────────────────┼────────────────┘                     │
│                          ▼                                      │
│                   ┌─────────────┐                               │
│                   │    NEXUS    │                               │
│                   │    (ToT)    │                               │
│                   │   + Critic  │                               │
│                   └─────────────┘                               │
└─────────────────────────────────────────────────────────────────┘
```

### Model Selection Rethink

Current models via HuggingFace Inference API:

| Agent | Current Model | Problem |
|-------|--------------|---------|
| AURORA | Qwen 72B | Doesn't follow structured output reliably |
| ATLAS | Llama 3.3 70B | Adequate for comparison |
| SENTINEL | Qwen 72B | Doesn't validate against actual data |
| NEXUS | Llama 3.3 70B | Single-shot synthesis, no verification |

**Models to evaluate:**

- **Qwen 2.5 72B Instruct** — Better instruction following than Qwen 72B
- **Mixtral 8x22B** — Good at structured JSON output
- **DeepSeek V3** — Strong at reasoning chains
- **Llama 3.1 405B** — Largest open model, best reasoning (but slow/expensive)
- **Command R+** — Designed for tool use and structured output

**Key question**: Should we use ONE model for all agents (consistency) or specialized models per task?

### Phase 2: Fix Stage 1 (After Stage 2 is stable)

#### Normalizer Fixes Needed:

1. **Unify color shade convention** — Pick ONE system (numeric 50-900 recommended)
2. **Add radius normalization** — Currently just stores raw values
3. **Handle multi-value radius** — `"0px 0px 16px 16px"` needs decomposition
4. **Deduplicate radius values** — Multiple entries for the same visual radius
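
For fix 1, a numeric shade can be derived deterministically from lightness. A sketch of one possible 50-900 bucketing, assuming the recommended Tailwind-style convention; the bucket edges are illustrative, not the normalizer's actual values:

```python
# Assign a Tailwind-style numeric shade (50-900) from HLS lightness.
# Darker colors get higher shade numbers; bucket edges are illustrative.
import colorsys

def shade_from_hex(hex_value: str) -> int:
    r, g, b = (int(hex_value.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    lightness = colorsys.rgb_to_hls(r, g, b)[1]
    shades = (900, 800, 700, 600, 500, 400, 300, 200, 100, 50)
    return shades[min(int(lightness * 10), 9)]
```

Because the mapping is a pure function of the hex value, re-running the normalizer can never produce a different name for the same color, which is exactly the property the word-based scheme lacked.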

#### Rule Engine Fixes Needed:

1. **Base font size filter** — DONE (>= 10px filter applied)
2. **Shadow progression validation** — Check blur/offset increase with elevation
3. **Radius grid alignment** — Check if radii follow base-4/base-8

#### Export Fixes Needed:

1. **Validation layer before export** — Catch mixed conventions, nested garbage
2. **Radius structure flattening** — Never nest tokens inside tokens
3. **Unit consistency** — All radius values must have `px` units

---

## FILE STRUCTURE

```
design-system-extractor-v2-hf-fix/
├── app.py                          # Main Gradio app, orchestrates everything
├── CLAUDE.md                       # THIS FILE — project context and plan
│
├── agents/
│   ├── crawler.py                  # Page discovery (finds links on site)
│   ├── extractor.py                # Playwright-based CSS extraction
│   ├── firecrawl_extractor.py      # Firecrawl CSS deep extraction
│   ├── normalizer.py               # Token deduplication and naming
│   ├── llm_agents.py               # AURORA, ATLAS, SENTINEL, NEXUS agents
│   ├── stage2_graph.py             # LangGraph orchestration for Stage 2
│   ├── advisor.py                  # Upgrade advisor
│   ├── benchmark_researcher.py     # Benchmark data collection
│   └── semantic_analyzer.py        # Semantic CSS analysis
│
├── core/
│   ├── token_schema.py             # Pydantic models for all token types
│   ├── color_utils.py              # Color parsing, contrast, ramp generation
│   ├── rule_engine.py              # Deterministic analysis (type scale, WCAG, spacing)
│   ├── hf_inference.py             # HuggingFace Inference API client
│   ├── preview_generator.py        # HTML preview generation
│   ├── validation.py               # Output validation
│   └── logging.py                  # Logging utilities
│
├── config/
│   └── settings.py                 # Configuration (viewports, timeouts, thresholds)
│
├── tests/
│   ├── test_stage1_extraction.py   # 82 deterministic tests
│   ├── test_agent_evals.py         # 27 LLM agent schema/behavior tests
│   └── test_stage2_pipeline.py     # Pipeline integration tests
│
└── output_json/
    ├── file (16).json              # Latest extraction output (sixflags)
    └── figma-plugin-extracted/     # Figma plugin source
        └── figma-design-token-creator 5/
            └── src/code.js         # Figma plugin main code
```

---

## DATA FLOW (Current vs Target)

### Current Flow (Broken):

```
Extraction → Normalizer (word shades) → Rule Engine → LLM (single-shot)
    ↓               ↓                       ↓               ↓
 Raw CSS     color.blue.light          Stats only     Unverified output
 values      color.neutral.dark        No radius      Mixed naming
             No radius processing      validation     No self-correction
                                                            ↓
                                Export (merges 3 naming conventions → chaos)
```

### Target Flow:

```
Extraction → Normalizer (numeric shades, radius too) → Rule Engine
    ↓               ↓                                       ↓
 Raw CSS     color.blue.500                        Stats + validation
 values      color.neutral.200                     Shadow progression
             radius.md = 8px                       Radius grid check
                    ↓                                       ↓
        LLM Agents (ReAct framework)                        │
                    ↓                                       │
        AURORA: Think → Act → Observe → Verify              │
        SENTINEL: Think → Check data → Score                │
        NEXUS: ToT → Select best synthesis                  │
                    ↓                                       │
        CRITIC/VALIDATOR ←──────────────────────────────────┘
                    ↓  (validates against Stage 1 data)
        Pass? → Export
        Fail? → Retry with feedback
```

---

## WHAT EACH AGENT SHOULD ACTUALLY DO

### AURORA (Brand Identifier) — Needs ReAct

**Current**: Single-shot, names 10 colors, no verification

**Target**:
- Step 1 (Think): Plan approach based on color count and usage patterns
- Step 2 (Act): Identify brand primary/secondary/accent from usage evidence
- Step 3 (Observe): Check if identification makes sense (is primary really the most-used CTA color?)
- Step 4 (Act): Name ALL colors using a consistent numeric convention (50-900)
- Step 5 (Verify): Self-check — are all names consistent? Any mixed conventions?
- Step 6 (Critic): External validation — does output match schema? Are all names `color.{family}.{shade}`?

### SENTINEL (Best Practices) — Needs ReAct + Data Grounding

**Current**: Single-shot, scores without verifying against actual data

**Target**:
- Step 1 (Think): What checks apply given the data?
- Step 2 (Act): Score each check CITING SPECIFIC DATA from the rule engine
- Step 3 (Observe): Does my score match what the data shows?
- Step 4 (Verify): If the rule engine says 5 AA failures, my AA check MUST be "fail", not "pass"
- Step 5 (Critic): Cross-check scores against rule engine numbers

### NEXUS (Synthesizer) — Needs ToT

**Current**: Single-shot synthesis, no evaluation of alternatives

**Target**:
- Branch 1: Accessibility-focused scoring (weight AA failures heavily)
- Branch 2: Consistency-focused scoring (weight naming/grid alignment)
- Branch 3: Balanced approach
- Evaluate: Which branch best reflects reality?
- Critic: Does the final score contradict any agent's findings?

---

## KNOWN FIXES ALREADY APPLIED

### 1. Base Font Size Detection (FIXED in rule_engine.py)

Filters out sizes < 10px before detecting the base size.

### 2. Garbage Color Names (PARTIALLY FIXED in app.py)

Detects `firecrawl.N` names and regenerates them — but the replacement still creates mixed conventions.

### 3. Visual Spec Error Handling (FIXED in code.js)

Defensive error handling for undefined errors.

---

## IDEAL OUTPUT REFERENCE

What the exported JSON SHOULD look like (for Figma):

```json
{
  "color": {
    "brand": {
      "primary": { "$type": "color", "$value": "#005aa3" },
      "secondary": { "$type": "color", "$value": "#ff0000" }
    },
    "text": {
      "primary": { "$type": "color", "$value": "#000000" },
      "secondary": { "$type": "color", "$value": "#999999" },
      "muted": { "$type": "color", "$value": "#cccccc" }
    },
    "background": {
      "primary": { "$type": "color", "$value": "#ebedef" },
      "secondary": { "$type": "color", "$value": "#bfbfbf" }
    },
    "blue": {
      "50": { "$type": "color", "$value": "#b9daff" },
      "300": { "$type": "color", "$value": "#7fdbff" },
      "500": { "$type": "color", "$value": "#6f7597" },
      "800": { "$type": "color", "$value": "#2c3e50" }
    },
    "neutral": {
      "200": { "$type": "color", "$value": "#b2b8bf" },
      "700": { "$type": "color", "$value": "#333333" }
    }
  },
  "radius": {
    "none": { "$type": "dimension", "$value": "0px" },
    "sm": { "$type": "dimension", "$value": "2px" },
    "md": { "$type": "dimension", "$value": "4px" },
    "lg": { "$type": "dimension", "$value": "8px" },
    "xl": { "$type": "dimension", "$value": "16px" },
    "2xl": { "$type": "dimension", "$value": "24px" },
    "full": { "$type": "dimension", "$value": "9999px" }
  }
}
```

**Key rules**:
- Palette colors ALWAYS use numeric shades (50-900)
- Role colors use semantic names (primary, secondary, muted)
- Radius is FLAT — never nested, always single px values
- No mixed conventions in the same category
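
The first two rules are mechanically checkable before export. A sketch of such a check, assuming tokens are nested as `{family: {shade: token}}`; the real validation layer in `core/validation.py` may differ:

```python
# Pre-export check: palette families must use numeric shades only;
# role families (brand/text/...) legitimately use semantic names.
ROLE_FAMILIES = {"brand", "text", "background", "border", "feedback"}

def violates_key_rules(color_tokens: dict[str, dict]) -> list[str]:
    errors = []
    for family, shades in color_tokens.items():
        if family in ROLE_FAMILIES:
            continue  # semantic names (primary, muted, ...) are allowed here
        for shade in shades:
            if not shade.isdigit():
                errors.append(f"color.{family}.{shade} must use a numeric shade")
    return errors
```

An empty error list means the export satisfies the palette-naming rules; anything else blocks the export.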

---

## FILES TO UPDATE ON HUGGINGFACE

When making changes, these files need updating:

1. `app.py` — Main application logic
2. `core/rule_engine.py` — Deterministic analysis
3. `agents/llm_agents.py` — LLM agent prompts and reasoning
4. `agents/normalizer.py` — Token naming and dedup
5. `agents/extractor.py` — CSS extraction
6. `output_json/figma-plugin-extracted/figma-design-token-creator 5/src/code.js` — Figma plugin

---

## CRITICAL DISCOVERY: TWO COMPETING STAGE 2 ARCHITECTURES

The codebase has **two parallel Stage 2 systems** that partially overlap:

### System A: `llm_agents.py` (4 Specialized Agents)

```
AURORA (brand ID) → ATLAS (benchmark) → SENTINEL (best practices) → NEXUS (synthesis)
```

- Each agent has a focused prompt + dedicated data class
- Called from `app.py` directly via `hf_client.complete_async()`
- Uses `Qwen/Qwen2.5-72B-Instruct` and `Llama-3.3-70B-Instruct`
- **Problem**: Single-shot calls, no reasoning, no verification

### System B: `stage2_graph.py` (LangGraph Parallel)

```
LLM1 (Qwen) ───┐
LLM2 (Llama) ──┼──→ HEAD ──→ Final
Rule Engine ───┘
```

- Two generic "analyst" LLMs run in parallel + rule engine
- Uses LangGraph `StateGraph` with `asyncio.gather()`
- HEAD compiler merges results
- **Problem**: Generic prompts, no specialization, same analysis duplicated

### Decision: Merge into ONE system with ReAct reasoning

Keep System A's **specialized agents** (AURORA, SENTINEL, NEXUS) but add System B's **parallel execution** and **LangGraph state management**. Drop the duplicate generic analysts (LLM1/LLM2).
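
The merged orchestration reduces to a fan-out/fan-in. A minimal sketch using the `asyncio.gather()` pattern System B already relies on; the `run_stage2_analysis()` entry point is named in the plan below, but this signature and the coroutine parameters are illustrative:

```python
# Fan out the specialized agents in parallel, then synthesize with NEXUS.
# aurora/atlas/sentinel/nexus stand in for the real llm_agents.py coroutines.
import asyncio

async def run_stage2_analysis(data: dict, aurora, atlas, sentinel, nexus):
    aurora_out, atlas_out, sentinel_out = await asyncio.gather(
        aurora(data), atlas(data), sentinel(data)
    )
    return await nexus(aurora_out, atlas_out, sentinel_out)
```

The same shape maps directly onto a LangGraph `StateGraph` (three nodes fanning into a synthesis node) once state management is added.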

---

## DETAILED AGENTIC ARCHITECTURE FOR STAGE 2

### Design Principles

1. **ReAct (Reasoning + Acting)**: Each agent THINKS before it acts, OBSERVES the result, REFLECTS on quality
2. **Critic/Verifier**: A lightweight validation pass after each agent output
3. **Grounded Reasoning**: LLMs must cite specific data from Stage 1, not hallucinate
4. **Fail-Safe Defaults**: If an LLM fails or produces garbage, fall back to rule-engine defaults
5. **Single Convention**: ALL naming uses numeric shades (50-900), enforced post-LLM

### New Stage 2 Flow

```
Stage 1 Output (NormalizedTokens + RuleEngineResults)
        │
        ▼
┌──────────────────────────────────────────────────────────────┐
│  PRE-PROCESSING (Deterministic, no LLM)                      │
│  • Unify all color names to numeric shades (50-900)          │
│  • Normalize radius values (flatten, deduplicate)            │
│  • Validate shadow progression (sort by blur)                │
│  • Build structured data packets for each agent              │
└──────────────────────────────────────────────────────────────┘
        │
        ├────────────────┬────────────────┐
        ▼                ▼                ▼
 ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
 │   AURORA    │  │    ATLAS    │  │  SENTINEL   │
 │   (ReAct)   │  │  (Single)   │  │   (ReAct)   │
 │   2 steps   │  │   1 step    │  │   2 steps   │
 └──────┬──────┘  └──────┬──────┘  └──────┬──────┘
        │                │                │
        ▼                ▼                ▼
 ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
 │  CRITIC 1   │  │ (no critic  │  │  CRITIC 2   │
 │  Validate   │  │  needed)    │  │  Cross-ref  │
 │  naming     │  │             │  │  with data  │
 └──────┬──────┘  └──────┬──────┘  └──────┬──────┘
        │                │                │
        └────────────────┼────────────────┘
                         ▼
          ┌──────────────────────────────┐
          │ NEXUS                        │
          │ (ToT: 2 branches, pick best) │
          └──────────────┬───────────────┘
                         ▼
          ┌──────────────────────────────┐
          │ POST-VALIDATION              │
          │ (Deterministic)              │
          │ • Names consistent?          │
          │ • Scores in range?           │
          │ • All fields present?        │
          └──────────────────────────────┘
```

### AURORA — Brand Identifier (ReAct, 2 LLM Calls)

**Why ReAct**: Brand identification requires reasoning about CONTEXT (why a color is used 47x on buttons), not just statistics. The model needs to think step-by-step.

**Step 1: Identify + Name (Main Call)**

```
System: You are AURORA. You will receive color data with usage context.

TASK (do these in order, show your reasoning):

THINK: Look at the color usage data. Which colors appear most in
       interactive elements (buttons, links, CTAs)?
ACT: Identify brand primary, secondary, accent.
THINK: Now look at ALL colors. Group them by hue family.
ACT: Assign EVERY color a name using this EXACT convention:
  - Role colors: color.{role}.{shade} where role=brand/text/background/border/feedback
  - Palette colors: color.{hue}.{shade} where hue=red/orange/yellow/green/teal/blue/purple/pink/neutral
  - Shade MUST be numeric: 50/100/200/300/400/500/600/700/800/900
  - NEVER use words like "light", "dark", "base" for shades
OBSERVE: Check your naming. Are ALL names using numeric shades?
         Any duplicates? Any conflicts?

Output JSON with brand_colors + complete naming_map for ALL colors.
```

**Step 2: Critic Check (Lightweight Call or Rule-Based)**

```python
# Can be done WITHOUT an LLM call — just Python validation:
def validate_aurora_output(output: dict, input_colors: list[str]) -> tuple[bool, list[str]]:
    errors = []
    naming_map = output.get("naming_map", {})

    # Check 1: All input colors have names
    for hex_val in input_colors:
        if hex_val not in naming_map:
            errors.append(f"Missing name for {hex_val}")

    # Check 2: No word-based shades
    for hex_val, name in naming_map.items():
        parts = name.split(".")
        last = parts[-1]
        if last in ("light", "dark", "base", "muted", "deep"):
            errors.append(f"Word shade '{last}' in {name} — must be numeric")

    # Check 3: No duplicate names
    names = list(naming_map.values())
    dupes = [n for n in names if names.count(n) > 1]
    if dupes:
        errors.append(f"Duplicate names: {set(dupes)}")

    return len(errors) == 0, errors
```

If validation fails → retry ONCE with the error feedback appended to the prompt. If it still fails → fall back to deterministic HSL-based naming (already in `color_utils.py`).
### SENTINEL — Best Practices (ReAct, 2 LLM Calls)
|
| 644 |
-
|
| 645 |
-
**Why ReAct**: Scoring must be GROUNDED in actual data. The model needs to cite specific numbers, not make up scores.
|
| 646 |
-
|
| 647 |
-
**Step 1: Score + Prioritize (Main Call)**
|
| 648 |
-
```
|
| 649 |
-
System: You are SENTINEL. You MUST cite specific data for every score.
|
| 650 |
-
|
| 651 |
-
INPUT DATA (from Rule Engine — these are FACTS, not opinions):
|
| 652 |
-
- AA Pass: 18 of 25 colors (72%)
|
| 653 |
-
- AA Fail: 7 colors (list: #ff0000 3.2:1, #ffdc00 1.8:1, ...)
|
| 654 |
-
- Type Scale Ratio: 1.18 (variance: 0.22)
|
| 655 |
-
- Base Font: 14px
|
| 656 |
-
- Spacing: 8px grid, 85% aligned
|
| 657 |
-
- Shadows: 5 defined, blur progression: 25→30→80→80→90 (non-monotonic)
|
| 658 |
-
- Near-duplicates: 3 pairs
|
| 659 |
-
|
| 660 |
-
TASK (cite data for EVERY check):
|
| 661 |
-
|
| 662 |
-
CHECK 1 - AA Compliance:
|
| 663 |
-
THINK: Rule Engine says 7 of 25 fail. That's 28% failure rate.
|
| 664 |
-
SCORE: "fail" — cite "7 colors fail AA, including brand primary #ff0000 (3.2:1)"
|
| 665 |
-
|
| 666 |
-
CHECK 2 - Type Scale:
|
| 667 |
-
THINK: Ratio 1.18 is not standard (nearest: 1.2 Minor Third). Variance 0.22 > 0.15.
|
| 668 |
-
SCORE: "warn" — cite "1.18 is close to Minor Third but inconsistent (variance 0.22)"
|
| 669 |
-
|
| 670 |
-
... (continue for all 8 checks)
|
| 671 |
-
|
| 672 |
-
THEN calculate overall_score using the weighting:
|
| 673 |
-
AA: 25pts × (pass%/100) = 25 × 0.72 = 18
|
| 674 |
-
Type Scale Consistent: ...
|
| 675 |
-
... total = sum
|
| 676 |
-
|
| 677 |
-
Output JSON with checks, overall_score, priority_fixes.
|
| 678 |
-
```

**Step 2: Cross-Reference Critic (Rule-Based)**

```python
def validate_sentinel_output(output: dict, rule_engine: RuleEngineResults) -> tuple[bool, list[str]]:
    errors = []
    checks = output.get("checks", {})

    # If the rule engine found AA failures, SENTINEL MUST mark aa_compliance as fail/warn
    aa_failures = len([a for a in rule_engine.accessibility if not a.passes_aa_normal])
    if aa_failures > 0 and checks.get("aa_compliance", {}).get("status") == "pass":
        errors.append(f"Sentinel says AA passes but rule engine found {aa_failures} failures")

    # Score must be 0-100
    score = output.get("overall_score", -1)
    if not (0 <= score <= 100):
        errors.append(f"Score {score} out of range")

    # If many checks fail, the score can't be high
    fail_count = sum(1 for c in checks.values() if isinstance(c, dict) and c.get("status") == "fail")
    if fail_count >= 3 and score > 70:
        errors.append(f"Score {score} too high with {fail_count} failures")

    return len(errors) == 0, errors
```

### ATLAS — Benchmark Advisor (Single Call, No ReAct Needed)

**Why single call**: This agent receives well-structured benchmark comparison data and just needs to pick the best fit. The reasoning is a straightforward comparison.

Keep the current implementation but improve the prompt to:
1. Explicitly output the top 3 benchmarks, ranked
2. Include specific numeric diffs for each
3. Cap alignment changes at 4

### NEXUS — HEAD Synthesizer (ToT: 2 Branches)

**Why Tree of Thought**: The synthesizer needs to weigh competing priorities. Should it emphasize accessibility (SENTINEL's input) or brand fidelity (AURORA's input)? ToT lets it explore both and pick the best.

**Branch 1: Accessibility-First Scoring**

```
Weight accessibility at 40%, consistency at 30%, organization at 30%.
If SENTINEL found 7 AA failures → accessibility score tanks → overall score lower.
Result: overall ~55
```

**Branch 2: Balanced Scoring**

```
Weight accessibility at 30%, consistency at 35%, organization at 35%.
Same data, but organization counts more.
Result: overall ~65
```

**Selection**: Pick the branch that:
1. Doesn't contradict any agent's hard failures (if SENTINEL says AA fails, the score CAN'T say accessibility is "good")
2. Produces actionable top-3 actions (not generic)
3. Has color recommendations with specific hex values
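
The two branches differ only in their weights, so the arithmetic behind the divergent overall scores is a one-line weighted sum. A quick sketch with illustrative sub-scores (the real numbers come from the agents):

```python
# Weighted overall score for the two ToT branches; sub-scores are illustrative.
def overall(weights: dict[str, float], scores: dict[str, float]) -> float:
    return sum(weights[k] * scores[k] for k in weights)

scores = {"accessibility": 30, "consistency": 70, "organization": 80}
branch1 = overall({"accessibility": 0.4, "consistency": 0.3, "organization": 0.3}, scores)
branch2 = overall({"accessibility": 0.3, "consistency": 0.35, "organization": 0.35}, scores)
```

With a poor accessibility sub-score, the accessibility-first branch lands lower, which is the behavior the selection rule then has to sanity-check against SENTINEL's hard failures.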

**Implementation**: This can be done as a SINGLE LLM call with an explicit instruction:

```
TASK: You will synthesize from two perspectives.

PERSPECTIVE A (Accessibility-First): Weight AA compliance heavily.
Calculate scores with accessibility=40%, consistency=30%, org=30%.

PERSPECTIVE B (Balanced): Equal weights.
Calculate scores with accessibility=33%, consistency=33%, org=33%.

THEN: Compare both perspectives. Choose the one that:
1. Better reflects the ACTUAL data (don't ignore failures)
2. Produces the most actionable top-3 list
3. Is internally consistent

Output your CHOSEN perspective's scores + explain WHY you chose it.
```

### Model Selection (Final Decision)

After reviewing all agents' needs:

| Agent | Model | Reasoning |
|-------|-------|-----------|
| AURORA | `Qwen/Qwen2.5-72B-Instruct` | Best at structured JSON, good reasoning |
| ATLAS | `meta-llama/Llama-3.3-70B-Instruct` | 128K context for benchmark data |
| SENTINEL | `Qwen/Qwen2.5-72B-Instruct` | Methodical, follows rubrics well |
| NEXUS | `meta-llama/Llama-3.3-70B-Instruct` | Good synthesis, large context |

**Keep the current models** — the problem isn't the models, it's the prompting strategy (single-shot vs ReAct) and the lack of validation.

### Cost Budget Per Extraction

| Step | LLM Calls | Est. Tokens | Est. Cost |
|------|-----------|-------------|-----------|
| AURORA main | 1 | ~2K in, ~1K out | $0.001 |
| AURORA retry (10% of the time) | 0.1 | ~2K in, ~1K out | $0.0001 |
| ATLAS | 1 | ~1.5K in, ~0.8K out | $0.001 |
| SENTINEL main | 1 | ~2K in, ~1K out | $0.001 |
| SENTINEL retry (10% of the time) | 0.1 | ~2K in, ~1K out | $0.0001 |
| NEXUS | 1 | ~3K in, ~1.2K out | $0.002 |
| **Total** | **~4.2** | **~14K** | **~$0.005** |

Well within the HF free tier ($0.10/mo).

---

## IMPLEMENTATION PLAN

### Step 1: Consolidate Stage 2 into ONE system

- Keep `llm_agents.py` as the agent definitions (AURORA, SENTINEL, NEXUS)
- Use `stage2_graph.py` for orchestration (parallel AURORA+ATLAS+SENTINEL, then NEXUS)
- Delete the duplicate generic LLM1/LLM2 analyst nodes
- Single entry point: `run_stage2_analysis()`

### Step 2: Add Pre-Processing Layer

- Before any LLM call, run deterministic cleanup:
  - Unify ALL color names to numeric shades (50-900)
  - Flatten and deduplicate radius values
  - Sort shadows by blur radius
  - Build structured data packets for each agent
|
| 798 |
-
|
| 799 |
-
### Step 3: Rewrite AURORA with ReAct Prompt
|
| 800 |
-
- New prompt: Think → Identify brand → Name ALL colors → Self-verify
|
| 801 |
-
- Add `validate_aurora_output()` rule-based critic
|
| 802 |
-
- Retry once on validation failure
|
| 803 |
-
- Fallback to `_generate_color_name_from_hex()` if LLM fails
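
The critic-plus-retry loop described in this step can be sketched as follows. This is a minimal sketch, not the shipped `llm_agents.py` code: the `call_llm` callable, the naming-map shape, and the `color.{family}.{shade}` regex are illustrative assumptions.

```python
import re

# Assumed output convention: every color is named color.{family}.{shade}
COLOR_NAME_RE = re.compile(r"^color\.[a-z]+\.(50|[1-9]00)$")

def validate_aurora_output(naming_map, extracted_hexes):
    """Rule-based critic: return a list of problems (empty list = valid)."""
    problems = []
    for hex_value, name in naming_map.items():
        if hex_value not in extracted_hexes:
            problems.append(f"hallucinated color {hex_value}")
        if not COLOR_NAME_RE.match(name):
            problems.append(f"bad name {name!r} for {hex_value}")
    unnamed = extracted_hexes - set(naming_map)
    if unnamed:
        problems.append(f"{len(unnamed)} colors left unnamed")
    return problems

def run_aurora_with_retry(call_llm, extracted_hexes, max_retries=1):
    """Call the agent, validate, retry once with the critic's feedback."""
    feedback = ""
    for _ in range(max_retries + 1):
        naming_map = call_llm(feedback)
        problems = validate_aurora_output(naming_map, extracted_hexes)
        if not problems:
            return naming_map
        feedback = "Fix these issues: " + "; ".join(problems)
    return None  # caller falls back to rule-based naming

```

The key design point is that the critic is deterministic: a failed validation costs one extra LLM call at most, and a second failure drops to the rule-based fallback rather than looping.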

### Step 4: Rewrite SENTINEL with Grounded Scoring
- New prompt: Must cite rule-engine data for every check
- Add `validate_sentinel_output()` cross-reference critic
- Ensure scores match actual data (no inflated pass when data says fail)

### Step 5: Rewrite NEXUS with ToT
- Two-perspective evaluation in single prompt
- Must choose a perspective and explain why
- Post-validation: scores internally consistent, actions are specific

### Step 6: Add Post-Validation Layer
- After all agents complete, run deterministic checks:
  - All color names follow `color.{family}.{shade}` pattern
  - All scores are in valid ranges
  - No contradictions between agents
  - All required fields present
- If post-validation fails, apply rule-based fixes (not another LLM call)
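
A minimal sketch of what such a post-validation pass could look like. The field names (`overall_score`, `naming_map`) and the fix strategies are illustrative assumptions, not the actual `post_validate_stage2()` implementation:

```python
import re

NAME_RE = re.compile(r"^color\.[a-z]+\.(50|[1-9]00)$")

def post_validate_stage2(result):
    """Deterministic checks with rule-based fixes; no extra LLM call."""
    fixed = dict(result)
    # Clamp any score into the valid 0-100 range instead of re-prompting
    for key in ("accessibility_score", "consistency_score", "overall_score"):
        if key in fixed:
            fixed[key] = max(0, min(100, fixed[key]))
    # Drop color names that violate the color.{family}.{shade} pattern;
    # the export layer then falls back to the normalizer's name
    fixed["naming_map"] = {
        hex_value: name
        for hex_value, name in fixed.get("naming_map", {}).items()
        if NAME_RE.match(name)
    }
    return fixed
```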

### Step 7: Fix Normalizer (Stage 1)
- Unify `_generate_color_name_from_value()` to use numeric shades only
- Add radius normalization (flatten, single-value, deduplicate)
- Handle multi-value radius (`"0px 0px 16px 16px"` → individual values or skip)

### Step 8: Fix Export Layer
- Validation before JSON export
- Ensure DTCG format (`$type`, `$value`)
- Flat radius (never nested tokens inside tokens)
- Consistent units (all px for dimensions)
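
To illustrate the target shape, here is a hedged sketch of a DTCG-style export helper. The nesting and validation logic are assumptions based on the DTCG `$type`/`$value` convention, not the project's actual export code:

```python
def to_dtcg_color(name, hex_value):
    """Wrap a dotted token name in DTCG shape: nested groups, $type/$value leaf."""
    node = {}
    leaf = node
    parts = name.split(".")
    for part in parts[:-1]:
        leaf = leaf.setdefault(part, {})
    leaf[parts[-1]] = {"$type": "color", "$value": hex_value}
    return node

def validate_dtcg(node):
    """A token leaf must carry BOTH $type and $value; groups recurse."""
    if "$value" in node or "$type" in node:
        return "$value" in node and "$type" in node
    return all(isinstance(v, dict) and validate_dtcg(v) for v in node.values())
```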

---

## STAGE 1 AUDIT: WHAT IS VALID vs WHAT NEEDS RETHINKING

Stage 1 feeds Stage 2 — if Stage 1 produces garbage, no amount of agentic reasoning in Stage 2 can fix it. Let's audit every rule-based component honestly.

### OVERALL VERDICT: Stage 1 is ~60% correct, 40% broken/missing

The extraction (Playwright CSS scraping) is solid. The normalizer and rule engine have real problems that corrupt data BEFORE any LLM ever sees it.

---

### Component 1: Extractor (`agents/extractor.py`) — ✅ MOSTLY VALID

**What it does**: Playwright visits pages, extracts computed CSS styles for every element.
**What it produces**: `ExtractedTokens` — lists of `ColorToken`, `TypographyToken`, `SpacingToken`, `RadiusToken`, `ShadowToken`.

**What's working**:
- Color extraction: Gets hex values, usage frequency, CSS property context (background-color, color, border-color), element types (button, h1, p). This is exactly what Stage 2 needs.
- Typography extraction: Gets font-family, font-size, font-weight, line-height, element context. Solid.
- Spacing extraction: Gets margin/padding/gap values with px conversion. Solid.

**What's broken**:
- **Font family**: Returns `"sans-serif"` (the computed fallback) instead of `"Inter"` (the actual font). This is a browser behavior issue — `getComputedStyle()` resolves the font stack to the generic family. **Fix needed**: Use `document.fonts.check()` or extract from CSS `font-family` declarations before resolution.
- **Radius**: Extracts raw CSS values including multi-value shorthand like `"0px 0px 16px 16px"` and percentage values like `"50%"`. The RadiusToken has `value: str` and `value_px: Optional[int]`, but the extractor doesn't parse multi-value or percentage. **Fix needed**: Parse in extractor or normalizer.
- **Shadows**: Extracts the full CSS shadow string, but parsing it into components (offset_x, offset_y, blur, spread, color) is unreliable. Some shadows have `None` for all parsed fields. **Fix needed**: Better CSS shadow parser.

**Verdict**: Extraction is the least broken part. Font family is the biggest issue, but it's a well-known Playwright limitation with known workarounds.
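
One possible workaround sketch: have the page-side script return the author's *declared* `font-family` stack (e.g. via CSS rule inspection in `page.evaluate`) and then resolve the first concrete family in Python. The helper below is hypothetical, not the extractor's code:

```python
def first_concrete_family(font_stack):
    """Given a declared CSS font-family stack (collected in the page
    before the browser resolves it), return the first non-generic family,
    e.g. "Inter" rather than "sans-serif"."""
    GENERICS = {"serif", "sans-serif", "monospace", "cursive",
                "fantasy", "system-ui", "ui-sans-serif", "ui-serif"}
    for part in font_stack.split(","):
        family = part.strip().strip("'\"")
        if family and family.lower() not in GENERICS:
            return family
    return None  # the stack contained only generic families
```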

---

### Component 2: Normalizer (`agents/normalizer.py`) — ❌ NEEDS MAJOR RETHINK

**What it does**: Takes raw `ExtractedTokens` lists → deduplicates → names → outputs `NormalizedTokens` dicts.

**What's working**:
- Color deduplication by exact hex: Correct. Merges frequency/contexts.
- Similar color merging (RGB Euclidean distance < 10): Reasonable threshold, works.
- Typography dedup by unique `family|size|weight|lineHeight`: Correct.
- Spacing dedup and base-8 alignment preference: Correct.
- Confidence scoring by frequency (10+=high, 3-9=medium, 1-2=low): Reasonable.

**What's BROKEN**:

#### Problem 2A: Color Naming — TWO COMPETING FUNCTIONS

```
_generate_color_name(color, role) → line 236-256
  Input: color + inferred role (from CSS context keywords)
  Output: "color.{role}.{shade}" where shade = 50/200/500/700/900
  Uses: NUMERIC shades based on luminance buckets ✅

_generate_color_name_from_value(color) → line 258-275
  Input: color (no role found)
  Output: "color.{category}.{shade}" where shade = light/base/dark
  Uses: WORD shades ❌ ← THIS IS THE ROOT OF THE NAMING PROBLEM
```

**The irony**: the first function (with role) already uses numeric shades. But only colors where `_infer_color_role()` finds a keyword match get numeric names. All other colors fall through to the word-based function.

**`_infer_color_role()` (line 220-234)**: Searches color.contexts + color.elements for keywords like "primary", "button", "background". **Problem**: Most extracted colors don't have semantic class names — they come from computed styles on generic elements. A `<div>` with `background-color: #005aa3` has no "primary" keyword anywhere. So MOST colors fall through to word-based naming.

**How often does role inference work?** Rough estimate:
- Sites with BEM/utility classes (Tailwind, Bootstrap): ~40% of colors get roles
- Sites with generic/minified classes: ~5-10% of colors get roles
- The rest get word-based names → mixed-convention chaos

**Fix needed**: Remove `_generate_color_name_from_value()` entirely. Make `_generate_color_name()` the only path, and if no role is inferred, use hue-family + numeric shade (which `_generate_color_name_from_hex()` in app.py already does correctly).

#### Problem 2B: Radius — NO PROCESSING AT ALL

```python
# Line 93-97: Just stores raw values
radius_dict = {}
for r in extracted.radius:
    key = f"radius-{r.value}"  # Raw CSS value as dict key!
    radius_dict[key] = r
```

**What this produces**:
- `"radius-8px"` → OK
- `"radius-0px 0px 16px 16px"` → garbage key, multi-value
- `"radius-50%"` → percentage, Figma can't use it
- `"radius-16px"` AND `"radius-1rem"` → duplicates (both = 16px)

**What's missing**:
1. No value parsing (multi-value → skip or take max)
2. No unit normalization (%, rem, em → px)
3. No deduplication by resolved px value
4. No semantic naming (none/sm/md/lg/xl/full)
5. No sorting by size
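
All five missing steps are deterministic; a compact sketch (the tier boundaries and root font size are illustrative assumptions, and two values landing in the same tier keep the last one):

```python
import re

def parse_radius_px(value, root_px=16.0):
    """Resolve a single-value CSS border-radius to px; None means skip."""
    value = value.strip()
    if " " in value:          # multi-value shorthand like "0px 0px 16px 16px"
        return None           # (alternative policy: take the max corner)
    if value.endswith("%"):   # percentage radii behave like pill/circle
        return 9999
    m = re.fullmatch(r"([\d.]+)(px|rem|em)?", value)
    if not m:
        return None
    n = float(m.group(1))
    # rem/em are approximated against the root font size
    return round(n * root_px) if m.group(2) in ("rem", "em") else round(n)

def name_radius(px):
    """Size tiers: 0=none, 1-3=sm, 4-8=md, 9-16=lg, 17-24=xl, 25+=2xl, 9999=full."""
    if px >= 9999: return "radius.full"
    if px == 0: return "radius.none"
    if px <= 3: return "radius.sm"
    if px <= 8: return "radius.md"
    if px <= 16: return "radius.lg"
    if px <= 24: return "radius.xl"
    return "radius.2xl"

def normalize_radii(raw_values):
    """Parse → deduplicate by resolved px → sort → name."""
    px_values = sorted({p for v in raw_values
                        if (p := parse_radius_px(v)) is not None})
    return {name_radius(px): f"{px}px" for px in px_values}
```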

#### Problem 2C: Shadows — NO PROCESSING AT ALL

```python
# Line 99-102: Hash-based key, no analysis
shadows_dict = {}
for s in extracted.shadows:
    key = f"shadow-{hash(s.value) % 1000}"  # Meaningless key!
    shadows_dict[key] = s
```

**What's missing**:
1. No deduplication by visual similarity
2. No sorting by elevation (blur radius)
3. No semantic naming (xs/sm/md/lg/xl)
4. No validation of shadow progression (blur should increase with elevation level)
5. No filtering of garbage shadows (blur=0, identical to another, etc.)
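
Items 2 and 3 are again pure parsing and sorting; a rough sketch (the regex covers only the common `x y blur spread` shape and ignores multi-shadow values, which is a simplification):

```python
import re

SHADOW_RE = re.compile(
    r"(-?[\d.]+)px\s+(-?[\d.]+)px(?:\s+(-?[\d.]+)px)?(?:\s+(-?[\d.]+)px)?"
)

def parse_shadow(css):
    """Pull (x, y, blur, spread) out of one box-shadow value; color kept raw."""
    m = SHADOW_RE.search(css)
    if not m:
        return None
    x, y, blur, spread = (float(g) if g is not None else 0.0
                          for g in m.groups())
    return {"x": x, "y": y, "blur": blur, "spread": spread, "raw": css}

def normalize_shadows(raw_shadows):
    """Parse → drop unparseable → sort by blur (elevation) → name xs..xl."""
    parsed = [s for s in (parse_shadow(c) for c in raw_shadows) if s]
    parsed.sort(key=lambda s: s["blur"])
    tiers = ["xs", "sm", "md", "lg", "xl"]
    return {f"shadow.{tiers[min(i, 4)]}": s for i, s in enumerate(parsed)}
```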

#### Problem 2D: Typography Naming — COLLISION RISK

```python
# Line 310-339: Size-tier names can collide
"font.{category}.{size_tier}"
# Two different h2 styles (24px/700 and 24px/400) both become "font.heading.lg"
```

The dedup key at line 86 is `suggested_name or f"{font_family}-{font_size}"`, so if two styles get the SAME suggested name, the second silently overwrites the first.
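
A minimal sketch of the proposed fix: a weight suffix plus a collision-aware counter. `typography_name` is a hypothetical helper, not the normalizer's actual function:

```python
def typography_name(category, size_tier, weight, seen):
    """Append the weight so 24px/700 and 24px/400 stop colliding;
    a numeric suffix covers any remaining duplicates instead of
    silently overwriting an earlier style."""
    name = f"font.{category}.{size_tier}.{weight}"
    if name not in seen:
        seen.add(name)
        return name
    i = 2
    while f"{name}.{i}" in seen:
        i += 1
    final = f"{name}.{i}"
    seen.add(final)
    return final
```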

---

### Component 3: Rule Engine (`core/rule_engine.py`) — ✅ MOSTLY VALID

**What it does**: Deterministic analysis — type scale ratios, WCAG contrast, spacing grid detection, color statistics.

**What's working**:
- **Type scale analysis**: Detects the ratio between consecutive font sizes, identifies the closest standard scale, measures consistency (variance). Correctly filters sizes < 10px. ✅
- **WCAG contrast checking**: Correct `get_relative_luminance()` per the WCAG 2.1 spec. Correct 4.5:1 threshold for AA normal text, 3.0:1 for large text. ✅
- **AA fix suggestions**: `find_aa_compliant_color()` iterates darken/lighten in 1% steps until 4.5:1 is reached. Brute-force but correct. ✅
- **Spacing grid detection**: GCD-based base detection, alignment % calculation. Correct. ✅
- **Color statistics**: Near-duplicate detection, hue distribution, gray/saturated counts. Correct. ✅
- **Consistency score**: Weighted formula combining all checks. Reasonable. ✅

**What's broken/questionable**:

#### Problem 3A: Accessibility Only Tests Against White/Black

```python
# Line 545-550
contrast_white = get_contrast_ratio(hex_color, "#ffffff")
contrast_black = get_contrast_ratio(hex_color, "#000000")
passes_aa_normal = contrast_white >= 4.5 or contrast_black >= 4.5
```

This tests every color against pure white AND pure black. If it passes against EITHER, it's marked as passing. But:
- A brand blue (#005aa3) that passes on white (7.2:1) might be used on a dark navy background (#1a1a2e), where it fails (1.8:1)
- A light gray (#cccccc) passes on black but is used as text on white (#ffffff), where it fails (1.6:1)

The `fg_bg_pairs` logic (line 577-610) partially addresses this — it checks actual foreground-background combinations from the DOM. **But**: it only adds FAILURES to the results; it doesn't correct the per-color assessment above. So a color can show as "passes AA" in the per-color check but "fails AA" in the pair check. **Contradictory data sent to SENTINEL.**

**Fix needed**: Two modes — (1) per-color against white/black for a palette overview, (2) per-pair for the actual accessibility score. SENTINEL should see BOTH, clearly labeled.
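
The two modes can be expressed directly in the WCAG 2.1 math. The luminance and contrast formulas below follow the spec; the two mode functions themselves are illustrative, not the rule engine's code:

```python
def relative_luminance(hex_color):
    """WCAG 2.1 relative luminance of an sRGB color."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    def lin(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def palette_overview(hex_color):
    """Mode 1: does the color work on at least one of white/black?"""
    return (contrast_ratio(hex_color, "#ffffff") >= 4.5
            or contrast_ratio(hex_color, "#000000") >= 4.5)

def pair_passes_aa(fg, bg, large_text=False):
    """Mode 2: the actual accessibility check, run on real fg/bg pairs."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```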

#### Problem 3B: No Radius Analysis

The rule engine receives `radius_tokens` (line 1034) but does NOTHING with them. No grid alignment check, no progression validation, no statistics. It's just passed through.

#### Problem 3C: Shadow Analysis Is Minimal

The rule engine receives `shadow_tokens` but only passes them to SENTINEL's prompt as raw strings. No programmatic analysis of:
- Blur progression (should increase with elevation)
- Y-offset progression (should increase with elevation)
- Color consistency (should all use same base color/alpha)
- Whether shadows form a coherent elevation system

This means SENTINEL gets raw shadow CSS strings and has to evaluate them purely from text — no pre-computed metrics to ground its scoring.
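
The missing metrics are trivial to pre-compute. A sketch of the kind of grounded data packet SENTINEL could cite (the field names are assumptions):

```python
def shadow_metrics(parsed_shadows):
    """Pre-computed facts SENTINEL can cite instead of eyeballing CSS strings.
    Expects shadows already parsed into dicts with 'blur' and 'y' keys,
    ordered from lowest to highest elevation tier."""
    blurs = [s["blur"] for s in parsed_shadows]
    ys = [s["y"] for s in parsed_shadows]
    return {
        "count": len(parsed_shadows),
        # Blur should strictly increase from xs up to xl
        "blur_monotonic": all(a < b for a, b in zip(blurs, blurs[1:])),
        # Y-offset should not decrease with elevation
        "y_offset_monotonic": all(a <= b for a, b in zip(ys, ys[1:])),
    }
```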

---

### Component 4: Semantic Analyzer (`agents/semantic_analyzer.py`) — ⚠️ USEFUL BUT UNDERTRUSTED

**What it does**: Rule-based categorization of colors by CSS property usage. If a color is used in `background-color` on buttons → it's likely brand primary. If used in the `color` property on `<p>` → it's likely a text color.

**What's working**: The logic is sound — CSS property + element type is a strong signal for color role. This is actually one of the best parts of Stage 1.

**What's broken**: AURORA receives this as the `semantic_analysis` parameter, but it is treated as a secondary input rather than the primary signal. AURORA's prompt says "Suggest Semantic Names for top 10 most-used colors" — it ignores the semantic analysis for the OTHER 20 colors. The semantic analyzer's work is wasted for most colors.

---

### Component 5: Color Utils (`core/color_utils.py`) — ✅ VALID

**What it does**: Hex/RGB/HSL parsing, contrast calculation, color categorization by hue, color ramp generation.

**What's working**: All the pure color math is correct. `categorize_color()` returns the right hue family. `generate_color_ramp()` produces reasonable 50-900 shade ramps using OKLCH.

**No issues found.** This is the most solid component.

---

### Component 6: Export Layer (`app.py` export functions) — ❌ NEEDS RETHINK

Already documented above in the AS-IS flow. The 3-way naming merge is the killer.

---

## WHAT STAGE 1 SHOULD ACTUALLY PRODUCE (for Stage 2 to work)

### Current: What Stage 2 receives
```
NormalizedTokens:
  colors: {
    "color.blue.light": ColorToken(value="#7fdbff", freq=5, contexts=["background"]),
    "color.blue.dark": ColorToken(value="#2c3e50", freq=12, contexts=["text", "button"]),
    "color.blue.base": ColorToken(value="#005aa3", freq=47, contexts=["button", "link"]),
    "color.neutral.dark": ColorToken(value="#333333", freq=89, contexts=["text"]),
    // ← word-based shades, no consistent convention
  }
  radius: {
    "radius-8px": RadiusToken(value="8px"),
    "radius-0px 0px 16px 16px": RadiusToken(value="0px 0px 16px 16px"),  // ← garbage
    "radius-50%": RadiusToken(value="50%"),  // ← Figma can't use
  }
  shadows: {
    "shadow-234": ShadowToken(value="0px 4px 25px rgba(0,0,0,0.1)"),  // ← meaningless key
    "shadow-891": ShadowToken(value="0px 2px 30px rgba(0,0,0,0.15)"),  // ← unsorted
  }
```

### Target: What Stage 2 SHOULD receive
```
NormalizedTokens:
  colors: {
    "color.blue.300": ColorToken(value="#7fdbff", freq=5, contexts=["background"],
                                 role="palette", hue="blue", shade=300),
    "color.blue.800": ColorToken(value="#2c3e50", freq=12, contexts=["text", "button"],
                                 role="palette", hue="blue", shade=800),
    "color.blue.500": ColorToken(value="#005aa3", freq=47, contexts=["button", "link"],
                                 role="brand_candidate", hue="blue", shade=500),
    "color.neutral.700": ColorToken(value="#333333", freq=89, contexts=["text"],
                                    role="text_candidate", hue="neutral", shade=700),
    // ← ALL numeric shades, with role hints for AURORA
  }
  radius: {
    "radius.sm": RadiusToken(value="4px", value_px=4),
    "radius.md": RadiusToken(value="8px", value_px=8),
    "radius.xl": RadiusToken(value="16px", value_px=16),
    "radius.full": RadiusToken(value="9999px", value_px=9999),
    // ← flat, single-value, deduped, sorted, named
  }
  shadows: {
    "shadow.xs": ShadowToken(value="...", blur_px=4, y_offset_px=2),
    "shadow.sm": ShadowToken(value="...", blur_px=8, y_offset_px=4),
    "shadow.md": ShadowToken(value="...", blur_px=16, y_offset_px=8),
    // ← sorted by elevation, named progressively
  }
```

### What changes are needed in Stage 1:

| Component | Current State | What's Wrong | Fix |
|-----------|--------------|-------------|-----|
| **Normalizer: color naming** | Two functions, word vs numeric | Mixed conventions | Remove word-based function, use numeric for ALL |
| **Normalizer: color role hints** | Keyword-based inference (5-40% hit rate) | Most colors get no role | Add `role_hint` field: "brand_candidate", "text_candidate", "bg_candidate" based on CSS property (from semantic analyzer) |
| **Normalizer: radius** | Raw values stored, no processing | Multi-value, %, no dedup | Parse → single px value → deduplicate → sort → name (none/sm/md/lg/xl/full) |
| **Normalizer: shadows** | Hash-based keys, no processing | Unsorted, unnamed, no metrics | Parse components → sort by blur → deduplicate → name (xs/sm/md/lg/xl) |
| **Normalizer: typography** | Collision-prone naming | Same name for different styles | Add weight suffix: `font.heading.lg.700` vs `font.heading.lg.400` |
| **Rule engine: accessibility** | Tests against white/black only | Doesn't match real usage | Add separate per-pair analysis, label both modes clearly |
| **Rule engine: radius** | Not analyzed | No grid check, no stats | Add radius grid analysis (base-4/base-8), dedup stats |
| **Rule engine: shadows** | Not analyzed | No progression check | Add shadow elevation analysis (blur/offset progression) |
| **Extractor: font family** | Returns fallback generic | Browser resolves to "sans-serif" | Extract from CSS declaration before computed resolution |

---

## EXECUTION STATUS (Updated Feb 2026)

### Phases 1-3: COMPLETED

```
PHASE 1: FIX NORMALIZER ✅ DONE
1a. ✅ Unify color naming → numeric shades only (_generate_preliminary_name)
1b. ✅ Add radius normalization (parse, deduplicate, sort, name) — normalizer.py:626-778
1c. ✅ Add shadow normalization (parse, sort by blur, name) — normalizer.py:784-940
1d. ✅ Feed role hints into normalizer — normalizer._infer_role_hint()

PHASE 2: FIX STAGE 2 ✅ DONE
2a. ✅ Consolidated — llm_agents.py is primary, stage2_graph.py deprecated
2b. ✅ AURORA with ReAct + critic + retry — llm_agents.py:420-470
2c. ✅ SENTINEL with grounded scoring + cross-reference critic
2d. ✅ NEXUS with ToT (two-perspective evaluation)
2e. ✅ Post-validation layer — post_validate_stage2()

PHASE 3: FIX EXPORT ✅ DONE (v3.2)
3a. ✅ Color classifier = PRIMARY authority, AURORA = semantic roles only
3b. ✅ Radius/shadow export uses normalizer output directly
3c. ✅ W3C DTCG v1 compliance with $extensions metadata
3d. ✅ filter_aurora_naming_map() enforces role-only boundary

PHASE 4: EXTRACTION IMPROVEMENTS (NOT STARTED)
4a. ❌ Font family detection — still returns "sans-serif" fallback
4b. ❌ Rule engine: radius grid analysis
4c. ❌ Rule engine: shadow elevation analysis
```

### PHASE 5: COMPONENT GENERATION (NEXT — RESEARCH COMPLETE)

**Full context**: See `PART2_COMPONENT_GENERATION.md` for detailed research, API checks, and architecture.

**Research finding (Feb 2026)**: 30+ tools evaluated. No production tool takes DTCG JSON -> Figma Components. This is a genuine market gap.

**Decision**: Custom Figma Plugin (Option A) — extend the existing `code.js` with component generation.

```
PHASE 5: FIGMA COMPONENT GENERATION
5a. Component Definition Schema (JSON defining anatomy + token bindings + variants)
5b. Token-to-Component binding engine (resolveTokenValue, bindTokenToVariable)
5c. Variable Collection builder (primitives, semantic, spacing, radius, shadow, typography)
5d. MVP Components:
    - Button: 4 variants x 3 sizes x 5 states = 60 variants (2-3 days)
    - TextInput: 4 states x 2 sizes = 8 variants (1-2 days)
    - Card: 2 configurations (1 day)
    - Toast: 4 types success/error/warn/info (1 day)
    - Checkbox+Radio: ~12 variants (1-2 days)
5e. Post-MVP: Toggle (4), Select (multi-state), Modal (3 sizes), Table (template)

Estimated: ~1400 lines new plugin code, 8-12 days total
```

**Figma Plugin API confirmed**: createComponent(), combineAsVariants(), setBoundVariable(), setBoundVariableForPaint(), addComponentProperty(), setReactionsAsync() — ALL supported.

```
PHASE 6: ECOSYSTEM INTEGRATION
6a. Style Dictionary v4 compatible output (50+ platform formats for free)
6b. Tokens Studio compatible JSON import
6c. Dembrandt JSON as alternative input source
6d. CI/CD GitHub Action for design system regression checks

PHASE 7: MCP INTEGRATION
7a. Expose extractor as MCP tool server
7b. Claude Desktop: "Extract design system from example.com"
7c. Community Figma MCP bridge for push-to-Figma
```

### Strategic Positioning

**"Lighthouse for Design Systems"** — We are NOT a token management platform (Tokens Studio), NOT a documentation platform (Zeroheight), NOT an extraction tool (Dembrandt). We are the **automated audit + bootstrap tool** that sits upstream of all of those.

**With Phase 5**: We become the ONLY tool that goes from URL -> complete Figma design system WITH components. Fully automated. Nobody else does this end-to-end.

**Unique differentiators no competitor has:**
- Type scale ratio detection + standard scale matching
- Spacing grid detection (GCD-based, base-8 alignment scoring)
- LLM brand identification from CSS usage patterns
- Holistic design system quality score (0-100)
- Visual spec page auto-generated in Figma
- Benchmark comparison against established design systems
- (Phase 5) Automated component generation from extracted tokens

**Key competitors to watch:**
- Dembrandt (1,300 stars) — does extraction better, but no analysis, no components
- Tokens Studio (1M+ installs) — manages tokens, no extraction, no component generation
- Knapsack ($10M funding) — building an ingestion engine, the biggest strategic threat
- Figr Identity — generates components, but from brand config, not extracted tokens
- html.to.design — captures layouts but not tokens/variables/components
- story.to.design — Storybook->Figma components, but needs a full code pipeline
---

## CRITIC REVIEW: SHOULD EACH COMPONENT STAY RULE-BASED OR USE LLM?

Every rule-based component needs to justify itself. Rules are free and fast, but if they produce garbage that LLMs then have to fix, the "free" part is an illusion — you pay in bad output quality instead.

### Decision Framework

| Use Rules When... | Use LLM When... |
|---|---|
| Math with right answers (contrast ratio) | Judgment with context (is this the brand color?) |
| Deterministic transforms (hex→RGB) | Ambiguous signals (is this a button or just a styled div?) |
| Simple pattern matching (is 16 divisible by 8?) | Weighing competing evidence (high freq but wrong context) |
| Zero tolerance for hallucination (export format) | Understanding intent (why is this color used here?) |
| Must be 100% reproducible | Acceptable to vary slightly between runs |

---

### 1. Color Naming (Normalizer) — ❌ RULES FAILING, NEEDS RETHINK

**Current**: Rule-based. Two functions: keyword-match for role → numeric shade, fallback → word shade.

**Critic's Question**: Can rules correctly name 30 colors with just CSS property + element context?

**Honest Answer**: No. Here's why:

The normalizer's `_infer_color_role()` searches for keywords like "primary", "button", "background" in the element/context strings. But:

```
Extracted color: #005aa3, freq=47
  css_properties: ["background-color"]
  elements: ["div", "a"]
  contexts: ["background"]
```

No keyword "primary" or "button" anywhere. Rules classify this as "unknown role" → it falls through to word-based naming → `color.blue.base`. But this is CLEARLY the brand primary (used 47 times on links and divs with background-color).

An LLM can reason: "47 uses on `<a>` elements with `background-color` = this is a CTA color = brand primary." Rules can't make that inference.

**But**: Using an LLM to name 30 colors costs ~$0.001 and adds 2-3 seconds. For something that happens once per extraction, that's acceptable.

**Verdict**:
- **Keep rules for**: Hue family detection (HSL math), shade number assignment (luminance → 50-900), deduplication (exact hex + RGB distance)
- **Move to LLM (AURORA)**: Semantic role assignment (brand.primary vs text.secondary vs background.primary). This is already AURORA's job — but currently AURORA only does it for 10 colors. Expand AURORA to name ALL colors.
- **ELIMINATE from normalizer**: The `_generate_color_name_from_value()` and `_infer_color_role()` functions. Replace them with a simpler `_generate_preliminary_name()` that just uses hue + numeric shade. Let AURORA do the semantic naming.

**New flow**:
```
Normalizer: "color.blue.500" (hue + shade, no role)
    ↓
AURORA: "color.brand.primary" (semantic role from context reasoning)
    ↓
Export: Uses AURORA name, falls back to normalizer name
```
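
What `_generate_preliminary_name()` could look like. This is a sketch only: the hue-family boundaries and the lightness-to-shade bucketing below are illustrative choices, not the shipped implementation.

```python
import colorsys

# Illustrative hue-family boundaries (upper bound in degrees → family name)
FAMILIES = [(15, "red"), (45, "orange"), (70, "yellow"), (160, "green"),
            (200, "cyan"), (260, "blue"), (290, "purple"), (330, "pink"),
            (360, "red")]

def preliminary_name(hex_color):
    """Hue family + numeric shade only; no role guessing, AURORA adds roles."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if s < 0.10:                        # near-gray → neutral family
        family = "neutral"
    else:
        hue = h * 360
        family = next(name for limit, name in FAMILIES if hue <= limit)
    bucket = round((1 - l) * 9)         # 0 (lightest) .. 9 (darkest)
    shade = 50 if bucket == 0 else min(900, bucket * 100)
    return f"color.{family}.{shade}"
```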

---

### 2. Radius Processing — ✅ RULES ARE CORRECT APPROACH, JUST MISSING

**Current**: No processing at all (raw values stored).

**Critic's Question**: Does radius naming need LLM intelligence?

**Honest Answer**: No. Radius is pure math:
- Parse CSS value → px number
- Skip multi-value shorthand (or take the max)
- Convert 50% → 9999px (full circle)
- Sort by px value
- Name by size tier: 0=none, 1-3=sm, 4-8=md, 9-16=lg, 17-24=xl, 25+=2xl, 9999=full

No ambiguity, no judgment needed. An LLM would add nothing here.

**Verdict**: Keep rule-based. Just implement the processing that's currently missing.

---

### 3. Shadow Processing — ⚠️ MOSTLY RULES, BUT LLM COULD HELP WITH EDGE CASES

**Current**: No processing at all (hash-based keys).

**Critic's Question**: Can rules correctly name and sort shadows?

**Mostly yes**:
- Parse CSS shadow string → {x, y, blur, spread, color} — regex, no LLM needed
- Sort by blur radius — math
- Name by elevation tier (xs/sm/md/lg/xl) — math
- Detect non-monotonic progression — math

**But**: Some edge cases are hard for rules:
- `0px 0px 0px 4px rgba(0,0,0,0.2)` — is this a shadow or a border simulation? (spread-only, no blur)
- Multiple shadows on the same element — which is the "primary" shadow?
- `inset` shadows — different semantic meaning (inner glow vs elevation)

These edge cases affect maybe 10% of shadows. Rules can handle 90% correctly.

**Verdict**: Keep rule-based for parsing, sorting, naming. Add simple heuristic rules for the edge cases (spread-only → treat as border, inset → separate category). NOT worth an LLM call.

---

### 4. Accessibility Checking (Rule Engine) — ✅ RULES ARE THE ONLY CORRECT APPROACH

**Current**: WCAG contrast math + fix suggestions.

**Critic's Question**: Could an LLM improve accessibility checking?

**Absolutely not.** WCAG is a mathematical standard. 4.5:1 is 4.5:1. An LLM cannot calculate contrast ratios — it would hallucinate them. The rule engine's `get_relative_luminance()` implementation follows the exact WCAG 2.1 spec. This MUST stay rule-based.

**What rules CAN'T do** (and an LLM CAN): Prioritize which failures matter most. "Brand primary fails AA" is more critical than "a decorative border color fails AA." This is judgment → belongs in SENTINEL.

**Verdict**: Keep accessibility math 100% rule-based. Use SENTINEL to prioritize/contextualize the results.

---

### 5. Type Scale Detection (Rule Engine) — ✅ RULES ARE CORRECT

**Current**: Ratio calculation between consecutive font sizes, variance check, standard scale matching.

**Critic's Question**: Could an LLM detect type scales better?

**No.** Type scale detection is pure math: sizes → ratios → average → closest standard. An LLM would be slower and less accurate at arithmetic.
|
| 1309 |
-
|
| 1310 |
-
**What rules CAN'T do**: Recommend which scale to adopt. "Your ratio is 1.18, should you round to 1.2 (Minor Third) or 1.25 (Major Third)?" — this depends on the site's purpose (content-heavy = 1.2, marketing = 1.333). This is judgment → belongs in ATLAS/NEXUS.
|
| 1311 |
-
|
| 1312 |
-
**Verdict**: Keep rule-based. Already working correctly after the 10px filter fix.
|
| 1313 |
-
|
| 1314 |
-
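The whole pipeline fits in a few lines. A sketch, assuming sizes arrive in px and that the 10px filter mentioned above simply drops smaller sizes (the scale table and helper name are illustrative):

```python
import statistics

STANDARD_SCALES = {1.2: "Minor Third", 1.25: "Major Third",
                   1.333: "Perfect Fourth", 1.5: "Perfect Fifth"}

def detect_type_scale(sizes_px):
    """Average the ratios between consecutive font sizes,
    then snap to the closest well-known scale."""
    sizes = sorted(set(s for s in sizes_px if s >= 10))  # drop sub-10px noise
    ratios = [b / a for a, b in zip(sizes, sizes[1:])]
    avg = statistics.mean(ratios)
    closest = min(STANDARD_SCALES, key=lambda r: abs(r - avg))
    return round(avg, 3), STANDARD_SCALES[closest]

print(detect_type_scale([16, 20, 25, 31.25]))  # (1.25, 'Major Third')
```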
---

### 6. Spacing Grid Detection (Rule Engine) — ✅ RULES ARE CORRECT

**Current**: GCD-based detection, alignment percentage, base-4/base-8 check.

**Verdict**: Pure math, working correctly. Keep rule-based.
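A minimal sketch of the GCD approach (hypothetical helper, assuming integer px values):

```python
from math import gcd
from functools import reduce

def detect_spacing_grid(values_px):
    """GCD-based base-unit detection plus alignment percentage
    against base-4 and base-8 grids."""
    ints = [int(v) for v in values_px if v > 0]
    base = reduce(gcd, ints)
    align = lambda b: 100 * sum(v % b == 0 for v in ints) / len(ints)
    return {"base": base, "base4_pct": align(4), "base8_pct": align(8)}

print(detect_spacing_grid([4, 8, 12, 16, 24]))
# {'base': 4, 'base4_pct': 100.0, 'base8_pct': 60.0}
```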
---

### 7. Semantic Color Analysis (`semantic_analyzer.py`) — ⚠️ OVERLAPS WITH AURORA, CONSOLIDATE

**Current**: Rule-based fallback + optional LLM call. Categorizes colors into brand/text/background/border/feedback.

**Critic's Question**: This does THE SAME JOB as AURORA. Why do we have both?

**The overlap**:
- Semantic Analyzer: "This color is brand.primary because it's on buttons" (rule-based + optional LLM)
- AURORA: "This color is brand.primary because it's used 47x on CTAs" (LLM)
- Both produce semantic names for colors
- Both feed into export

**The problem**: They run at DIFFERENT STAGES:
- Semantic Analyzer runs in Stage 1 (during extraction)
- AURORA runs in Stage 2 (during analysis)
- Their outputs can conflict
- Export tries to merge both → more naming chaos

**Verdict**: ELIMINATE the semantic analyzer as a separate component. Move its rule-based heuristics INTO the normalizer as a `role_hint` field (e.g., "brand_candidate", "text_candidate"). These hints become INPUT to AURORA, not a competing output.

```
BEFORE:
Semantic Analyzer → state.semantic_analysis → AURORA (partially uses it)
                                            → Export (also uses it, conflicts)

AFTER:
Normalizer adds role_hints → AURORA uses hints as evidence → AURORA names → Export
(no separate semantic analyzer)
```
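What a `role_hint` heuristic might look like: the field name comes from the verdict above, but the specific rules here are illustrative assumptions, not the analyzer's actual logic.

```python
def role_hint(css_property: str, element: str) -> str:
    """Cheap heuristic hint derived from where a color was observed.
    These hints are evidence for AURORA, not final names."""
    if css_property == "background-color" and element in ("button", "a"):
        return "brand_candidate"      # filled CTAs usually carry brand color
    if css_property == "color":
        return "text_candidate"
    if css_property in ("border-color", "outline-color"):
        return "border_candidate"
    if css_property == "background-color":
        return "background_candidate"
    return "unknown"

print(role_hint("background-color", "button"))  # brand_candidate
print(role_hint("color", "p"))                  # text_candidate
```

Since the hint is derived from the CSS property and element type already captured during extraction, it costs nothing to compute in the normalizer.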
---

### 8. Color Deduplication (Normalizer) — ⚠️ RULES ARE CORRECT BUT THRESHOLD IS QUESTIONABLE

**Current**: RGB Euclidean distance < 10 → merge.

**Critic's Question**: Is RGB distance the right metric?

**Not really.** RGB Euclidean distance is NOT perceptually uniform. Two colors that look identical to humans can have a large RGB distance, and two that look different can have a small one. The industry standard for perceptual color difference is Delta-E (CIEDE2000).

However: for the purpose of "should we keep both #1a1a1a and #1b1b1b in the design system?" — RGB distance < 10 is a reasonable approximation. These truly are near-identical grays.

The `color_distance()` function in color_utils.py also uses RGB Euclidean. It's used in the rule engine for near-duplicate detection.

**Verdict**: Keep rule-based, but consider switching to Delta-E (CIEDE2000) for better perceptual accuracy. Low priority — the current approach works for most cases.
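The threshold logic under discussion, as a runnable sketch (a greedy merge; the project's actual implementation may differ):

```python
from math import dist  # Euclidean distance, Python 3.8+

def dedupe_colors(rgbs, threshold=10):
    """Greedy near-duplicate merge: keep a color only if it is at least
    `threshold` away (RGB Euclidean) from every already-kept color."""
    kept = []
    for c in rgbs:
        if all(dist(c, k) >= threshold for k in kept):
            kept.append(c)
    return kept

palette = [(26, 26, 26), (27, 27, 27), (255, 255, 255)]  # #1a1a1a, #1b1b1b, #ffffff
print(dedupe_colors(palette))  # [(26, 26, 26), (255, 255, 255)]
```

Swapping the metric later means replacing only the `dist` call with a Delta-E implementation; the merge loop stays the same.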
---

### 9. Color Statistics (Rule Engine) — ✅ RULES ARE CORRECT

Counting uniques, duplicates, hue distribution — pure counting. Keep rule-based.
---

### 10. Pre-Processing Layer (NEW — proposed in architecture) — SHOULD THIS BE AN LLM?

**Current plan**: Deterministic pre-processing before Stage 2 agents.

**Critic's Question**: The pre-processing unifies names, flattens radius, sorts shadows. Should this use an LLM?

**No.** Everything pre-processing does is deterministic:
- Rename color.blue.light → color.blue.300 (luminance lookup table)
- Flatten "0px 0px 16px 16px" → skip or max(16)
- Sort shadows by blur px

No judgment needed, no ambiguity. Keep deterministic.
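The luminance lookup table behind the rename step can be sketched as follows (the cut points here are illustrative assumptions, not the project's values):

```python
def shade_number(luminance: float) -> int:
    """Map relative luminance (0..1) onto a 50-900 shade number,
    light colors getting low numbers, as in Tailwind-style palettes."""
    cuts = [(0.90, 50), (0.75, 100), (0.60, 200), (0.45, 300),
            (0.35, 400), (0.25, 500), (0.15, 600), (0.08, 700), (0.03, 800)]
    for threshold, shade in cuts:
        if luminance >= threshold:
            return shade
    return 900

print(shade_number(0.95))  # 50  (very light -> e.g. "blue.50")
print(shade_number(0.05))  # 800
print(shade_number(0.01))  # 900
```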
---

## SUMMARY: WHAT STAYS RULE-BASED, WHAT MOVES TO LLM

```
KEEP RULE-BASED (correct, no LLM needed)
────────────────────────────────────────
✅ WCAG contrast calculation
✅ Type scale ratio detection
✅ Spacing grid detection (GCD)
✅ Color deduplication (RGB/Delta-E distance)
✅ Color statistics (counts, hue distribution)
✅ Radius processing (parse, sort, name) — needs implementing
✅ Shadow processing (parse, sort, name) — needs implementing
✅ Color hue family detection (HSL math)
✅ Color shade number assignment (luminance → 50-900)
✅ Pre-processing layer (rename, flatten, sort)
✅ Post-validation layer (check conventions, ranges)
✅ AA fix suggestions (darken/lighten iteration)
✅ Export format (DTCG structure)

MOVE TO LLM (requires judgment, context, ambiguity)
───────────────────────────────────────────────────
🤖 Color semantic naming (brand.primary vs text.secondary)
   Currently: normalizer (bad) + semantic analyzer (conflicts)
   Move to: AURORA (ReAct, names ALL colors)

🤖 Prioritizing which AA failures matter most
   Currently: all treated equally
   Move to: SENTINEL (cites data, ranks by impact)

🤖 Scoring cohesion/consistency holistically
   Currently: simple weighted formula
   Move to: NEXUS (weighs competing dimensions)

🤖 Recommending which design system to align with
   Currently: ATLAS (already LLM) — keep as is

🤖 Recommending scale/spacing changes
   Currently: defaults to "1.25 Major Third"
   Move to: NEXUS (considers site purpose and brand)

ELIMINATE (redundant or actively harmful)
─────────────────────────────────────────
❌ normalizer._generate_color_name_from_value()
   Word-based shades (light/dark/base) — root cause of chaos

❌ normalizer._infer_color_role()
   Keyword matching for role — too low hit rate (5-40%)
   Replace with: role_hint from CSS property + element type

❌ semantic_analyzer.py as separate component
   Overlaps with AURORA, creates competing names
   Replace with: role_hints embedded in normalizer output

❌ app.py _generate_color_name_from_hex()
   Third naming system (numeric), conflicts with other two
   Replace with: normalizer's single naming path

❌ app.py _get_semantic_color_overrides() 3-way merge
   Merges semantic + AURORA + NEXUS names → chaos
   Replace with: AURORA naming_map as single authority
```

### New LLM Budget After Critic Review

No new LLM calls needed. We're just:
1. Expanding AURORA from "name 10 colors" to "name ALL colors" (same 1 call, slightly larger output)
2. Eliminating the semantic analyzer's optional LLM call (saves $0.001)
3. Making all other changes as rule-based fixes

Net LLM cost: same or slightly less than today (~$0.005 per extraction).
PART2_COMPONENT_GENERATION.md
DELETED
@@ -1,418 +0,0 @@

# Design System Extractor — Part 2: Component Generation

## Session Context

**Prerequisite**: Part 1 (Token Extraction + Analysis) is COMPLETE at v3.2
- Phases 1-3 DONE: Normalizer, Stage 2 agents, Export all working
- 113 tests passing, W3C DTCG v1 compliant output
- GitHub: https://github.com/hiriazmo/design-system-extractor-v3
- Project: `/Users/yahya/design-system-extractor-v3/`

**This session**: Build automated component generation from extracted tokens into Figma.
---

## THE GAP: Nobody Does This

Exhaustive research of 30+ tools (Feb 2026) confirms:

**No production tool takes DTCG JSON and outputs Figma Components.**

```
YOUR EXTRACTOR          THE GAP                          FIGMA
+--------------+   +----------------------------+   +------------------+
| DTCG JSON    |-->| ??? Nothing does this      |-->| Button component |
| with tokens  |   | tokens -> components       |   | with 60 variants |
+--------------+   +----------------------------+   +------------------+
```

### What Exists (and What It Can't Do)

| Category | Best Tool | What It Does | Creates Components? |
|----------|-----------|--------------|---------------------|
| Token Importers | Tokens Studio (1M+ installs) | JSON -> Figma Variables | NO - variables only |
| AI Design | Figma Make | Prompt -> prototype | NO - not token-driven |
| MCP Bridges | Figma Console MCP (543 stars) | AI writes to Figma | YES but non-deterministic |
| Code-to-Figma | story.to.design | Storybook -> Figma components | YES but needs full Storybook |
| Generators | Figr Identity | Brand config -> components | YES but can't consume YOUR tokens |
| Commercial | Knapsack ($10M), Supernova | Token management | NO - manages, doesn't create |
| DEAD | Specify.app (shutting down), Backlight.dev (shut down June 2025) | - | - |

### Key Findings Per Category

**Token Importers** (7+ tools evaluated): Tokens Studio, TokensBrucke, Styleframe, DTCG Token Manager, GitFig, Supa Design Tokens, Design System Automator — ALL create Figma Variables from JSON, NONE create components.

**MCP Bridges** (5 tools): Figma Console MCP (Southleft), claude-talk-to-figma-mcp, cursor-talk-to-figma-mcp (Grab), figma-mcp-write-server, Figma-MCP-Write-Bridge — ALL have full write access, but component creation is AI-interpreted (non-deterministic, varies per run).

**Code-to-Figma**: story.to.design is the standout — it creates REAL Figma components with proper variants from Storybook. But it requires a full coded component library plus a running Storybook instance as an intermediary.

**figma-json2component** (GitHub): Experimental proof-of-concept that generates components from a custom JSON schema. Not DTCG, not production quality, but it validates that the concept IS possible.
---

## FOUR APPROACHES — RANKED

### Option A: Custom Figma Plugin (RECOMMENDED)
```
DTCG JSON -> Your Plugin reads JSON -> Creates Variables -> Generates Components -> Done
```
- **Effort**: 4-8 weeks (~1400 lines of plugin code for 5 MVP components)
- **Quality**: Highest — fully deterministic, consistent every run
- **Advantage**: We already have a working plugin (code.js) that imports tokens
- **Risk**: Low — Figma Plugin API supports everything needed

### Option B: Pipeline — shadcn + Storybook + story.to.design
```
DTCG JSON -> Style Dictionary -> CSS vars -> shadcn themed -> Storybook -> story.to.design -> Figma
```
- **Effort**: 2-3 days setup, then 15-30 min per extraction
- **Quality**: High — battle-tested shadcn components
- **Dependency**: story.to.design (commercial, paid)
- **Risk**: Medium — many moving parts

### Option C: MCP + Claude AI Chain
```
DTCG JSON -> Claude reads tokens -> Figma Console MCP -> AI creates components -> Figma
```
- **Effort**: 2-3 weeks
- **Quality**: Medium — non-deterministic
- **Risk**: High — AI output varies per run

### Option D: Figr Identity + Manual Token Swap
```
Figr Identity generates base system -> Manually swap tokens -> Adjust
```
- **Effort**: 1-2 days
- **Quality**: Medium — not YOUR tokens
- **Risk**: Medium — manual alignment needed

**Decision: Option A (Custom Plugin)** — we already have 80% of the infrastructure, it's deterministic, it has no external dependencies, and it fills a genuine market gap.
---

## FIGMA PLUGIN API: FULL CAPABILITY CHECK

Every feature needed for component generation is supported:

| Requirement | API Method | Status |
|-------------|------------|--------|
| Create components | `figma.createComponent()` | Supported |
| Variant sets (60 variants) | `figma.combineAsVariants()` | Supported |
| Auto-layout with padding | `layoutMode`, `paddingTop/Right/Bottom/Left`, `itemSpacing` | Supported |
| Text labels | `figma.createText()` + `loadFontAsync()` | Supported |
| Icon slot (optional) | `addComponentProperty("ShowIcon", "BOOLEAN", true)` | Supported |
| Instance swap (icons) | `addComponentProperty("Icon", "INSTANCE_SWAP", id)` | Supported |
| Border radius from tokens | `setBoundVariable('topLeftRadius', radiusVar)` | Supported |
| Colors from tokens | `setBoundVariableForPaint()` -> binds to variables | Supported |
| Shadows from tokens | `setBoundVariableForEffect()` | Supported (has spread bug, workaround exists) |
| Hover/press interactions | `node.setReactionsAsync()` with `ON_HOVER`/`ON_PRESS` | Supported |
| Expose text property | `addComponentProperty("Label", "TEXT", "Button")` | Supported |
| Disabled opacity | `node.opacity = 0.5` | Supported |
---

## MVP SCOPE: 5 Components, ~86 Variants

| Component | Variants | Automatable? | Effort |
|-----------|----------|--------------|--------|
| **Button** | 4 variants x 3 sizes x 5 states = 60 | Fully | 2-3 days |
| **Text Input** | 4 states x 2 sizes = 8 | Fully | 1-2 days |
| **Card** | 2 configurations | Semi | 1 day |
| **Toast/Notification** | 4 types (success/error/warn/info) | Fully | 1 day |
| **Checkbox + Radio** | ~12 variants | Fully | 1-2 days |
| **Total** | **~86 variants** | | **8-12 days** |

### Post-MVP Components

| Component | Variants | Automatable? | Effort |
|-----------|----------|--------------|--------|
| Toggle/Switch | on/off x enabled/disabled = 4 | Fully | 0.5 day |
| Select/Dropdown | Multiple states | Semi | 1-2 days |
| Modal/Dialog | 3 sizes | Semi | 1 day |
| Table | Header + data rows | Template-based | 2 days |
---

## TOKEN-TO-COMPONENT MAPPING

How extracted tokens bind to component properties:

### Button Example
```
Token                  -> Figma Property
-------------------------------------------------
color.brand.primary    -> Fill (default state)
color.brand.600        -> Fill (hover state)
color.brand.700        -> Fill (pressed state)
color.text.inverse     -> Text color
color.neutral.200      -> Fill (secondary variant)
color.neutral.300      -> Fill (secondary hover)
radius.md              -> Corner radius (all corners)
shadow.sm              -> Drop shadow (elevated variant)
spacing.3              -> Padding horizontal (12px)
spacing.2              -> Padding vertical (8px)
font.body.md           -> Text style (label)
```

### Variable Collections Needed
```
1. Primitives  -> Raw color palette (blue.50 through blue.900, etc.)
2. Semantic    -> Role-based aliases (brand.primary -> blue.500)
3. Spacing     -> 4px grid (spacing.1=4, spacing.2=8, spacing.3=12...)
4. Radius      -> none/sm/md/lg/xl/full
5. Shadow      -> xs/sm/md/lg/xl elevation levels
6. Typography  -> Font families, sizes, weights, line-heights
```
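Binding a token to a property first requires resolving its dotted path in the DTCG tree. The plugin side is JavaScript, but the lookup logic is the same in any language; a Python sketch (the hypothetical `resolve_token` mirrors the planned `resolveTokenValue()` utility):

```python
def resolve_token(dtcg: dict, path: str):
    """Resolve a dotted token path like 'radius.md' to its $value
    in a DTCG token tree."""
    node = dtcg
    for part in path.split("."):
        node = node[part]  # raises KeyError on an unknown path
    return node["$value"]

tokens = {"radius": {"md": {"$type": "dimension", "$value": "8px"}}}
print(resolve_token(tokens, "radius.md"))  # 8px
```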
---

## COMPONENT DEFINITION SCHEMA (Proposed)

Each component needs a JSON definition describing its anatomy, token bindings, and variant matrix:

```json
{
  "component": "Button",
  "anatomy": {
    "root": {
      "type": "frame",
      "layout": "horizontal",
      "padding": { "h": "spacing.3", "v": "spacing.2" },
      "radius": "radius.md",
      "fill": "color.brand.primary",
      "gap": "spacing.2"
    },
    "icon_slot": {
      "type": "instance_swap",
      "size": 16,
      "visible": false,
      "property": "ShowIcon"
    },
    "label": {
      "type": "text",
      "style": "font.body.md",
      "color": "color.text.inverse",
      "content": "Button",
      "property": "Label"
    }
  },
  "variants": {
    "Variant": ["Primary", "Secondary", "Outline", "Ghost"],
    "Size": ["Small", "Medium", "Large"],
    "State": ["Default", "Hover", "Pressed", "Focused", "Disabled"]
  },
  "variant_overrides": {
    "Variant=Secondary": {
      "root.fill": "color.neutral.200",
      "label.color": "color.text.primary"
    },
    "Variant=Outline": {
      "root.fill": "transparent",
      "root.stroke": "color.border.primary",
      "root.strokeWeight": 1,
      "label.color": "color.brand.primary"
    },
    "Variant=Ghost": {
      "root.fill": "transparent",
      "label.color": "color.brand.primary"
    },
    "State=Hover": {
      "root.fill": "color.brand.600"
    },
    "State=Pressed": {
      "root.fill": "color.brand.700"
    },
    "State=Disabled": {
      "root.opacity": 0.5
    },
    "Size=Small": {
      "root.padding.h": "spacing.2",
      "root.padding.v": "spacing.1",
      "label.style": "font.body.sm"
    },
    "Size=Large": {
      "root.padding.h": "spacing.4",
      "root.padding.v": "spacing.3",
      "label.style": "font.body.lg"
    }
  }
}
```

### Component Generation Pattern (Plugin Code)

Every component follows the same pipeline:
```
1. Read tokens from DTCG JSON
2. Create Variable Collections (if not exist)
3. For each variant combination:
   a. Create frame with auto-layout
   b. Add child nodes (icon slot, label, etc.)
   c. Apply token bindings via setBoundVariable()
   d. Apply variant-specific overrides
4. combineAsVariants() -> component set
5. Add component properties (Label text, ShowIcon boolean)
```
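The plugin itself is JavaScript, but the variant-expansion logic in step 3 is language-agnostic. A Python sketch of expanding a variant axes dict and merging the matching `Axis=Value` overrides (a hypothetical helper mirroring the planned `buildVariantMatrix()`; later overrides win on conflicts):

```python
from itertools import product

def build_variant_matrix(axes: dict, overrides: dict):
    """Expand variant axes into every combination and merge the
    matching 'Axis=Value' overrides for each one."""
    names, values = zip(*axes.items())
    for combo in product(*values):
        props = dict(zip(names, combo))
        merged = {}
        for axis, value in props.items():
            merged.update(overrides.get(f"{axis}={value}", {}))
        yield props, merged

axes = {"Variant": ["Primary", "Secondary"], "State": ["Default", "Hover"]}
overrides = {"Variant=Secondary": {"root.fill": "color.neutral.200"},
             "State=Hover": {"root.fill": "color.brand.600"}}
matrix = list(build_variant_matrix(axes, overrides))
print(len(matrix))  # 4
print(matrix[3])
# ({'Variant': 'Secondary', 'State': 'Hover'}, {'root.fill': 'color.brand.600'})
```

With the Button schema's three axes (4 x 3 x 5), the same expansion yields all 60 combinations.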
---

## ARCHITECTURE FOR PLUGIN EXTENSION

Current plugin (`code.js`) already does:
- Parse DTCG JSON (isDTCGFormat detection)
- Create paint styles from colors
- Create text styles from typography
- Create effect styles from shadows
- Create variable collections

What needs to be ADDED:
```
code.js (existing ~1200 lines)
|
+-- componentGenerator.js (NEW ~1400 lines)
|   |-- generateButton()     ~250 lines
|   |-- generateTextInput()  ~200 lines
|   |-- generateCard()       ~150 lines
|   |-- generateToast()      ~150 lines
|   |-- generateCheckbox()   ~200 lines
|   |-- generateRadio()      ~150 lines
|   +-- shared utilities     ~300 lines
|       |-- createAutoLayoutFrame()
|       |-- bindTokenToVariable()
|       |-- buildVariantMatrix()
|       |-- resolveTokenValue()
|
+-- componentDefinitions.json (NEW ~500 lines)
    |-- Button definition
    |-- TextInput definition
    |-- Card definition
    |-- Toast definition
    +-- Checkbox/Radio definition
```

### Implementation Order
```
Week 1-2: Infrastructure
- Variable collection builder (primitives, semantic, spacing, radius, shadow)
- Token resolver (DTCG path -> Figma variable reference)
- Auto-layout frame builder with token bindings
- Variant matrix generator

Week 3-4: MVP Components
- Button (60 variants) — most complex, validates the full pipeline
- TextInput (8 variants) — validates form patterns
- Toast (4 variants) — validates feedback patterns

Week 5-6: Remaining MVP + Polish
- Card (2 configs) — validates layout composition
- Checkbox + Radio (12 variants) — validates toggle patterns
- Error handling, edge cases, testing

Week 7-8: Post-MVP (if time)
- Toggle/Switch, Select, Modal
- Documentation
```
---

## EXISTING FILES TO KNOW ABOUT

| File | Purpose | Lines |
|------|---------|-------|
| `app.py` | Main Gradio app, token extraction orchestration | ~5000 |
| `agents/llm_agents.py` | AURORA, ATLAS, SENTINEL, NEXUS LLM agents | ~1200 |
| `agents/normalizer.py` | Token normalization (colors, radius, shadows) | ~950 |
| `core/color_classifier.py` | Rule-based color classification (PRIMARY authority) | ~815 |
| `core/color_utils.py` | Color math (hex/RGB/HSL, contrast, ramps) | ~400 |
| `core/rule_engine.py` | Type scale, WCAG, spacing grid analysis | ~1100 |
| `output_json/figma-plugin-extracted/figma-design-token-creator 5/src/code.js` | **Figma plugin — EXTEND THIS** | ~1200 |
| `output_json/figma-plugin-extracted/figma-design-token-creator 5/src/ui.html` | Plugin UI | ~500 |
### DTCG Output Format (What the Plugin Receives)

```json
{
  "color": {
    "brand": {
      "primary": {
        "$type": "color",
        "$value": "#005aa3",
        "$description": "[classifier] brand: primary_action",
        "$extensions": {
          "com.design-system-extractor": {
            "frequency": 47,
            "confidence": "high",
            "category": "brand",
            "evidence": ["background-color on <a>", "background-color on <button>"]
          }
        }
      }
    }
  },
  "radius": {
    "md": { "$type": "dimension", "$value": "8px" },
    "lg": { "$type": "dimension", "$value": "16px" },
    "full": { "$type": "dimension", "$value": "9999px" }
  },
  "shadow": {
    "sm": {
      "$type": "shadow",
      "$value": {
        "offsetX": "0px",
        "offsetY": "2px",
        "blur": "8px",
        "spread": "0px",
        "color": "#00000026"
      }
    }
  },
  "typography": {
    "body": {
      "md": {
        "$type": "typography",
        "$value": {
          "fontFamily": "Inter",
          "fontSize": "16px",
          "fontWeight": 400,
          "lineHeight": 1.5,
          "letterSpacing": "0px"
        }
      }
    }
  },
  "spacing": {
    "1": { "$type": "dimension", "$value": "4px" },
    "2": { "$type": "dimension", "$value": "8px" },
    "3": { "$type": "dimension", "$value": "16px" }
  }
}
```
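A `resolveTokenValue()`-style lookup over this structure is a straightforward path walk. The following is an illustrative Python sketch (the plugin would implement the same in JavaScript; the function name mirrors the planned utility, not existing code):

```python
def resolve_token_value(tokens: dict, path: str):
    """Walk a dotted DTCG path like 'color.brand.primary' and return its $value."""
    node = tokens
    for part in path.split("."):
        node = node[part]  # raises KeyError if a segment is missing
    return node["$value"]

tokens = {"color": {"brand": {"primary": {"$type": "color", "$value": "#005aa3"}}}}
resolve_token_value(tokens, "color.brand.primary")  # "#005aa3"
```

The KeyError on a missing segment is deliberate here: missing-token handling is still an open question (see the questions at the end of this section).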
---

## COMPETITIVE ADVANTAGE

Building this fills a genuine market gap:

- **Tokens Studio** (1M+ installs) = token management, no component generation
- **Figr Identity** = generates components but from brand config, not YOUR tokens
- **story.to.design** = needs full Storybook pipeline as intermediary
- **MCP bridges** = non-deterministic AI interpretation
- **Us** = DTCG JSON in, deterministic Figma components out. Nobody else does this.

### Strategic Position

```
[Extract from website] -> [Analyze & Score] -> [Generate Components in Figma]
     Part 1 (DONE)          Part 1 (DONE)             Part 2 (THIS)
```

We become the only tool that goes from URL to complete Figma design system with components — fully automated.
---

## OPEN QUESTIONS FOR THIS SESSION

1. Should component definitions live in JSON (data-driven) or be hardcoded in JS (simpler)?
2. Should we generate all 60 Button variants at once, or let the user pick which variants?
3. How to handle missing tokens? (e.g., the site has no shadow tokens — skip shadows on buttons or use defaults?)
4. Should we support dark mode variants from the start, or add them later?
5. Icon system — use a bundled icon set (Lucide?) or just placeholder frames?
PLAN_W3C_DTCG_UPDATE.md
DELETED
# PLAN: Update to W3C DTCG Design Token Format

## Overview

Update both the **Design System Extractor export** and the **Figma plugin** to use the official **W3C DTCG (Design Tokens Community Group)** format - the industry standard as of October 2025.

---

## Current vs Target Format

### CURRENT (Custom/Legacy)
```json
{
  "global": {
    "colors": {
      "color.brand.primary": {
        "value": "#540b79",
        "type": "color"
      }
    },
    "typography": {
      "font.heading.xl.desktop": {
        "value": {
          "fontFamily": "Open Sans",
          "fontSize": "32px",
          "fontWeight": "700",
          "lineHeight": "1.3"
        },
        "type": "typography"
      }
    },
    "spacing": {
      "space.1.desktop": {
        "value": "8px",
        "type": "dimension"
      }
    },
    "borderRadius": {
      "radius.md": {
        "value": "8px",
        "type": "borderRadius"
      }
    },
    "shadows": {
      "shadow.sm": {
        "value": { "x": "0", "y": "2", "blur": "4", ... },
        "type": "boxShadow"
      }
    }
  }
}
```
### TARGET (W3C DTCG Standard)
```json
{
  "color": {
    "brand": {
      "primary": {
        "$type": "color",
        "$value": "#540b79",
        "$description": "Main brand color"
      }
    }
  },
  "font": {
    "heading": {
      "xl": {
        "desktop": {
          "$type": "typography",
          "$value": {
            "fontFamily": "Open Sans",
            "fontSize": "32px",
            "fontWeight": "700",
            "lineHeight": "1.3"
          }
        }
      }
    }
  },
  "spacing": {
    "1": {
      "desktop": {
        "$type": "dimension",
        "$value": "8px"
      }
    }
  },
  "borderRadius": {
    "md": {
      "$type": "dimension",
      "$value": "8px"
    }
  },
  "shadow": {
    "sm": {
      "$type": "shadow",
      "$value": {
        "color": "#00000026",
        "offsetX": "0px",
        "offsetY": "2px",
        "blur": "4px",
        "spread": "0px"
      }
    }
  }
}
```
---

## Key Changes Summary

| Aspect | Current | DTCG Target |
|--------|---------|-------------|
| Property prefix | `value`, `type` | `$value`, `$type` |
| Root wrapper | `global` | None (flat root) |
| Token nesting | Flat keys (`color.brand.primary`) | Nested objects (`color.brand.primary`) |
| Color type | `"type": "color"` | `"$type": "color"` |
| Typography type | `"type": "typography"` | `"$type": "typography"` |
| Spacing type | `"type": "dimension"` | `"$type": "dimension"` |
| Radius type | `"type": "borderRadius"` | `"$type": "dimension"` |
| Shadow type | `"type": "boxShadow"` | `"$type": "shadow"` |
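The type renames in the table can be captured as a single lookup during export; a minimal sketch (the dict name is hypothetical):

```python
# Legacy `type` string -> DTCG `$type` string, per the table above
DTCG_TYPE_MAP = {
    "color": "color",
    "typography": "typography",
    "dimension": "dimension",
    "borderRadius": "dimension",  # DTCG uses dimension for radii
    "boxShadow": "shadow",
}

DTCG_TYPE_MAP["boxShadow"]  # "shadow"
```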
---
## Files to Update

### 1. Export Functions (`app.py`)

**File:** `/Users/yahya/design-system-extractor-v2-hf-fix/app.py`

**Functions to modify:**
- `export_stage1_json()` (~line 3095)
- `export_tokens_json()` (~line 3248)

**Changes:**
1. Remove the `global` wrapper - tokens at root level
2. Change `value` → `$value`, `type` → `$type`
3. Convert flat keys to a nested structure:
   - `color.brand.primary` → `{ color: { brand: { primary: {...} } } }`
   - `font.heading.xl.desktop` → `{ font: { heading: { xl: { desktop: {...} } } } }`
4. Add a helper function to convert a flat key to a nested object
5. Update the shadow format to the DTCG spec
6. Keep `$description` for semantic tokens

### 2. Figma Plugin (`code.js`)

**File:** `/Users/yahya/design-system-extractor-v2-hf-fix/output_json/figma-plugin-extracted/figma-design-token-creator 5/src/code.js`

**Changes:**
1. Update `normalizeTokens()` to detect the DTCG format (look for `$value`, `$type`)
2. Update `extractColors()` to handle:
   - `$value` instead of `value`
   - Nested structure traversal
3. Update `extractTypography()` to handle the DTCG composite format
4. Update `extractSpacing()` for dimension tokens
5. Add shadow extraction (currently not implemented)
6. Support both legacy AND DTCG formats for backwards compatibility

### 3. Plugin UI (`ui.html`)

**File:** `/Users/yahya/design-system-extractor-v2-hf-fix/output_json/figma-plugin-extracted/figma-design-token-creator 5/ui/ui.html`

**Changes:**
1. Update `extractColorsForPreview()` to handle `$value`
2. Update `extractSpacingForPreview()` to handle `$value`
3. Update `buildTypographyPreview()` for the nested + DTCG format
4. Add a format detection message for DTCG
5. Add a shadow preview section

---
## Detailed Implementation Steps

### Step 1: Create DTCG Export Helper Functions (app.py)

```python
def _key_to_nested_path(flat_key: str) -> list:
    """Convert 'color.brand.primary' to ['color', 'brand', 'primary']"""
    return flat_key.split('.')

def _set_nested_value(obj: dict, path: list, value: dict):
    """Set a value at a nested path in a dictionary"""
    for key in path[:-1]:
        if key not in obj:
            obj[key] = {}
        obj = obj[key]
    obj[path[-1]] = value

def _to_dtcg_token(value, token_type: str, description: str = None) -> dict:
    """Convert to DTCG format with $value, $type, $description"""
    token = {
        "$type": token_type,
        "$value": value
    }
    if description:
        token["$description"] = description
    return token
```
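Used together, helpers of this shape turn a flat legacy key into the nested DTCG structure. A self-contained usage sketch (the helper body below restates the planned `_set_nested_value` so the example runs on its own):

```python
def _set_nested_value(obj: dict, path: list, value: dict):
    """Set a value at a nested path, creating intermediate dicts as needed."""
    for key in path[:-1]:
        obj = obj.setdefault(key, {})
    obj[path[-1]] = value

result = {}
_set_nested_value(result, "color.brand.primary".split("."),
                  {"$type": "color", "$value": "#540b79"})
# result == {"color": {"brand": {"primary": {"$type": "color", "$value": "#540b79"}}}}
```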
### Step 2: Update Export Functions (app.py)

Rewrite `export_stage1_json()` and `export_tokens_json()` to:
1. Build a nested structure instead of a flat one
2. Use `$value`, `$type`, `$description`
3. Map token types correctly:
   - `borderRadius` → `dimension` (DTCG uses dimension for radii)
   - `boxShadow` → `shadow`
   - Keep `color`, `typography`, `dimension`
### Step 3: Update Plugin Token Extraction (code.js)

Add DTCG detection and extraction:

```javascript
// Detect if DTCG format
function isDTCGFormat(obj) {
  if (!obj || typeof obj !== 'object') return false;
  var keys = Object.keys(obj);
  for (var i = 0; i < keys.length; i++) {
    var val = obj[keys[i]];
    if (val && typeof val === 'object') {
      if (val['$value'] !== undefined || val['$type'] !== undefined) {
        return true;
      }
    }
  }
  return false;
}

// Extract from DTCG format
function extractColorsDTCG(obj, prefix, results) {
  // Handle $value, $type
  // Recursively traverse nested structure
}
```
### Step 4: Update Plugin UI (ui.html)

Update the preview functions to handle both formats.
### Step 5: Add Shadow Support to Plugin

Currently the plugin doesn't create Effect Styles for shadows. Add:

```javascript
// CREATE EFFECT STYLES (Shadows)
for (var si = 0; si < tokens.shadows.length; si++) {
  var shadowToken = tokens.shadows[si];
  var effectStyle = figma.createEffectStyle();
  effectStyle.name = 'shadows/' + shadowToken.name;
  effectStyle.effects = [{
    type: 'DROP_SHADOW',
    color: { r: 0, g: 0, b: 0, a: 0.25 },
    offset: { x: parseFloat(shadowToken.value.offsetX), y: parseFloat(shadowToken.value.offsetY) },
    radius: parseFloat(shadowToken.value.blur),
    spread: parseFloat(shadowToken.value.spread),
    visible: true,
    blendMode: 'NORMAL'
  }];
}
```
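Note that the sketch above hardcodes `{ r: 0, g: 0, b: 0, a: 0.25 }` even though the token carries its own color (e.g. `#00000026`). Converting an 8-digit hex into the 0–1 channel floats Figma expects could look like this (Python for illustration; the plugin would do the same conversion in JavaScript):

```python
def hex_to_figma_rgba(hex_color: str) -> dict:
    """Convert #RRGGBB or #RRGGBBAA into 0-1 channel floats (Figma-style)."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    a = int(h[6:8], 16) / 255 if len(h) == 8 else 1.0
    return {"r": r, "g": g, "b": b, "a": a}

hex_to_figma_rgba("#00000026")  # alpha = 0x26 / 255, roughly 0.149
```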
---

## Testing Checklist

After implementation, verify:

- [ ] Export Stage 1 JSON produces valid DTCG format
- [ ] Export Final JSON produces valid DTCG format
- [ ] Token names are properly nested (`color.brand.primary` → nested object)
- [ ] All `$value`, `$type` prefixes present
- [ ] Figma plugin successfully imports DTCG JSON
- [ ] Colors → Paint Styles created correctly
- [ ] Typography → Text Styles created correctly
- [ ] Spacing → Variables created correctly
- [ ] Border Radius → Variables created correctly
- [ ] Shadows → Effect Styles created correctly
- [ ] Plugin still works with legacy format (backwards compatible)
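The first two checklist items can be partially automated with a structural check. A minimal sketch that flags leaf tokens missing either `$value` or `$type` (not a full DTCG spec validator; function name is hypothetical):

```python
def check_dtcg_leaves(node: dict, path: str = ""):
    """Yield dotted paths of leaf tokens missing $value or $type."""
    if "$value" in node or "$type" in node:
        if not ("$value" in node and "$type" in node):
            yield path
        return
    for key, val in node.items():
        if isinstance(val, dict):
            yield from check_dtcg_leaves(val, f"{path}.{key}" if path else key)

bad = list(check_dtcg_leaves({
    "color": {
        "ok": {"$type": "color", "$value": "#fff"},
        "broken": {"$value": "#000"},  # missing $type
    }
}))
# bad == ["color.broken"]
```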
---

## Benefits After Implementation

1. **Interoperability** - Works with Figma, Sketch, Framer, Style Dictionary, Tokens Studio
2. **Future-proof** - Official W3C standard, adopted by industry
3. **Tool ecosystem** - Compatible with 10+ design tools
4. **Code generation** - Works with Style Dictionary for CSS/iOS/Android
5. **No vendor lock-in** - Standard format, portable

---

## Estimated Effort

| Task | Complexity | Time |
|------|------------|------|
| Export helper functions | Low | 15 min |
| Update export_stage1_json | Medium | 30 min |
| Update export_tokens_json | Medium | 30 min |
| Update plugin code.js | Medium | 45 min |
| Update plugin ui.html | Low | 20 min |
| Add shadow support to plugin | Medium | 30 min |
| Testing & fixes | Medium | 30 min |
| **Total** | | **~3 hours** |

---

## Awaiting Confirmation

Please confirm:
1. ✅ Proceed with W3C DTCG format update?
2. ✅ Update both app.py export AND Figma plugin?
3. ✅ Add shadow Effect Style support to plugin?
4. ✅ Maintain backwards compatibility for legacy format in plugin?

**Reply "approved" or provide feedback to proceed.**
PROJECT_CONTEXT.md
DELETED
# Design System Extractor v2 — Project Context

## Architecture Overview

```
Stage 0: Configuration     Stage 1: Discovery & Extraction    Stage 2: AI Analysis           Stage 3: Export
┌──────────────────┐       ┌──────────────────────────┐       ┌──────────────────────────┐   ┌──────────────┐
│ HF Token Setup   │ ────> │ URL Discovery (sitemap/  │ ────> │ Layer 1: Rule Engine     │ > │ Figma Tokens │
│ Benchmark Select │       │ crawl) + Token Extraction│       │ Layer 2: Benchmarks      │   │ JSON Export  │
└──────────────────┘       │ (Desktop + Mobile CSS)   │       │ Layer 3: LLM Agents (x3) │   └──────────────┘
                           └──────────────────────────┘       │ Layer 4: HEAD Synthesizer│
                                                              └──────────────────────────┘
```
### Stage 1: Discovery & Extraction (Rule-Based, Free)
- **Discover Pages**: Fetches sitemap.xml or crawls the site to find pages
- **Extract Tokens**: Playwright visits each page at 2 viewports (Desktop 1440px, Mobile 375px), extracts computed CSS for colors, typography, spacing, radius, shadows
- **User Review**: Interactive tables with Accept/Reject checkboxes + visual previews

### Stage 2: AI-Powered Analysis (4 Layers)

| Layer | Type | What It Does | Cost |
|-------|------|--------------|------|
| **Layer 1** | Rule Engine | Type scale detection, AA contrast checking, spacing grid analysis, color statistics | FREE |
| **Layer 2** | Benchmark Research | Compare against Material Design 3, Apple HIG, Tailwind, etc. | ~$0.001 |
| **Layer 3** | LLM Agents (x3) | AURORA (Brand ID) + ATLAS (Benchmark) + SENTINEL (Best Practices) | ~$0.002 |
| **Layer 4** | HEAD Synthesizer | NEXUS combines all outputs into final recommendations | ~$0.001 |

### Stage 3: Export
- Apply/reject individual color, typography, spacing recommendations
- Export Figma Tokens Studio-compatible JSON

---
## Agent Roster

| Agent | Codename | Model | Temp | Input | Output | Specialty |
|-------|----------|-------|------|-------|--------|-----------|
| Brand Identifier | **AURORA** | Qwen/Qwen2.5-72B-Instruct | 0.4 | Color tokens + semantic CSS analysis | Brand primary/secondary/accent, palette strategy, cohesion score, semantic names | Creative/visual reasoning, color harmony assessment |
| Benchmark Advisor | **ATLAS** | meta-llama/Llama-3.3-70B-Instruct | 0.25 | User's type scale, spacing, font sizes + benchmark comparison data | Recommended benchmark, alignment changes, pros/cons | 128K context for large benchmark data, comparative reasoning |
| Best Practices Validator | **SENTINEL** | Qwen/Qwen2.5-72B-Instruct | 0.2 | Rule Engine results (typography, accessibility, spacing, color stats) | Overall score (0-100), check results, prioritized fix list | Methodical rule-following, precise judgment |
| HEAD Synthesizer | **NEXUS** | meta-llama/Llama-3.3-70B-Instruct | 0.3 | All 3 agent outputs + Rule Engine facts | Executive summary, scores, top 3 actions, color/type/spacing recs | 128K context for combined inputs, synthesis capability |

### Why These Models

- **Qwen 72B** (AURORA, SENTINEL): Strong creative reasoning for brand analysis; methodical structured output for best practices. Available on HF serverless without gated access.
- **Llama 3.3 70B** (ATLAS, NEXUS): 128K context window handles large combined inputs from multiple agents. Excellent comparative and synthesis reasoning.
- **Fallback**: Qwen/Qwen2.5-7B-Instruct (free tier, available when primary models fail)

### Temperature Rationale

- **0.4** (AURORA): Allows creative interpretation of color stories and palette harmony
- **0.25** (ATLAS): Analytical comparison needs consistency but some flexibility for trade-off reasoning
- **0.2** (SENTINEL): Strict rule evaluation — consistency is critical for compliance scoring
- **0.3** (NEXUS): Balanced — needs to synthesize creatively but stay grounded in agent data

---
## Evaluation & Scoring

### Self-Evaluation (All Agents)
Each agent includes a `self_evaluation` block in its JSON output:
```json
{
  "confidence": 8,          // 1-10: How confident the agent is
  "reasoning": "Clear usage patterns with 20+ colors",
  "data_quality": "good",   // good | fair | poor
  "flags": []               // e.g., ["insufficient_context", "ambiguous_data"]
}
```

### AURORA Scoring Rubric (Cohesion 1-10)
- **9-10**: Clear harmony rule, distinct brand colors, consistent palette
- **7-8**: Mostly harmonious, clear brand identity
- **5-6**: Some relationships visible but not systematic
- **3-4**: Random palette, no clear strategy
- **1-2**: Conflicting colors, no brand identity

### SENTINEL Scoring Rubric (Overall 0-100)
Weighted checks:
- AA Compliance: 25 points
- Type Scale Consistency: 15 points
- Base Size Accessible: 15 points
- Spacing Grid: 15 points
- Type Scale Standard Ratio: 10 points
- Color Count: 10 points
- No Near-Duplicates: 10 points
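The weighting above implies a simple weighted sum over per-check results. A sketch, assuming each check reports a pass ratio in [0, 1] (the key names and input shape are assumptions, not the actual SENTINEL implementation):

```python
SENTINEL_WEIGHTS = {
    "aa_compliance": 25,
    "type_scale_consistency": 15,
    "base_size_accessible": 15,
    "spacing_grid": 15,
    "type_scale_standard_ratio": 10,
    "color_count": 10,
    "no_near_duplicates": 10,
}  # weights sum to 100

def sentinel_score(results: dict) -> int:
    """Combine per-check pass ratios (0.0-1.0) into an overall 0-100 score."""
    return round(sum(SENTINEL_WEIGHTS[name] * results.get(name, 0.0)
                     for name in SENTINEL_WEIGHTS))

sentinel_score({name: 1.0 for name in SENTINEL_WEIGHTS})  # 100
```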
### NEXUS Scoring Rubric (Overall 0-100)
- **90-100**: Production-ready, minor polishing only
- **75-89**: Solid foundation, 2-3 targeted improvements
- **60-74**: Functional but needs focused attention
- **40-59**: Significant gaps requiring systematic improvement
- **20-39**: Major rework needed
- **0-19**: Fundamental redesign recommended

### Evaluation Summary (Logged After Analysis)
```
═══════════════════════════════════════════════════
🔍 AGENT EVALUATION SUMMARY
═══════════════════════════════════════════════════
🎨 AURORA (Brand ID):    confidence=8/10, data=good
🏢 ATLAS (Benchmark):    confidence=7/10, data=good
✅ SENTINEL (Practices): confidence=9/10, data=good, score=72/100
🧠 NEXUS (Synthesis):    confidence=8/10, data=good, overall=65/100
═══════════════════════════════════════════════════
```

---
## User Journey

1. **Enter HF Token** — Required for LLM inference (free tier works)
2. **Enter Website URL** — The site to extract design tokens from
3. **Discover Pages** — Auto-finds pages via sitemap or crawling
4. **Select Pages** — Check/uncheck pages to include (max 10)
5. **Extract Tokens** — Scans selected pages at Desktop + Mobile viewports
6. **Review Stage 1** — Interactive tables: Colors, Typography, Spacing, Radius, Shadows, Semantic Colors. Each tab has a data table + visual preview accordion. Accept/reject individual tokens.
7. **Proceed to Stage 2** — Select benchmarks to compare against
8. **Run AI Analysis** — 4-layer pipeline executes (Rule Engine -> Benchmarks -> LLM Agents -> Synthesis)
9. **Review Analysis** — Dashboard with scores, recommendations, benchmark comparison, color recs
10. **Apply Upgrades** — Accept/reject individual recommendations
11. **Export JSON** — Download Figma Tokens Studio-compatible JSON

---
## File Structure

| File | Responsibility |
|------|----------------|
| `app.py` | Main Gradio UI — all stages, CSS, event bindings, formatting functions |
| `agents/llm_agents.py` | 4 LLM agent classes (AURORA, ATLAS, SENTINEL, NEXUS) + dataclasses |
| `agents/semantic_analyzer.py` | Semantic color categorization (brand, text, background, etc.) |
| `config/settings.py` | Model routing, env var loading, agent-to-model mapping |
| `core/hf_inference.py` | HF Inference API client, model registry, temperature mapping |
| `core/preview_generator.py` | HTML preview generators for Stage 1 visual previews |
| `core/rule_engine.py` | Layer 1: Type scale, AA contrast, spacing grid, color stats |
| `core/benchmarks.py` | Benchmark definitions (Material Design 3, Apple HIG, etc.) |
| `core/extractor.py` | Playwright-based CSS token extraction |
| `core/discovery.py` | Page discovery via sitemap.xml / crawling |

---

## Configuration

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `HF_TOKEN` | (required) | HuggingFace API token |
| `BRAND_IDENTIFIER_MODEL` | `Qwen/Qwen2.5-72B-Instruct` | Model for AURORA |
| `BENCHMARK_ADVISOR_MODEL` | `meta-llama/Llama-3.3-70B-Instruct` | Model for ATLAS |
| `BEST_PRACTICES_MODEL` | `Qwen/Qwen2.5-72B-Instruct` | Model for SENTINEL |
| `HEAD_SYNTHESIZER_MODEL` | `meta-llama/Llama-3.3-70B-Instruct` | Model for NEXUS |
| `FALLBACK_MODEL` | `Qwen/Qwen2.5-7B-Instruct` | Fallback when primary fails |
| `HF_MAX_NEW_TOKENS` | `2048` | Max tokens per LLM response |
| `HF_TEMPERATURE` | `0.3` | Global default temperature |
| `MAX_PAGES` | `20` | Max pages to discover |
| `BROWSER_TIMEOUT` | `30000` | Playwright timeout (ms) |

### Model Override Examples
```bash
# Use Llama for all agents
export BRAND_IDENTIFIER_MODEL="meta-llama/Llama-3.3-70B-Instruct"
export BEST_PRACTICES_MODEL="meta-llama/Llama-3.3-70B-Instruct"

# Use budget models
export BRAND_IDENTIFIER_MODEL="Qwen/Qwen2.5-7B-Instruct"
export BENCHMARK_ADVISOR_MODEL="mistralai/Mixtral-8x7B-Instruct-v0.1"
```
README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-title: Design System
+title: Design System Automation v3
 emoji: 🎨
 colorFrom: purple
 colorTo: blue
@@ -8,7 +8,7 @@ pinned: false
 license: mit
 ---

-# Design System
+# Design System Automation v3

 > 🎨 A semi-automated, human-in-the-loop agentic system that reverse-engineers design systems from live websites.

@@ -65,7 +65,7 @@ This is **not a magic button** — it's a design-aware co-pilot.
 ```bash
 # Clone the repository
 git clone <repo-url>
-cd design-system-
+cd design-system-automation

 # Create virtual environment
 python -m venv venv
@@ -118,7 +118,7 @@ Open `http://localhost:7860` in your browser.
 ## 📁 Project Structure

 ```
-design-system-
+design-system-automation/
 ├── app.py               # Main Gradio application
 ├── requirements.txt
 ├── README.md
agents/__init__.py CHANGED

@@ -1,5 +1,5 @@
 """
-Agents for Design System
+Agents for Design System Automation.

 This package contains:
 - Stage 1 Agents: Crawler, Extractor, Normalizer, Semantic Analyzer
agents/advisor.py CHANGED

@@ -1,6 +1,6 @@
 """
 Agent 3: Design System Best Practices Advisor
-Design System
+Design System Automation

 Persona: Senior Staff Design Systems Architect
agents/crawler.py CHANGED

@@ -1,6 +1,6 @@
 """
 Agent 1: Website Crawler
-Design System
+Design System Automation

 Persona: Meticulous Design Archaeologist
agents/extractor.py CHANGED

@@ -1,6 +1,6 @@
 """
 Agent 1: Token Extractor
-Design System
+Design System Automation

 Persona: Meticulous Design Archaeologist
agents/firecrawl_extractor.py CHANGED

@@ -1,6 +1,6 @@
 """
 Agent 1B: Firecrawl CSS Extractor
-Design System
+Design System Automation

 Persona: CSS Deep Diver
agents/graph.py CHANGED

@@ -1,6 +1,6 @@
 """
 LangGraph Workflow Orchestration
-Design System
+Design System Automation

 Defines the main workflow graph with agents, checkpoints, and transitions.
 """
agents/normalizer.py CHANGED

@@ -1,6 +1,6 @@
 """
 Agent 2: Token Normalizer & Structurer
-Design System
+Design System Automation v3

 Persona: Design System Librarian
agents/semantic_analyzer.py CHANGED

@@ -1,6 +1,6 @@
 """
 Agent 1C: Semantic Color Analyzer
-Design System
+Design System Automation

 ⚠️ DEPRECATED in v3.2 — Superseded by:
 - core/color_classifier.py (rule-based, primary naming authority)
agents/state.py CHANGED

@@ -1,6 +1,6 @@
 """
 LangGraph State Definitions
-Design System
+Design System Automation

 Defines the state schema and type hints for LangGraph workflow.
 """
app.py CHANGED

@@ -1,5 +1,5 @@
 """
-Design System
+Design System Automation — Main Application
 ==============================================

 Flow:
@@ -2969,7 +2969,7 @@ def _to_dtcg_token(value, token_type: str, description: str = None,
     elif description:
         token["$description"] = description
     if extensions:
-        token["$extensions"] = {"com.design-system-
+        token["$extensions"] = {"com.design-system-automation": extensions}
     return token


@@ -4473,7 +4473,7 @@ def create_ui():
     """

     with gr.Blocks(
-        title="Design System
+        title="Design System Automation v3",
         theme=corporate_theme,
         css=custom_css
     ) as app:
@@ -4481,7 +4481,7 @@ def create_ui():
         # Header with branding
         gr.HTML("""
         <div class="app-header">
-            <h1>🎨 Design System
+            <h1>🎨 Design System Automation</h1>
             <p>Reverse-engineer design systems from live websites • AI-powered analysis • Figma-ready export</p>
         </div>
         """)
@@ -5077,7 +5077,7 @@ def create_ui():
         gr.Markdown("""
         ---
         <div style="text-align: center; color: #94a3b8; font-size: 12px; padding: 12px 0;">
-        <strong>Design System
+        <strong>Design System Automation v3</strong> · Playwright + Firecrawl + HuggingFace<br/>
         Rule Engine (FREE) + ReAct LLM Agents (AURORA · ATLAS · SENTINEL · NEXUS)
         </div>
         """)
config/agents.yaml CHANGED

@@ -1,5 +1,5 @@
 # =============================================================================
-# DESIGN SYSTEM
+# DESIGN SYSTEM AUTOMATION - AGENT CONFIGURATIONS
 # =============================================================================
 #
 # This file defines the personas and configurations for each agent in the
config/settings.py CHANGED

@@ -1,6 +1,6 @@
 """
 Application Settings
-Design System
+Design System Automation

 Loads configuration from environment variables and YAML files.
 """
content/LINKEDIN_POST.md DELETED

@@ -1,40 +0,0 @@

# LinkedIn Post

---

I built a system that audits any website's design system — automatically.

Point it at a URL. It extracts every color, font, spacing value from the DOM. Then 4 AI agents analyze it like a senior design team.

The secret? Not everything needs AI.

Layer 1 (free, <1 second):
- WCAG contrast checker (pure math)
- Type scale detection
- Spacing grid analysis
- Color deduplication

Layer 2 (~$0.003):
- AURORA: identifies brand colors from usage context
- ATLAS: recommends which design system to align with
- SENTINEL: prioritizes fixes by business impact
- NEXUS: synthesizes everything into a final report

My V1 used LLMs for everything.
Cost: ~$1/run. Accuracy: mediocre (LLMs hallucinate math).

V2 flipped the approach:
Deterministic code handles certainty. LLMs handle ambiguity.

Result: 100–300x cheaper. More accurate. Always produces output even when LLMs fail.

The rule engine does 80% of the work for $0.
The agents handle the 20% that requires judgment.

Built with: Playwright + HuggingFace Inference API (Qwen 72B, Llama 3.3 70B) + Gradio + Docker

Full write-up on Medium (link in comments).

What design workflows are you automating? Would love to hear.

#UXDesign #AIEngineering #DesignSystems #HuggingFace #LLM #Accessibility #WCAG #MultiAgent #Gradio #BuildInPublic
content/MEDIUM_ARTICLE.md DELETED

@@ -1,406 +0,0 @@

# 🚅 AI in My Daily Work — Episode [X]: Building a Design System Analyzer with 4 AI Agents + a Free Rule Engine

*How I built a system that extracts any website's design tokens and audits them like a senior design team — for ~$0.003 per run.*

[IMAGE: Hero banner — Gradio UI showing the pipeline output]

---

## The Problem

Every week, the same story.

A designer opens a website and squints: "Is that our brand blue? Why does this button look different on mobile? How many shades of gray are we actually using?"

Design systems are supposed to prevent this. But **auditing** one? That's a different problem entirely.

- Open DevTools on every page
- Manually extract colors, fonts, spacing
- Cross-reference against WCAG accessibility guidelines
- Compare to industry benchmarks like Material Design or Polaris
- Write a report with prioritized recommendations

For a 20-page website, this takes **2–3 days of manual work**. And by the time you're done, the codebase has already changed.

I wanted a system that could think like a design team:

- a **crawler** discovering every page
- an **extractor** pulling every token from the DOM
- a **rule engine** checking accessibility and consistency — for free
- and **specialized AI agents** interpreting what the numbers actually mean

So I built one.

---

## The Solution (In One Sentence)

I built a 4-agent system backed by a free rule engine that acts like an entire design audit team: data extraction + WCAG compliance + benchmark comparison + brand analysis + prioritized recommendations. It runs on HuggingFace Spaces, costs ~$0.003 per analysis, and delivers actionable output automatically.

---

## Architecture Overview: Two Layers, Four Agents

My first attempt (V1) made a classic mistake:
**I used a large language model for everything.**

### Why Two Layers?

My V1 mistake: Used GPT-4 for everything
❌ Cost: $0.50–1.00 per run
❌ Speed: 15+ seconds for basic math
❌ Accuracy: LLMs hallucinate contrast ratios

The fix: **Not every task needs AI. Some need good engineering.**

V2 flipped the approach.

> **Deterministic code handles certainty. LLMs handle ambiguity.**

This led to a two-layer architecture.

[IMAGE: Architecture diagram — Layer 1 (Deterministic) → Layer 2 (AI Agents)]

```
┌─────────────────────────────────────────────────┐
│  LAYER 1: DETERMINISTIC (Free — $0.00)          │
│  ├─ Crawler + Extractor + Normalizer            │
│  ├─ WCAG Contrast Checker (math)                │
│  ├─ Type Scale Detection (ratio math)           │
│  ├─ Spacing Grid Analysis (GCD math)            │
│  └─ Color Statistics (deduplication)            │
├─────────────────────────────────────────────────┤
│  LAYER 2: AI AGENTS (~$0.003)                   │
│  ├─ AURORA — Brand Color Analyst                │
│  ├─ ATLAS — Benchmark Advisor                   │
│  ├─ SENTINEL — Best Practices Auditor           │
│  └─ NEXUS — Head Synthesizer                    │
└─────────────────────────────────────────────────┘
```

---

## Layer 1: Deterministic Intelligence (No LLM)

These agents do the heavy lifting — no LLMs involved.

### What This Layer Does

- Crawls every page with Playwright (desktop 1440px + mobile 375px)
- Extracts tokens from **7 sources**: DOM computed styles, CSS variables, SVG colors, inline styles, stylesheet rules, external CSS files (Firecrawl), brute-force page scan
- Deduplicates colors (exact hex + Delta-E distance)
- Checks **actual FG/BG pairs** against WCAG — not just "color vs white"
- Detects type scale ratio and spacing grid
- Scores overall consistency (0–100)

### Rule Engine Output

```
📐 TYPE SCALE ANALYSIS
├─ Detected Ratio: 1.167
├─ Closest Standard: Minor Third (1.2)
├─ Consistent: ⚠️ No (variance: 0.24)
└─ 💡 Recommendation: 1.25 (Major Third)

♿ ACCESSIBILITY CHECK (WCAG AA/AAA)
├─ Colors Analyzed: 210
├─ FG/BG Pairs Checked: 220
├─ AA Pass: 143 ✅
├─ AA Fail (real FG/BG pairs): 67 ❌
│   ├─ fg:#06b2c4 on bg:#ffffff → 💡 Fix: #048391 (4.5:1)
│   ├─ fg:#999999 on bg:#ffffff → 💡 Fix: #757575 (4.6:1)
│   └─ ... and 62 more

📏 SPACING GRID
├─ Detected Base: 1px (GCD)
├─ Grid Aligned: ⚠️ 0%
└─ 💡 Recommendation: 8px grid

📊 CONSISTENCY SCORE: 52/100
```

This entire layer runs **in under 1 second** and costs nothing beyond compute — the single biggest cost optimization in the system.
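The WCAG check in that layer is plain arithmetic. A minimal sketch of the standard WCAG 2.x contrast-ratio formula (illustrative only, not the project's actual checker):

```python
def _channel(c: int) -> float:
    # sRGB channel (0-255) -> linear-light value, per the WCAG 2.x definition
    c = c / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05); AA body text needs >= 4.5."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0
print(contrast_ratio("#06b2c4", "#ffffff") >= 4.5)     # False — fails AA
```

Running it on the pair flagged above (`#06b2c4` on white) confirms the failure the rule engine reports.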
---

## Layer 2: AI Analysis & Interpretation (4 Agents)

This is where language models actually add value — tasks that require **context, reasoning, and judgment**.

[IMAGE: Agent pipeline diagram — AURORA → ATLAS → SENTINEL → NEXUS]

---

### Agent 1: AURORA — Brand Color Analyst
**Model:** Qwen 72B (HuggingFace PRO)
**Cost:** Free within PRO subscription ($9/month)
**Temperature:** 0.4

**The Challenge:** The rule engine found 143 colors. Which one is the *brand* primary?

A rule engine can count that `#06b2c4` appears in 33 buttons. But it can't reason: "33 buttons + 12 CTAs + dominant accent positioning = this is almost certainly the brand primary." That requires **context understanding**.

**Sample Output:**

```
AURORA's Analysis:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎨 Brand Primary: #06b2c4 (confidence: HIGH)
   └─ 33 buttons, 12 CTAs, dominant accent

🎨 Brand Secondary: #373737 (confidence: HIGH)
   └─ 89 text elements, consistent dark tone

Palette Strategy: Complementary
Cohesion Score: 7/10
   └─ "Clear hierarchy, accent colors differentiated"

Self-Evaluation: confidence=8/10, data=good
```

---

### Agent 2: ATLAS — Benchmark Advisor
**Model:** Llama 3.3 70B (128K context)
**Cost:** Free within PRO subscription
**Temperature:** 0.25

**Unique Capability:** Industry benchmarking against 8 design systems (Material 3, Polaris, Atlassian, Carbon, Apple HIG, Tailwind, Ant, Chakra).

[IMAGE: Benchmark comparison table from the UI]

This agent doesn't just pick the closest match — it reasons about **effort vs. value**:

```
ATLAS's Recommendation:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Recommended: Shopify Polaris (87% match)

Alignment Changes:
├─ Type scale: 1.17 → 1.25 (effort: medium)
├─ Spacing grid: mixed → 4px (effort: high)
└─ Base size: 16px → 16px (already aligned ✅)

Pros: Closest match, e-commerce proven, well-documented
Cons: Spacing migration is significant effort

Alternative: Material 3 (77% match)
└─ "Stronger mobile patterns, but 8px grid
    requires more restructuring"
```

ATLAS's Value Add:

> "You're 87% aligned to Polaris already. Closing the gap on type scale takes ~1 hour and makes your system industry-standard. **Priority: MEDIUM.**"

---

### Agent 3: SENTINEL — Best Practices Auditor
**Model:** Qwen 72B
**Cost:** Free within PRO subscription
**Temperature:** 0.2 (strict, consistent)

**The Challenge:** The rule engine says "67 AA failures." But which ones matter most?

SENTINEL prioritizes by **business impact** — not just severity:

```
SENTINEL's Priority Fixes:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Overall Score: 68/100

Checks:
├─ ✅ Type Scale Standard (1.25 ratio)
├─ ⚠️ Type Scale Consistency (variance 0.18)
├─ ✅ Base Size Accessible (16px)
├─ ❌ AA Compliance (67 failures)
├─ ⚠️ Spacing Grid (0% aligned)
└─ ❌ Near-Duplicates (351 pairs)

Priority Fixes:
#1 Fix brand color AA compliance
   Impact: HIGH | Effort: 5 min
   → "Affects 40% of interactive elements"

#2 Consolidate near-duplicate colors
   Impact: MEDIUM | Effort: 2 hours

#3 Align spacing to 8px grid
   Impact: MEDIUM | Effort: 1 hour
```

---

### Agent 4: NEXUS — Head Synthesizer (Final Output)
**Model:** Llama 3.3 70B (128K context)
**Cost:** ~$0.001
**Temperature:** 0.3

**None of Agents 1–3 can replace this step.** NEXUS takes outputs from ALL three agents + the rule engine and synthesizes a final recommendation — **resolving contradictions**, weighting scores, and producing the executive summary the user actually sees.

If ATLAS says "close to Polaris" but SENTINEL says "spacing misaligned," NEXUS reconciles: *"Align to Polaris type scale now (low effort) but defer spacing migration (high effort)."*

```
NEXUS Final Synthesis:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 Executive Summary:
"Your design system scores 68/100. Critical:
67 color pairs fail AA. Top action: fix brand
primary contrast (5 min, high impact)."

📊 Scores:
├─ Overall: 68/100
├─ Accessibility: 45/100
├─ Consistency: 75/100
└─ Organization: 70/100

🎯 Top 3 Actions:
1. Fix brand color AA (#06b2c4 → #048391)
   Impact: HIGH | Effort: 5 min
2. Align type scale to 1.25
   Impact: MEDIUM | Effort: 1 hour
3. Consolidate 143 → ~20 semantic colors
   Impact: MEDIUM | Effort: 2 hours

🎨 Color Recommendations:
├─ ✅ brand.primary: #06b2c4 → #048391 (auto-accept)
├─ ✅ text.secondary: #999999 → #757575 (auto-accept)
└─ ❌ brand.accent: #FF6B35 → #E65100 (user decides)
```

---

## Real Analysis: Two Websites

### Website A: The Clean System

```
Landing → Product → Cart → Checkout
```

**Consistency Score:** 78/100
**AA Failures:** 3 (all minor text colors)
**Type Scale:** 1.25 ratio, consistent across pages
**Agent Insight:** "Well-structured system. Minor AA fixes on secondary text. Already 92% aligned to Material 3."

### Website B: The Messy System

```
Landing → Features → Pricing → ⚠️ Contact → Signup
```

**Consistency Score:** 34/100
**AA Failures:** 67
**Colors:** 143 unique (351 near-duplicates)
**Agent Insight:** "No clear type scale. Brand primary fails AA on every interactive element. 143 colors suggests no design system is actually enforced."

**NEXUS's Diagnosis:**
> "This isn't a broken design system — it's the absence of one. Start with AA compliance (5 min fix), then consolidate to ~20 semantic colors (2 hours). Align to Polaris as your foundation."

That last line is the difference between a report and an **action plan**.

---

## Cost & Model Strategy

Different agents use different models — intentionally.

[IMAGE: Cost comparison table]

| Agent | Model | Why This Model | Cost |
|-------|-------|----------------|------|
| Rule Engine | None | Math doesn't need AI | $0.00 |
| AURORA | Qwen 72B | Creative color reasoning | ~Free (HF PRO) |
| ATLAS | Llama 3.3 70B | 128K context for benchmarks | ~Free (HF PRO) |
| SENTINEL | Qwen 72B | Strict, consistent evaluation | ~Free (HF PRO) |
| NEXUS | Llama 3.3 70B | 128K context for synthesis | ~$0.001 |
| **Total** | | | **~$0.003** |

For designer-scale usage (weekly runs), inference costs are effectively negligible, with HuggingFace PRO ($9/month) covering most models.

Compared to V1, this architecture delivers:
- **~100–300x cost reduction**
- **Faster execution** (rule engine: <1s vs LLM: 15s for the same math)
- **Better accuracy** (LLMs hallucinate math; rule engines don't)
- **Graceful degradation** (always produces output, even when LLMs fail)

---

## Graceful Degradation

The system **always produces output**, even when components fail:

| If This Fails... | What Happens |
|------------------|--------------|
| LLM agents down | Rule engine analysis still works (free) |
| Firecrawl unavailable | DOM-only extraction (slightly fewer tokens) |
| Benchmark fetch fails | Hardcoded fallback data from 8 systems |
| NEXUS synthesis fails | `create_fallback_synthesis()` from rule engine |
| **Entire AI layer** | **Full rule-engine-only report — still useful** |
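The pattern behind that table is a plain try/fallback around every LLM call. A minimal sketch, assuming hypothetical function names (only `create_fallback_synthesis` is named in the real codebase):

```python
def create_fallback_synthesis(rule_report: dict) -> dict:
    # Deterministic summary built purely from Layer 1 output; always succeeds.
    return {"source": "rule_engine", "score": rule_report.get("consistency_score")}

def run_synthesis(rule_report: dict, call_llm) -> dict:
    """Try the NEXUS LLM call; on any failure, fall back to the rule engine."""
    try:
        return call_llm(rule_report)
    except Exception:
        return create_fallback_synthesis(rule_report)

def flaky_llm(_report):
    # Simulates the inference endpoint being down.
    raise TimeoutError("inference endpoint unavailable")

result = run_synthesis({"consistency_score": 52}, flaky_llm)
print(result["source"])  # rule_engine
```

Because the fallback is deterministic, the pipeline's worst case is a rule-engine-only report rather than an error.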
---

## What I Learned

**1. Overusing LLMs is a design failure.**
If rules can do it faster and cheaper — use rules. My WCAG checker is 100% accurate. An LLM's contrast ratio calculation? Maybe 85% accurate, and 100x slower.

**2. Industry benchmarks are gold.**
Without benchmarks: "Your type scale is inconsistent" → *PM nods*
With benchmarks: "You're 87% aligned to Shopify Polaris. Closing the gap takes 1 hour and makes your system industry-standard." → *PM schedules meeting*

Time to build benchmark database: 1 day
Value: Transforms analysis into prioritized action

**3. Specialized agents > one big prompt.**
One mega-prompt doing brand analysis + benchmark comparison + accessibility audit + synthesis = confused, unfocused output. Four agents, each with a single responsibility = sharp, reliable analysis.

The same principle as microservices: do one thing well.

**4. UX skills transfer directly to AI systems.**
Agent design feels a lot like service design:
- flows
- handoffs
- failure modes
- human interpretation

The best AI architectures are the ones designed like good products.

---

## A Note on the Tech Stack

**On HuggingFace Spaces:** I'm using HF Spaces as the hosting platform with a Gradio frontend running in Docker. The LLM models (Qwen 72B, Llama 3.3 70B) are called via HuggingFace Inference API. Browser automation (Playwright + Chromium) runs inside the container.

**On the Data:** This system works on **live websites** — point it at any URL and it extracts real design tokens from the actual DOM. No synthetic data. The architecture, LLM integrations, and rule engine are production-ready.

🔗 **HuggingFace Space** (Live Demo): [link]

[IMAGE: Screenshot of the Gradio UI showing full analysis results]

---

## Closing Thought

AI engineering isn't about fancy models or complex architecture. It's about knowing which problems need AI vs. good engineering.

It's **compression** — compressing days of manual audit, multiple expert perspectives, and industry benchmarking into something a team can act on Monday morning.

Instead of 2–3 days reviewing DevTools, your team gets:
> "Top 3 issues, ranked by impact, with specific fixes, benchmark alignment, and brand color identification"

That's AI amplifying design systems impact.

🔗 Full code on GitHub: [link]

---

*This is Episode [X] of "AI in My Daily Work."*

*If you missed the previous episodes:*
- *Episode 5: Building a 7-Agent UX Friction Analysis System in Databricks*
- *Episode 4: Automating UI Regression Testing with AI Agents (Part-1)*
- *Episode 3: Building a Multi-Agent Review Intelligence System*
- *Episode 2: How I Use a Team of AI Agents to Automate Secondary Research*

*What problems are you automating with AI? Drop a comment — I'd love to discuss what you're building.*
core/__init__.py CHANGED

@@ -1,5 +1,5 @@
 """
-Core utilities for Design System
+Core utilities for Design System Automation.
 """

 from core.token_schema import (
core/color_classifier.py
CHANGED
@@ -1,6 +1,6 @@
 """
 Rule-Based Color Classifier
-Design System Extractor v3.1
+Design System Automation v3.1
 
 100% deterministic color classification and naming.
 NO LLM involved. Every decision logged with evidence.

core/color_utils.py
CHANGED
@@ -1,6 +1,6 @@
 """
 Color Utilities
-Design System Extractor
+Design System Automation
 
 Functions for color analysis, contrast calculation, and ramp generation.
 """

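The docstring above mentions contrast calculation. As an aside, the contrast math used for design-token audits is the standard WCAG 2.x formula; the sketch below is an independent, from-scratch version of that formula for illustration, not the actual code in `core/color_utils.py`.

```python
# Standard WCAG 2.x relative-luminance and contrast-ratio math.
# Illustrative only; the project's own implementation lives in core/color_utils.py.
def _channel(c8: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio of a foreground/background pair; order does not matter."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0, the maximum
print(contrast_ratio("#666666", "#ffffff") >= 4.5)     # True: passes AA for body text
```

Checking against actual foreground/background pairs (rather than every color against white) is what makes the reported AA numbers meaningful.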
core/hf_inference.py
CHANGED
@@ -1,6 +1,6 @@
 """
 HuggingFace Inference Client
-Design System Extractor
+Design System Automation
 
 Handles all LLM inference calls using HuggingFace Inference API.
 Supports diverse models from different providers for specialized tasks.

core/logging.py
CHANGED
@@ -1,5 +1,5 @@
 """
-Structured Logging for Design System Extractor
+Structured Logging for Design System Automation
 ================================================
 
 Provides consistent logging across the application using loguru.

core/token_schema.py
CHANGED
@@ -1,6 +1,6 @@
 """
 Token Schema Definitions
-Design System Extractor v3
+Design System Automation v3
 
 Pydantic models for all token types and extraction results.
 These are the core data structures used throughout the application.
@@ -401,7 +401,7 @@ class TokenMetadata(BaseModel):
     extracted_at: datetime
     version: str
     viewport: Viewport
-    generator: str = "Design System Extractor v3"
+    generator: str = "Design System Automation v3"
 
 
 class FinalTokens(BaseModel):

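The `generator` default above is how the rebrand reaches every exported token file: any metadata object built without an explicit generator carries the new name. A minimal stdlib sketch of that mechanic (the real model uses Pydantic's `BaseModel` and includes a `viewport` field; this illustration omits it):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Simplified stand-in for the TokenMetadata model shown in the hunk above.
@dataclass
class TokenMetadata:
    extracted_at: datetime
    version: str
    generator: str = "Design System Automation v3"

meta = TokenMetadata(extracted_at=datetime.now(timezone.utc), version="3.2")
print(meta.generator)  # the rebranded default travels into every export
```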
docs/CONTEXT.md
DELETED
@@ -1,190 +0,0 @@
-# Design System Extractor v3.2 — Master Context File
-
-> **Upload this file to refresh Claude's context when continuing work on this project.**
-
-**Last Updated:** February 2026
-
----
-
-## Current Status
-
-| Component | Status | Version |
-|-----------|--------|---------|
-| Token Extraction (Part 1) | COMPLETE | v3.2 |
-| Color Classification | COMPLETE | v3.1 |
-| DTCG Compliance | COMPLETE | v3.2 |
-| Naming Authority Chain | COMPLETE | v3.2 |
-| Figma Plugin (Visual Spec) | COMPLETE | v7 |
-| Component Generation (Part 2) | RESEARCH DONE | - |
-| Tests | 113 passing | - |
-
----
-
-## Project Goal
-
-Build a **semi-automated, human-in-the-loop system** that:
-1. Reverse-engineers a design system from a live website
-2. Classifies colors deterministically by CSS evidence
-3. Audits against industry benchmarks and best practices
-4. Outputs W3C DTCG v1 compliant JSON
-5. Generates Figma Variables, Styles, and Visual Spec pages
-6. (Part 2) Auto-generates Figma components from tokens
-
-**Philosophy:** AI as copilot, not autopilot. Humans decide, agents propose.
-
----
-
-## Architecture (v3.2)
-
-```
-+--------------------------------------------------+
-| LAYER 1: EXTRACTION + NORMALIZATION (Free)       |
-| +- Crawler + 7-Source Extractor (Playwright)     |
-| +- Normalizer: colors, radius, shadows, typo     |
-| +- Firecrawl: deep CSS parsing                   |
-+--------------------------------------------------+
-| LAYER 2: CLASSIFICATION + RULE ENGINE (Free)     |
-| +- Color Classifier (815 lines, deterministic)   |
-| +- WCAG Contrast Checker (actual FG/BG pairs)    |
-| +- Type Scale Detection (ratio math)             |
-| +- Spacing Grid Analysis (GCD math)              |
-+--------------------------------------------------+
-| LAYER 3: 4 AI AGENTS (~$0.003)                   |
-| +- AURORA - Brand Advisor (Qwen 72B)             |
-| +- ATLAS - Benchmark Advisor (Llama 70B)         |
-| +- SENTINEL - Best Practices Audit (Qwen 72B)    |
-| +- NEXUS - Head Synthesizer (Llama 70B)          |
-+--------------------------------------------------+
-| EXPORT: W3C DTCG v1 Compliant JSON               |
-| +- $type, $value, $description, $extensions      |
-| +- Figma Plugin: Variables + Styles + Visual Spec|
-+--------------------------------------------------+
-```
-
-### Naming Authority Chain (v3.2)
-
-```
-1. Color Classifier (PRIMARY) - deterministic, covers ALL colors
-   +- CSS evidence -> category -> token name
-   +- 100% reproducible, logged with evidence
-
-2. AURORA LLM (SECONDARY) - semantic role enhancer ONLY
-   +- Can promote "color.blue.500" -> "color.brand.primary"
-   +- CANNOT rename palette colors
-   +- filter_aurora_naming_map() enforces boundary
-
-3. Normalizer (FALLBACK) - preliminary hue+shade names
-```
-
----
-
-## File Structure
-
-```
-design-system-extractor-v3/
-+-- app.py                          # Main Gradio app (~5000 lines)
-+-- CLAUDE.md                       # Project context and architecture
-+-- PART2_COMPONENT_GENERATION.md   # Part 2 research + plan
-|
-+-- agents/
-|   +-- crawler.py                  # Page discovery
-|   +-- extractor.py                # Playwright 7-source extraction
-|   +-- firecrawl_extractor.py      # Deep CSS parsing
-|   +-- normalizer.py               # Token normalization (~950 lines)
-|   +-- llm_agents.py               # AURORA, ATLAS, SENTINEL, NEXUS
-|   +-- semantic_analyzer.py        # DEPRECATED in v3.2
-|   +-- stage2_graph.py             # DEPRECATED in v3.2
-|
-+-- core/
-|   +-- color_classifier.py         # Rule-based classification (815 lines)
-|   +-- color_utils.py              # Color math (hex/RGB/HSL, contrast)
-|   +-- rule_engine.py              # Type scale, WCAG, spacing grid (~1100 lines)
-|   +-- hf_inference.py             # HuggingFace Inference API client
-|   +-- token_schema.py             # Pydantic models
-|
-+-- config/
-|   +-- settings.py                 # Configuration
-|
-+-- tests/
-|   +-- test_stage1_extraction.py   # 82 deterministic tests
-|   +-- test_agent_evals.py         # 27 LLM agent schema/behavior tests
-|   +-- test_stage2_pipeline.py     # Pipeline integration tests
-|
-+-- output_json/
-|   +-- figma-plugin-extracted/
-|   +-- figma-design-token-creator 5/
-|       +-- src/code.js             # Figma plugin (~1200 lines)
-|       +-- src/ui.html             # Plugin UI (~500 lines)
-|
-+-- docs/
-    +-- MEDIUM_ARTICLE_EPISODE_6.md # Medium article
-    +-- LINKEDIN_POST_EPISODE_6.md  # LinkedIn post
-    +-- IMAGE_GUIDE_EPISODE_6.md    # Image specs for article
-    +-- FIGMA_SPECIMEN_IDEAS.md     # Visual spec layout reference
-    +-- CONTEXT.md                  # THIS FILE
-```
-
----
-
-## Model Assignments
-
-| Agent | Model | Temperature | Role |
-|-------|-------|-------------|------|
-| Rule Engine | None | - | WCAG, type scale, spacing (FREE) |
-| Color Classifier | None | - | CSS evidence -> category (FREE) |
-| AURORA | Qwen/Qwen2.5-72B-Instruct | 0.4 | Brand advisor (SECONDARY) |
-| ATLAS | meta-llama/Llama-3.3-70B-Instruct | 0.25 | Benchmark comparison |
-| SENTINEL | Qwen/Qwen2.5-72B-Instruct | 0.2 | Best practices audit |
-| NEXUS | meta-llama/Llama-3.3-70B-Instruct | 0.3 | Final synthesis |
-
-**Total cost per analysis:** ~$0.003
-
----
-
-## Key Technical Decisions
-
-| Decision | Choice | Rationale |
-|----------|--------|-----------|
-| Color naming | Numeric shades (50-900) | Never words (light/dark/base) |
-| Naming authority | Classifier PRIMARY, LLM SECONDARY | One source of truth |
-| Export format | W3C DTCG v1 | Industry standard (Oct 2025) |
-| Token metadata | $extensions (namespaced) | Frequency, confidence, evidence |
-| Radius processing | Parse, deduplicate, sort, name | none/sm/md/lg/xl/2xl/full |
-| Shadow processing | Parse, sort by blur, name | xs/sm/md/lg/xl (always 5 levels) |
-| Accessibility | Actual FG/BG pairs from DOM | Not just color vs white |
-| Figma output | Variables + Styles + Visual Spec | Auto-generated specimen page |
-| LLM role | Advisory only, never naming authority | Deterministic reproducibility |
-
----
-
-## Execution Status
-
-### Part 1: Token Extraction + Analysis (COMPLETE)
-
-```
-PHASE 1: NORMALIZER              [DONE]
-PHASE 2: STAGE 2 AGENTS          [DONE]
-PHASE 3: EXPORT + DTCG           [DONE]
-PHASE 4: EXTRACTION IMPROVEMENTS [NOT STARTED]
-  4a. Font family detection (still returns "sans-serif")
-  4b. Rule engine: radius grid analysis
-  4c. Rule engine: shadow elevation analysis
-```
-
-### Part 2: Component Generation (RESEARCH COMPLETE)
-
-**Decision:** Custom Figma Plugin (Option A)
-**Scope:** 5 MVP components, ~86 variants, ~1400 lines new plugin code
-**See:** `PART2_COMPONENT_GENERATION.md` for full details
-
----
-
-## GitHub
-
-- **Repository:** https://github.com/hiriazmo/design-system-extractor-v3
-- **Latest commit:** `6b43e51` (DTCG compliance + naming authority)
-- **Tests:** 113 passing
-
----
-
-*Last updated: 2026-02-23*
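The deleted context file above mentions "Spacing Grid Analysis (GCD math)" in Layer 2. The idea behind that analysis can be sketched in a few lines: the base grid of a set of observed spacing values is their greatest common divisor. This is an illustration of the technique, not the project's `rule_engine.py` code; function names are ours.

```python
from math import gcd
from functools import reduce

# The base grid that all observed spacing values sit on is their GCD.
# A single off-grid value immediately collapses the detected grid.
def detect_spacing_grid(values_px: list[int]) -> int:
    """Return the base grid (in px) shared by all observed spacings."""
    return reduce(gcd, values_px)

print(detect_spacing_grid([8, 16, 24, 32, 48]))   # 8  -> clean 8px grid
print(detect_spacing_grid([8, 16, 23, 32, 48]))   # 1  -> one off-grid value ruins it
```

In practice a real analyzer would likely ignore rare outliers before taking the GCD, since one stray 23px margin should not demote an otherwise consistent 8px grid.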
docs/FIGMA_SPECIMEN_IDEAS.md
DELETED
@@ -1,508 +0,0 @@
-# Figma Design System Specimen Page Ideas
-
-## Purpose
-After importing the JSON (AS-IS or TO-BE) via your plugin, you need a visual way to **display and review** the design tokens. This document provides layout ideas and methods to auto-generate specimen pages.
-
----
-
-## Specimen Page Layout
-
-### Overall Structure
-```
-┌──────────────────────────────────────────────────────────────┐
-│                  DESIGN SYSTEM SPECIMEN                      │
-│                   [AS-IS] or [TO-BE]                         │
-│        Source: example.com | Generated: Jan 29, 2026         │
-├──────────────────────────────────────────────────────────────┤
-│  ┌─────────────────────────┐  ┌─────────────────────────┐    │
-│  │   TYPOGRAPHY DESKTOP    │  │   TYPOGRAPHY MOBILE     │    │
-│  └─────────────────────────┘  └─────────────────────────┘    │
-│  ┌───────────────────────────────────────────────────────┐   │
-│  │                       COLORS                          │   │
-│  │      Brand | Text | Background | Border | Feedback    │   │
-│  └───────────────────────────────────────────────────────┘   │
-│  ┌─────────────────────────┐  ┌─────────────────────────┐    │
-│  │        SPACING          │  │      BORDER RADIUS      │    │
-│  └─────────────────────────┘  └─────────────────────────┘    │
-│  ┌───────────────────────────────────────────────────────┐   │
-│  │                       SHADOWS                         │   │
-│  └───────────────────────────────────────────────────────┘   │
-└──────────────────────────────────────────────────────────────┘
-```
-
----
-
-## Section 1: Typography
-
-### Desktop Typography (Left Column)
-```
-TYPOGRAPHY — DESKTOP (1440px)
-
-Display XL
-The quick brown fox
-Open Sans · 72px · Bold · 1.1 line-height
-Token: font.display.xl.desktop
-─────────────────────────────────────────────────────────
-Heading 1
-The quick brown fox jumps
-Open Sans · 48px · Bold · 1.2 line-height
-Token: font.heading.1.desktop
-─────────────────────────────────────────────────────────
-Heading 2
-The quick brown fox jumps over
-Open Sans · 36px · Semibold · 1.25 line-height
-Token: font.heading.2.desktop
-─────────────────────────────────────────────────────────
-Heading 3
-The quick brown fox jumps over the lazy dog
-Open Sans · 28px · Semibold · 1.3 line-height
-Token: font.heading.3.desktop
-─────────────────────────────────────────────────────────
-Body Large
-The quick brown fox jumps over the lazy dog. Pack my box
-with five dozen liquor jugs.
-Open Sans · 18px · Regular · 1.5 line-height
-Token: font.body.lg.desktop
-─────────────────────────────────────────────────────────
-Body
-The quick brown fox jumps over the lazy dog. Pack my box
-with five dozen liquor jugs. How vexingly quick daft zebras
-jump!
-Open Sans · 16px · Regular · 1.5 line-height
-Token: font.body.desktop
-─────────────────────────────────────────────────────────
-Caption
-The quick brown fox jumps over the lazy dog
-Open Sans · 12px · Regular · 1.4 line-height
-Token: font.caption.desktop
-```
-
-### Mobile Typography (Right Column)
-Same structure but with mobile values:
-- Smaller sizes (e.g., Display XL: 48px instead of 72px)
-- Token names: font.display.xl.mobile
-
-### Typography Comparison View (Alternative)
-```
-TYPOGRAPHY SCALE COMPARISON
-
-Token        Desktop   Mobile   Ratio
-─────────────────────────────────────
-display.xl   72px      48px     1.5x
-heading.1    48px      36px     1.33x
-heading.2    36px      28px     1.29x
-heading.3    28px      24px     1.17x
-body.lg      18px      16px     1.13x
-body         16px      16px     1x
-caption      12px      12px     1x
-
-Scale Ratio: 1.25 (Major Third)
-```
-
----
-
-## Section 2: Colors
-
-### Semantic Color Groups
-```
-COLORS
-
-🎨 BRAND
-#06b2c4      #c1df1f      #3860be
-Primary      Secondary    Accent
-AA: ⚠️ 3.2   AA: ⚠️ 2.1   AA: ✓ 4.8
-
-📝 TEXT
-#1a1a1a      #373737      #666666      #999999
-Primary      Secondary    Tertiary     Muted
-AA: ✓ 16.1   AA: ✓ 12.6   AA: ✓ 5.7    AA: ⚠️ 3.0
-
-🖼️ BACKGROUND
-#ffffff      #f5f5f5      #e8e8e8      #1a1a1a
-Primary      Secondary    Tertiary     Inverse
-
-📏 BORDER
-#e0e0e0      #d0d0d0      #c0c0c0
-Default      Strong       Focus
-
-🚨 FEEDBACK
-#dc2626      #16a34a      #f59e0b      #3b82f6
-Error        Success      Warning      Info
-```
-
-### Color Ramps (If Generated)
-```
-COLOR RAMPS
-
-Brand Primary
-┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
-│ 50 │100 │200 │300 │400 │500 │600 │700 │800 │900 │950 │
-│    │    │    │    │    │ ◆  │    │    │    │    │    │
-└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
-◆ = Base color (#06b2c4)
-
-Neutral
-┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
-│ 50 │100 │200 │300 │400 │500 │600 │700 │800 │900 │950 │
-└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
-```
-
----
-
-## Section 3: Spacing
-
-### Visual Spacing Scale
-```
-SPACING SCALE (8px Grid)
-
-Token      Value   Visual
-──────────────────────────────────────────────────────────
-space.0    0px     (none)
-space.1    4px     ████
-space.2    8px     ████████
-space.3    12px    ████████████
-space.4    16px    ████████████████
-space.5    20px    ████████████████████
-space.6    24px    ████████████████████████
-space.8    32px    ████████████████████████████████
-space.10   40px    ████████████████████████████████████████
-space.12   48px    ████████████████████████████████████████████████
-space.16   64px    ████████████████████████████████████████████...
-```
-
-### Spacing with Boxes
-```
-SPACING — VISUAL REFERENCE
-
-4px        8px        16px       24px       32px       48px
-space.1    space.2    space.4    space.6    space.8    space.12
-
-(row of squares, each drawn at its actual size from 4px up to 48px)
-```
-
----
-
-## Section 4: Border Radius
-
-### Radius Visual Display
-```
-BORDER RADIUS
-
-┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐  ╭────────╮
-│        │  │        │  │        │  │        │  │        │
-│        │  │        │  │        │  │        │  │        │
-└────────┘  └────────┘  └────────┘  └────────┘  ╰────────╯
-0px         4px         8px         12px        9999px
-radius.none radius.sm   radius.md   radius.lg   radius.full
-```
-
----
-
-## Section 5: Shadows
-
-### Shadow Visual Display
-```
-SHADOWS / ELEVATION
-
-┌─────────────┐    ┌─────────────┐    ┌─────────────┐
-│   Level 1   │    │   Level 2   │    │   Level 3   │
-└─────────────┘    └─────────────┘    └─────────────┘
-░░░░░░░░░░░░░      ▒▒▒▒▒▒▒▒▒▒▒▒▒      ▓▓▓▓▓▓▓▓▓▓▓▓▓
-
-shadow.sm          shadow.md          shadow.lg
-0 1px 2px          0 4px 8px          0 8px 24px
-rgba(0,0,0,0.05)   rgba(0,0,0,0.1)    rgba(0,0,0,0.15)
-
-Use: Subtle lift   Use: Cards, menus  Use: Modals, dialogs
-```
-
----
-
-## Methods to Auto-Generate Specimen in Figma
-
-### Method 1: Figma Plugin Extension (Recommended)
-
-Extend your existing plugin to:
-1. Import JSON → Create Variables (already done)
-2. **NEW: Generate Specimen Page**
-   - Create a new page called "📋 Design System Specimen"
-   - Auto-generate frames for each token category
-   - Apply variables to the specimen elements
-
-**Plugin Code Concept:**
-```javascript
-// After importing variables...
-async function generateSpecimenPage() {
-  // Create page
-  const page = figma.createPage();
-  page.name = "📋 Design System Specimen";
-
-  // Create Typography section
-  const typoFrame = createTypographySpecimen(typographyVariables);
-
-  // Create Colors section
-  const colorFrame = createColorSpecimen(colorVariables);
-
-  // Create Spacing section
-  const spacingFrame = createSpacingSpecimen(spacingVariables);
-
-  // ... etc
-}
-```
-
-### Method 2: Figma Template + Variables
-
-1. **Create a Master Template** (one-time setup):
-   - Design the specimen layout manually
-   - Use placeholder text/colors
-2. **Connect to Variables**:
-   - Bind text layers to typography variables
-   - Bind fills to color variables
-   - Bind auto-layout gaps to spacing variables
-3. **On Import**:
-   - Variables update → Specimen auto-updates
-
-**Advantage:** Beautiful, customized design
-**Disadvantage:** Manual template creation
-
-### Method 3: Community Plugin — "Design Tokens to Figma"
-
-Use existing plugins that can generate visual specimens:
-- **Tokens Studio for Figma** — Has specimen generation
-- **Themer** — Creates color ramps visually
-- **Design System Organizer** — Structures tokens
-
-### Method 4: Widget (Most Interactive)
-
-Create a **Figma Widget** that:
-- Reads variables from the document
-- Renders an interactive specimen
-- Updates in real-time
-
-**Advantage:** Live, interactive
-**Disadvantage:** More complex to build
-
----
-
-## Recommended Approach for You
-
-Given you already have a plugin:
-
-### Quick Win (30 min)
-1. Create a **Figma template file** with the specimen layout
-2. Manually connect elements to variables
-3. Duplicate template for each project
-
-### Better Solution (2-4 hours)
-Extend your plugin to auto-generate the specimen page:
-
-```javascript
-// Add to your existing plugin
-figma.ui.onmessage = async (msg) => {
-  if (msg.type === 'import-json') {
-    // Your existing import code...
-    await importVariables(msg.data);
-
-    // NEW: Generate specimen
-    if (msg.generateSpecimen) {
-      await generateSpecimenPage();
-    }
-  }
-};
-
-async function generateSpecimenPage() {
-  const page = figma.createPage();
-  page.name = `📋 Specimen — ${new Date().toLocaleDateString()}`;
-  figma.currentPage = page;
-
-  let yOffset = 0;
-
-  // Typography
-  yOffset = await createTypographySection(0, yOffset);
-
-  // Colors
-  yOffset = await createColorSection(0, yOffset + 100);
-
-  // Spacing
-  yOffset = await createSpacingSection(0, yOffset + 100);
-
-  // Radius
-  yOffset = await createRadiusSection(0, yOffset + 100);
-
-  // Shadows
-  await createShadowSection(0, yOffset + 100);
-
-  figma.viewport.scrollAndZoomIntoView(page.children);
-}
-```
-
----
-
-## AS-IS vs TO-BE Comparison View
-
-For comparing before/after:
-
-```
-COMPARISON: AS-IS → TO-BE
-
-TYPOGRAPHY
-─────────────────────────────────────────────────────
-Token        AS-IS    TO-BE    Change
-display.xl   72px     72px     —
-heading.1    46px     48px     +2px (scale aligned)
-heading.2    34px     36px     +2px (scale aligned)
-body         16px     16px     —
-
-Scale Ratio: ~1.18 (random) → 1.25 (Major Third)  ✓ Improved
-
-COLORS
-─────────────────────────────────────────────────────
-Token          AS-IS     TO-BE     Change
-brand.primary  #06b2c4   #0891a8   AA: 3.2 → 4.6 ✓
-text.primary   #373737   #373737   — (no change)
-
-SPACING
-─────────────────────────────────────────────────────
-Grid: Mixed → 8px  ✓ Standardized
-```
-
----
-
-## Summary
-
-| Method | Effort | Best For |
-|--------|--------|----------|
-| Template + Variables | Low | Quick setup, one-off projects |
-| Plugin Extension | Medium | Reusable, consistent output |
-| Widget | High | Interactive, real-time updates |
-| Community Plugin | None | If existing solution fits |
-
-**My Recommendation:** Extend your plugin to auto-generate the specimen page. It's a one-time investment that pays off every time you use the workflow.
docs/IMAGE_GUIDE_EPISODE_6.md CHANGED

```diff
@@ -179,7 +179,7 @@ Category Caps: brand(3) text(3) bg(3) border(3) feedback(4) palette(rest)
     "$type": "color",
     "$value": "#005aa3",
     "$extensions": {
-      "com.design-system-
+      "com.design-system-automation": {
        "frequency": 47,
        "confidence": "high"
      }
```
docs/LINKEDIN_POST_EPISODE_6.md CHANGED

```diff
@@ -1,4 +1,4 @@
-# LinkedIn Post - Episode 6: Design System
+# LinkedIn Post - Episode 6: Design System Automation v3.2

 ## Main Post (Copy-Paste Ready)
```
docs/MEDIUM_ARTICLE_EPISODE_6.md CHANGED

```diff
@@ -487,7 +487,7 @@ V3's export follows the W3C Design Tokens Community Group specification (stable
     "$value": "#005aa3",
     "$description": "[classifier] brand: primary_action",
     "$extensions": {
-      "com.design-system-
+      "com.design-system-automation": {
        "frequency": 47,
        "confidence": "high",
        "category": "brand",
```

docs/MEDIUM_ARTICLE_EPISODE_6_V2.md ADDED

@@ -0,0 +1,264 @@
# AI in My Daily Work — Episode 6: How 4 AI Agents + a Color Classifier Reverse-Engineer Any Website's Design System

## From URL to Figma in 15 Minutes (Not 5 Days)

*I built a system that extracts design tokens from any live website, classifies colors by actual CSS usage, audits everything against industry standards, and drops it into Figma as a visual spec — for less than a cent per run.*

[IMAGE: Hero - Website URL -> AI Agents -> Figma Visual Spec]

---

## The 5-Day Problem

If you've ever inherited a website and needed to understand its design system, you know the drill. Open DevTools. Click an element. Copy the hex code. Repeat 200 times. Manually check contrast ratios. Squint at font sizes trying to figure out if they follow a scale. Paste everything into a spreadsheet. Spend another day recreating it in Figma.

I've done this dozens of times across 10+ years managing design systems. It takes **3-5 days per site**. And honestly? The result is never complete.

I wanted something that thinks the way a design team does — one person extracting values, another classifying colors, someone checking accessibility, and a lead synthesizing it all into clear recommendations.

So I built one. Three versions and many mistakes later, here's what actually works.

---

## What It Does (The 30-Second Version)

You paste a URL. The system visits the site, extracts every design token it can find (colors, fonts, spacing, shadows, border radius), classifies and normalizes them, runs accessibility and consistency checks, then hands the data to 4 AI agents who analyze it like a senior design team.

You get a clean JSON file. Drop it into Figma with a custom plugin. Out comes a full visual spec page — every token displayed, organized, with AA compliance badges.

**15 minutes. Not 5 days.**

---

## How It Works: One Workflow, Three Layers

The biggest lesson from building V1 and V2 was this: **don't use AI for things math can do better.** My first version used a language model for everything — including contrast ratio calculations. It cost $1 per run and hallucinated the math.

V3 splits the work into three layers. The first two are free. Only the third uses AI, and only for tasks that genuinely need judgment.

[IMAGE: Architecture + Workflow combined — URL enters Layer 1, flows through Layer 2, then Layer 3, out to Figma]

### Layer 1 — Extraction & Normalization (Free, ~90 seconds)

A headless browser (Playwright) visits your site at two screen sizes — desktop and mobile — and pulls design values from **8 different sources**: computed styles, CSS variables, inline styles, SVG attributes, stylesheets, external CSS files, page scan, and a deep CSS parser (Firecrawl) that bypasses restrictions.

Why 8 sources? Because no single method catches everything. A brand color might live in a CSS variable, an inline style on a hero section, and an SVG logo — all at once. Casting a wide net means fewer missed tokens.
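As a rough sketch, the computed-styles source boils down to asking the browser for *resolved* styles (e.g. via Playwright's `page.evaluate`) and tallying what comes back. The JS expression and `tally_colors` helper here are illustrative assumptions, not the project's actual crawler code:

```python
from collections import Counter

# The expression a headless browser would run to read resolved styles
# rather than authored CSS (illustrative, usable with page.evaluate).
COMPUTED_STYLE_JS = """
() => [...document.querySelectorAll('*')].flatMap(el => {
  const s = getComputedStyle(el);
  return [s.color, s.backgroundColor];
})
"""

def tally_colors(raw_values):
    """Count non-transparent color values returned by a page scan."""
    return Counter(v for v in raw_values if v and v != "rgba(0, 0, 0, 0)")

# With values shaped the way a browser returns them:
counts = tally_colors([
    "rgb(0, 90, 163)", "rgb(0, 90, 163)",
    "rgba(0, 0, 0, 0)", "rgb(255, 255, 255)",
])
# counts["rgb(0, 90, 163)"] == 2; the transparent default is dropped
```

The frequency counts matter later: they are part of the evidence the classifier uses.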
The raw output is messy. You'll get the same blue in three slightly different hex values. Border radius values like `"0px 0px 16px 16px"` that Figma can't use. Shadow CSS strings with no meaningful names.

The normalizer cleans all of this:

- **Colors** — Merges near-duplicates (if two blues are almost identical, keep one). Assigns a hue family and numeric shade: `color.blue.500`, `color.neutral.200`. Never vague labels like "light" or "dark."
- **Border Radius** — Parses multi-value shorthand, converts percentages and rem units to pixels, removes duplicates, and names them logically: `radius.sm` (4px), `radius.md` (8px), `radius.full` (9999px).
- **Shadows** — Breaks down CSS shadow strings into components, filters out fake shadows (like spread-only borders), sorts by blur amount, and always produces 5 clean elevation levels: `shadow.xs` through `shadow.xl`.

Nothing here uses AI. It's parsing, math, and sorting.
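The radius step, for example, can be sketched in a few lines. The thresholds and token names below are assumptions for illustration (this sketch also skips percentage radii, which depend on box size):

```python
import re

def parse_radius_values(raw, root_px=16.0):
    """Split a border-radius shorthand into individual pixel values."""
    values = []
    # Ignore the elliptical second radius after "/" for simplicity.
    for token in raw.split("/")[0].split():
        m = re.fullmatch(r"([\d.]+)(px|rem|em|%)?", token)
        if not m:
            continue
        num, unit = float(m.group(1)), m.group(2) or "px"
        if unit == "%":
            continue  # percentage radii depend on box size; skipped here
        if unit in ("rem", "em"):
            num *= root_px  # convert relative units to pixels
        values.append(num)
    return values

def name_radius(px):
    """Map a pixel value to a logical token name (thresholds are assumptions)."""
    if px >= 999:
        return "radius.full"
    if px <= 4:
        return "radius.sm"
    if px <= 8:
        return "radius.md"
    return "radius.lg"

# Deduplicate and drop zero-radius entries before naming:
tokens = sorted({v for v in parse_radius_values("0px 0px 16px 16px") if v > 0})
# tokens == [16.0], and name_radius(16.0) == "radius.lg"
```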
### Layer 2 — Classification & Rules (Free, <1 second)

This is where V3 made its biggest leap. Instead of asking an AI to figure out which color is "brand primary," I wrote 815 lines of deterministic code that reads the CSS evidence directly.

**The Color Classifier** looks at how each color is actually used on the page:

- A saturated color on `<button>` elements, appearing 30+ times? That's a brand color.
- A low-saturation color on `<p>` and `<span>` text? That's a text color.
- A neutral on `<div>` and `<body>` backgrounds? That's a background color.
- A red with high saturation appearing infrequently? Likely an error/feedback color.
- Everything else goes into the palette by hue family.

Every single decision gets logged with evidence: *"#06b2c4 classified as brand — found on background-color of button elements, frequency 33."* Run it twice, get the exact same result. An LLM can't promise that.

The classifier also caps each category (max 3 brand colors, max 3 text colors, etc.) so you don't end up with 15 things all called "brand."
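A toy version of that evidence rule looks like this. The real classifier is ~815 lines; the `Usage` record, saturation thresholds, and frequency cutoffs here are illustrative assumptions:

```python
import colorsys
from dataclasses import dataclass

@dataclass
class Usage:
    hex: str        # e.g. "#06b2c4"
    elements: set   # tags the color was seen on
    frequency: int  # occurrences across the page

def saturation(hex_color):
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hls(r, g, b)[2]  # HLS: saturation is index 2

def classify(u):
    """Deterministic category from CSS evidence (thresholds are assumptions)."""
    s = saturation(u.hex)
    if s > 0.4 and u.elements & {"button", "a"} and u.frequency >= 30:
        return "brand"
    if s < 0.2 and u.elements & {"p", "span"}:
        return "text"
    if s < 0.15 and u.elements & {"div", "body"}:
        return "background"
    return "palette"

classify(Usage("#06b2c4", {"button"}, 33))  # -> "brand"
```

Because it is plain arithmetic over recorded evidence, the same input always yields the same category, which is exactly what the logging-and-reproducibility claim above depends on.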
**The Rule Engine** then runs pure-math checks on the classified tokens:

- **Accessibility**: Tests actual foreground/background color pairs found on the page (not just "does this color pass on white?"). Generates AA-compliant alternatives automatically.
- **Type Scale**: Calculates the ratio between consecutive font sizes, finds the closest standard scale (Major Third, Minor Third, etc.), and flags inconsistencies.
- **Spacing Grid**: Detects the mathematical base (4px? 8px?) and measures how well the site's spacing values align.
- **Color Statistics**: Counts near-duplicates, hue distribution, and saturation patterns.

The result is a consistency score out of 100, backed entirely by data.
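The AA check is a good example of why this layer needs no AI: it is just the standard WCAG 2.x relative-luminance formula. This sketch omits the pair-finding logic that decides *which* foreground/background combinations to test:

```python
def _channel(c):
    # sRGB channel linearization per WCAG 2.x
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg, large_text=False):
    # WCAG AA thresholds: 4.5:1 for body text, 3:1 for large text
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

contrast_ratio("#000000", "#ffffff")  # -> 21.0, the maximum possible ratio
```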
### Layer 3 — 4 AI Agents (~$0.003)

Now the AI enters — but with strict guardrails. Each agent has one job, uses one model, and is **advisory only**. They cannot override the classifier's naming.

**AURORA (Brand Advisor)** — *Qwen 72B*
Looks at the classified colors and identifies brand strategy. Is it complementary? Monochrome? Which palette color deserves promotion to a semantic role like `brand.primary`? AURORA can suggest promotions, but a filter (`filter_aurora_naming_map`) rejects anything that isn't a valid semantic role. No creative renaming allowed.

**ATLAS (Benchmark Advisor)** — *Llama 3.3 70B*
Compares your extracted system against 8 industry design systems (Material 3, Shopify Polaris, Atlassian, Carbon, Apple HIG, Tailwind, Ant Design, Chakra). Tells you which one you're closest to and what it would take to align: *"You're 87% aligned to Polaris. Closing the type scale gap takes about an hour."*

**SENTINEL (Best Practices Auditor)** — *Qwen 72B*
Scores your system across 6 checks (AA compliance, type scale consistency, spacing grid, near-duplicates, etc.) and prioritizes fixes by business impact. Must cite actual data from the rule engine — if the engine found 67 AA failures, SENTINEL can't claim accessibility "passes." A cross-reference critic catches contradictions.

**NEXUS (Head Synthesizer)** — *Llama 3.3 70B*
Takes everything — classifier output, rule engine scores, all three agents' analyses — and produces a final executive summary. Evaluates from two perspectives (accessibility-weighted vs. balanced), picks the one that best reflects reality, and outputs a ranked top-3 action list with specific hex values and effort estimates.

```
NEXUS Summary:
Score: 68/100
Top Action: Fix brand primary contrast (#06b2c4 -> #048391)
Impact: HIGH | Effort: 5 min | Affects 40% of CTAs
```

---

## The Naming Problem (And Why It Matters)

This deserves its own callout because it was the hardest problem to solve — and it's invisible to most people.

In V2, three different systems produced color names:

| System | Output | Example |
|--------|--------|---------|
| Normalizer | Word shades | `color.blue.light` |
| Export function | Numeric shades | `color.blue.500` |
| AURORA (LLM) | Creative names | `brand.primary` |

The result in Figma? `blue.300`, `blue.dark`, `blue.light`, and `blue.base` — all in the same file. Completely unusable.

V3 established a strict chain of command:

1. **Color Classifier** (primary authority) — names every color, deterministically
2. **AURORA** (secondary, advisory) — can suggest semantic role promotions only
3. **Normalizer** (fallback) — only if the classifier hasn't run

One authority. No conflicts. Clean output every time.
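That chain of command can be expressed as a tiny precedence resolver. The function name and the semantic-role whitelist below are an illustrative subset, not the project's actual code:

```python
# A whitelist of valid semantic roles; AURORA suggestions outside it are rejected.
SEMANTIC_ROLES = {
    "brand.primary", "brand.secondary", "text.primary",
    "text.secondary", "bg.primary", "feedback.error",
}

def resolve_name(classifier_name=None, aurora_suggestion=None, normalizer_name=None):
    # 1. The classifier is the primary authority.
    if classifier_name:
        # 2. AURORA may only *promote* a color to a whitelisted semantic role.
        if aurora_suggestion in SEMANTIC_ROLES:
            return aurora_suggestion
        return classifier_name
    # 3. The normalizer is only a fallback when the classifier hasn't run.
    return normalizer_name

resolve_name("color.blue.500", "brand.primary")  # -> "brand.primary" (valid promotion)
resolve_name("color.blue.500", "ocean-breeze")   # -> "color.blue.500" (rejected)
```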
---

## Into Figma: The Last Mile

The system exports W3C DTCG-compliant JSON — the industry standard for design tokens (finalized October 2025). Every token includes its type, value, description, and extraction metadata:

```json
{
  "color": {
    "brand": {
      "primary": {
        "$type": "color",
        "$value": "#005aa3",
        "$description": "[classifier] brand: primary_action"
      }
    }
  },
  "radius": {
    "md": { "$type": "dimension", "$value": "8px" }
  }
}
```

A custom Figma plugin imports this JSON and:

1. Creates **Figma Variables** (color, number, and string collections)
2. Creates **Styles** (paint, text, and effect styles)
3. Auto-generates a **Visual Spec Page** — separate frames for typography, colors, spacing, radius, and shadows, with AA compliance badges on every color swatch
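On the consuming side, walking DTCG JSON is straightforward because any object carrying `$value` is a token and everything above it is a group path. A minimal sketch (the actual plugin is TypeScript; this is the same idea in Python):

```python
def flatten_tokens(node, path=()):
    """Return {'color.brand.primary': {'$type': ..., '$value': ...}, ...}."""
    tokens = {}
    if "$value" in node:
        # Per the DTCG spec, an object with $value is a token, not a group.
        tokens[".".join(path)] = node
        return tokens
    for key, child in node.items():
        if isinstance(child, dict):
            tokens.update(flatten_tokens(child, path + (key,)))
    return tokens

dtcg = {
    "color": {"brand": {"primary": {"$type": "color", "$value": "#005aa3"}}},
    "radius": {"md": {"$type": "dimension", "$value": "8px"}},
}
flat = flatten_tokens(dtcg)
# flat has keys "color.brand.primary" and "radius.md"
```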
[IMAGE: Figma visual spec page showing organized tokens with AA badges]

You run the full workflow twice — once for the AS-IS (what exists today) and once for the TO-BE (with accepted improvements). Place them side by side in Figma and the story tells itself:

| Token | AS-IS | TO-BE |
|-------|-------|-------|
| Brand Primary | #06b2c4 (fails AA) | #048391 (passes AA) |
| Type Scale | ~1.18 (random) | 1.25 (Major Third) |
| Spacing | Mixed values | 8px grid |
| Unique Colors | 143 | ~20 semantic |
| Radius | Raw CSS garbage | none/sm/md/lg/xl/full |
| Shadows | Unsorted, unnamed | 5 progressive levels |

---

## What It Costs

| Component | Cost |
|-----------|------|
| Extraction + Normalization | $0.00 |
| Color Classifier (815 lines of code) | $0.00 |
| Rule Engine (WCAG, type scale, spacing) | $0.00 |
| 4 AI Agents (via HuggingFace Inference) | ~$0.003 |
| **Total per analysis** | **~$0.003** |

The free layers do 90% of the work. The AI adds context, benchmarks, and synthesis — the parts that genuinely need language understanding.

For context, V1 (all-LLM) cost $0.50-1.00 per run. Same output quality? Worse, actually — it hallucinated contrast ratios and named colors inconsistently.

---

## When Things Break

The system always produces output, even when parts fail:

| Failure | What Happens |
|---------|-------------|
| AI agents are down | Classifier + rule engine still work (free) |
| Firecrawl unavailable | 7 Playwright sources still extract |
| AURORA returns nonsense | Filter strips invalid names automatically |
| Full AI layer offline | You still get classified tokens + accessibility audit |

The architecture was designed so that the free deterministic layers are independently useful. The AI layer is a bonus, not a dependency.

---

## What I Learned Building Three Versions

**Use AI where it adds value, not everywhere.** My WCAG contrast checker is mathematically exact. An LLM doing the same calculation? Slower, expensive, and sometimes wrong. Rules handle certainty. AI handles ambiguity.

**When multiple systems touch the same data, pick one authority.** Letting three naming systems compete in V2 was the single worst architectural decision. Not because any individual system was bad — but because nobody was in charge.

**Benchmarks change conversations.** "Your type scale is inconsistent" gets a nod. "You're 87% aligned to Shopify Polaris and closing the gap takes an hour" gets a meeting scheduled.

**Specialized agents beat mega-prompts.** One giant prompt doing brand analysis + benchmarking + accessibility audit = confused output. Four agents, each with a single job = focused, reliable results.

**Semi-automation beats full automation.** The workflow has deliberate human checkpoints: review the AS-IS before modernizing, accept or reject each suggestion, inspect the TO-BE before shipping. AI as copilot, not autopilot.

**Standards create ecosystems.** Adopting W3C DTCG v1 means our output works with Tokens Studio, Style Dictionary v4, and any tool following the spec. Custom formats create lock-in.

---

## The Tech Under the Hood

**AI Agent App:** Playwright (extraction), Firecrawl (deep CSS), Gradio (UI), Qwen 72B + Llama 3.3 70B (agents), HuggingFace Spaces + Inference API (hosting), Docker, 148 tests.

**Figma Plugin:** Custom plugin (v7), W3C DTCG v1 import, Variables API, auto-generated visual spec pages, Tokens Studio compatible.

**Open Source:** Full code on GitHub — [link]

---

## What's Next: From Tokens to Components

The token story is complete. But design systems aren't just tokens — they're **components**.

After researching 30+ tools, I found a genuine gap: **no production tool takes DTCG JSON and outputs Figma components with proper variants.** Every existing tool either imports tokens without creating components, creates components from its own format but can't consume yours, or uses AI non-deterministically.

The Figma Plugin API supports everything needed. Coming in Episode 7: auto-generating Button (60 variants), TextInput, Card, Toast, and Checkbox/Radio — directly from the extracted tokens. Same tokens in, same components out.

---

*Episode 6 of "AI in My Daily Work."*

*Previous episodes:*
- *Episode 5: Building a 7-Agent UX Friction Analysis System in Databricks*
- *Episode 4: Automating UI Regression Testing with AI Agents (Part-1)*
- *Episode 3: Building a Multi-Agent Review Intelligence System*
- *Episode 2: How I Use a Team of AI Agents to Automate Secondary Research*

*What are you automating? Drop a comment — I'd love to hear what you're building.*

---

**About the Author**

I'm Riaz, a UX Design Manager with 10+ years in consumer apps. I combine design thinking with AI engineering to build tools that make design decisions faster and more data-driven.

**Connect:** LinkedIn | Medium: @designwithriaz | GitHub

---

#AIAgents #DesignSystems #UXDesign #Figma #DesignTokens #Automation #AIEngineering #HuggingFace #WCAG #W3CDTCG

---

*~9 min read*
output_json/file (16).json DELETED

@@ -1,584 +0,0 @@

{
  "color": {
    "background": {
      "primary": { "$type": "color", "$value": "#ebedef" },
      "secondary": { "$type": "color", "$value": "#bfbfbf" }
    },
    "border": {
      "default": { "$type": "color", "$value": "#122f44" }
    },
    "text": {
      "primary": { "$type": "color", "$value": "#000000" },
      "secondary": { "$type": "color", "$value": "#999999" },
      "muted": { "$type": "color", "$value": "#cccccc" }
    },
    "brand": {
      "primary": { "$type": "color", "$value": "#005aa3" },
      "secondary": { "$type": "color", "$value": "#ff0000" }
    },
    "feedback": {
      "success": { "$type": "color", "$value": "#3c7312" },
      "warning": { "$type": "color", "$value": "#ffdc00" }
    },
    "button": { "$type": "color", "$value": "#ffffff" },
    "purple": {
      "500": { "$type": "color", "$value": "#885b9a" }
    },
    "neutral": {
      "dark": { "$type": "color", "$value": "#333333" },
      "light": { "$type": "color", "$value": "#b2b8bf" }
    },
    "blue": {
      "dark": { "$type": "color", "$value": "#2c3e50" },
      "light": { "$type": "color", "$value": "#b9daff" },
      "300": { "$type": "color", "$value": "#7fdbff" },
      "base": { "$type": "color", "$value": "#6f7597" }
    },
    "yellow": {
      "light": { "$type": "color", "$value": "#fff6db" }
    },
    "orange": {
      "light": { "$type": "color", "$value": "#d0bfa4" },
      "base": { "$type": "color", "$value": "#a85410" },
      "100": { "$type": "color", "$value": "#fdebdd" }
    },
    "green": {
      "500": { "$type": "color", "$value": "#2ecc40" }
    },
    "red": {
      "base": { "$type": "color", "$value": "#ff2d55" }
    }
  },
  "font": {
    "display": {
      "2xl": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "68px", "fontWeight": "700", "lineHeight": "1.2" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "60px", "fontWeight": "700", "lineHeight": "1.2" } }
      },
      "xl": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "58px", "fontWeight": "700", "lineHeight": "1.2" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "50px", "fontWeight": "700", "lineHeight": "1.2" } }
      },
      "lg": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "48px", "fontWeight": "700", "lineHeight": "1.2" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "42px", "fontWeight": "700", "lineHeight": "1.2" } }
      },
      "md": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "40px", "fontWeight": "700", "lineHeight": "1.2" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "34px", "fontWeight": "700", "lineHeight": "1.2" } }
      }
    },
    "heading": {
      "xl": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "34px", "fontWeight": "600", "lineHeight": "1.3" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "30px", "fontWeight": "600", "lineHeight": "1.3" } }
      },
      "lg": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "28px", "fontWeight": "600", "lineHeight": "1.3" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "24px", "fontWeight": "600", "lineHeight": "1.3" } }
      },
      "md": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "24px", "fontWeight": "600", "lineHeight": "1.3" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "20px", "fontWeight": "600", "lineHeight": "1.3" } }
      },
      "sm": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "20px", "fontWeight": "600", "lineHeight": "1.3" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "16px", "fontWeight": "600", "lineHeight": "1.3" } }
      }
    },
    "body": {
      "lg": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "16px", "fontWeight": "400", "lineHeight": "1.5" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "14px", "fontWeight": "400", "lineHeight": "1.5" } }
      },
      "md": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "14px", "fontWeight": "400", "lineHeight": "1.5" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "12px", "fontWeight": "400", "lineHeight": "1.5" } }
      },
      "sm": {
        "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "12px", "fontWeight": "400", "lineHeight": "1.5" } },
        "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "10px", "fontWeight": "400", "lineHeight": "1.5" } }
      }
    },
    "caption": {
      "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "10px", "fontWeight": "400", "lineHeight": "1.4" } },
      "mobile": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "8px", "fontWeight": "400", "lineHeight": "1.4" } }
    },
    "overline": {
      "desktop": { "$type": "typography", "$value": { "fontFamily": "sans-serif", "fontSize": "8px", "fontWeight": "500", "lineHeight": "1.2"
|
| 379 |
-
}
|
| 380 |
-
},
|
| 381 |
-
"mobile": {
|
| 382 |
-
"$type": "typography",
|
| 383 |
-
"$value": {
|
| 384 |
-
"fontFamily": "sans-serif",
|
| 385 |
-
"fontSize": "6px",
|
| 386 |
-
"fontWeight": "500",
|
| 387 |
-
"lineHeight": "1.2"
|
| 388 |
-
}
|
| 389 |
-
}
|
| 390 |
-
}
|
| 391 |
-
},
|
| 392 |
-
"space": {
|
| 393 |
-
"1": {
|
| 394 |
-
"desktop": {
|
| 395 |
-
"$type": "dimension",
|
| 396 |
-
"$value": "8px"
|
| 397 |
-
},
|
| 398 |
-
"mobile": {
|
| 399 |
-
"$type": "dimension",
|
| 400 |
-
"$value": "8px"
|
| 401 |
-
}
|
| 402 |
-
},
|
| 403 |
-
"2": {
|
| 404 |
-
"desktop": {
|
| 405 |
-
"$type": "dimension",
|
| 406 |
-
"$value": "16px"
|
| 407 |
-
},
|
| 408 |
-
"mobile": {
|
| 409 |
-
"$type": "dimension",
|
| 410 |
-
"$value": "16px"
|
| 411 |
-
}
|
| 412 |
-
},
|
| 413 |
-
"3": {
|
| 414 |
-
"desktop": {
|
| 415 |
-
"$type": "dimension",
|
| 416 |
-
"$value": "24px"
|
| 417 |
-
},
|
| 418 |
-
"mobile": {
|
| 419 |
-
"$type": "dimension",
|
| 420 |
-
"$value": "24px"
|
| 421 |
-
}
|
| 422 |
-
},
|
| 423 |
-
"4": {
|
| 424 |
-
"desktop": {
|
| 425 |
-
"$type": "dimension",
|
| 426 |
-
"$value": "32px"
|
| 427 |
-
},
|
| 428 |
-
"mobile": {
|
| 429 |
-
"$type": "dimension",
|
| 430 |
-
"$value": "32px"
|
| 431 |
-
}
|
| 432 |
-
},
|
| 433 |
-
"5": {
|
| 434 |
-
"desktop": {
|
| 435 |
-
"$type": "dimension",
|
| 436 |
-
"$value": "40px"
|
| 437 |
-
},
|
| 438 |
-
"mobile": {
|
| 439 |
-
"$type": "dimension",
|
| 440 |
-
"$value": "40px"
|
| 441 |
-
}
|
| 442 |
-
},
|
| 443 |
-
"6": {
|
| 444 |
-
"desktop": {
|
| 445 |
-
"$type": "dimension",
|
| 446 |
-
"$value": "48px"
|
| 447 |
-
},
|
| 448 |
-
"mobile": {
|
| 449 |
-
"$type": "dimension",
|
| 450 |
-
"$value": "48px"
|
| 451 |
-
}
|
| 452 |
-
},
|
| 453 |
-
"8": {
|
| 454 |
-
"desktop": {
|
| 455 |
-
"$type": "dimension",
|
| 456 |
-
"$value": "56px"
|
| 457 |
-
},
|
| 458 |
-
"mobile": {
|
| 459 |
-
"$type": "dimension",
|
| 460 |
-
"$value": "56px"
|
| 461 |
-
}
|
| 462 |
-
},
|
| 463 |
-
"10": {
|
| 464 |
-
"desktop": {
|
| 465 |
-
"$type": "dimension",
|
| 466 |
-
"$value": "64px"
|
| 467 |
-
},
|
| 468 |
-
"mobile": {
|
| 469 |
-
"$type": "dimension",
|
| 470 |
-
"$value": "64px"
|
| 471 |
-
}
|
| 472 |
-
},
|
| 473 |
-
"12": {
|
| 474 |
-
"desktop": {
|
| 475 |
-
"$type": "dimension",
|
| 476 |
-
"$value": "72px"
|
| 477 |
-
},
|
| 478 |
-
"mobile": {
|
| 479 |
-
"$type": "dimension",
|
| 480 |
-
"$value": "72px"
|
| 481 |
-
}
|
| 482 |
-
},
|
| 483 |
-
"16": {
|
| 484 |
-
"desktop": {
|
| 485 |
-
"$type": "dimension",
|
| 486 |
-
"$value": "80px"
|
| 487 |
-
},
|
| 488 |
-
"mobile": {
|
| 489 |
-
"$type": "dimension",
|
| 490 |
-
"$value": "80px"
|
| 491 |
-
}
|
| 492 |
-
}
|
| 493 |
-
},
|
| 494 |
-
"radius": {
|
| 495 |
-
"xl": {
|
| 496 |
-
"$type": "dimension",
|
| 497 |
-
"$value": "16px"
|
| 498 |
-
},
|
| 499 |
-
"3xl": {
|
| 500 |
-
"$type": "dimension",
|
| 501 |
-
"$value": "50px"
|
| 502 |
-
},
|
| 503 |
-
"full": {
|
| 504 |
-
"$type": "dimension",
|
| 505 |
-
"$value": "50%",
|
| 506 |
-
"9999": {
|
| 507 |
-
"$type": "dimension",
|
| 508 |
-
"$value": "9999px"
|
| 509 |
-
},
|
| 510 |
-
"100": {
|
| 511 |
-
"$type": "dimension",
|
| 512 |
-
"$value": "100%"
|
| 513 |
-
}
|
| 514 |
-
},
|
| 515 |
-
"2xl": {
|
| 516 |
-
"$type": "dimension",
|
| 517 |
-
"$value": "24px"
|
| 518 |
-
},
|
| 519 |
-
"md": {
|
| 520 |
-
"$type": "dimension",
|
| 521 |
-
"$value": "0px 0px 16px 16px",
|
| 522 |
-
"4": {
|
| 523 |
-
"$type": "dimension",
|
| 524 |
-
"$value": "4px"
|
| 525 |
-
}
|
| 526 |
-
},
|
| 527 |
-
"lg": {
|
| 528 |
-
"$type": "dimension",
|
| 529 |
-
"$value": "8px"
|
| 530 |
-
}
|
| 531 |
-
},
|
| 532 |
-
"shadow": {
|
| 533 |
-
"xs": {
|
| 534 |
-
"$type": "shadow",
|
| 535 |
-
"$value": {
|
| 536 |
-
"color": "rgba(0, 0, 0, 0.2)",
|
| 537 |
-
"offsetX": "0px",
|
| 538 |
-
"offsetY": "10px",
|
| 539 |
-
"blur": "25px",
|
| 540 |
-
"spread": "0px"
|
| 541 |
-
}
|
| 542 |
-
},
|
| 543 |
-
"sm": {
|
| 544 |
-
"$type": "shadow",
|
| 545 |
-
"$value": {
|
| 546 |
-
"color": "rgba(0, 0, 0, 0.2)",
|
| 547 |
-
"offsetX": "0px",
|
| 548 |
-
"offsetY": "2px",
|
| 549 |
-
"blur": "30px",
|
| 550 |
-
"spread": "0px"
|
| 551 |
-
}
|
| 552 |
-
},
|
| 553 |
-
"md": {
|
| 554 |
-
"$type": "shadow",
|
| 555 |
-
"$value": {
|
| 556 |
-
"color": "rgba(0, 0, 0, 0.04)",
|
| 557 |
-
"offsetX": "0px",
|
| 558 |
-
"offsetY": "0px",
|
| 559 |
-
"blur": "80px",
|
| 560 |
-
"spread": "0px"
|
| 561 |
-
}
|
| 562 |
-
},
|
| 563 |
-
"lg": {
|
| 564 |
-
"$type": "shadow",
|
| 565 |
-
"$value": {
|
| 566 |
-
"color": "rgba(0, 0, 0, 0.06)",
|
| 567 |
-
"offsetX": "0px",
|
| 568 |
-
"offsetY": "0px",
|
| 569 |
-
"blur": "80px",
|
| 570 |
-
"spread": "0px"
|
| 571 |
-
}
|
| 572 |
-
},
|
| 573 |
-
"xl": {
|
| 574 |
-
"$type": "shadow",
|
| 575 |
-
"$value": {
|
| 576 |
-
"color": "rgba(0, 0, 0, 0.3)",
|
| 577 |
-
"offsetX": "0px",
|
| 578 |
-
"offsetY": "16px",
|
| 579 |
-
"blur": "90px",
|
| 580 |
-
"spread": "0px"
|
| 581 |
-
}
|
| 582 |
-
}
|
| 583 |
-
}
|
| 584 |
-
}
|
output_json/file (18).json
DELETED
@@ -1,584 +0,0 @@
[deleted 584-line sample DTCG token output: a color palette (text primary "#373737" through ordinal names up to "tredecenary", plus bg, border, brand, a stray "#333333" key, and neutral.400); "Open Sans" typography tokens for display, heading, body, caption, and overline with desktop/mobile variants; a space scale from "1": 8px to "16": 80px; radius tokens from xs: 1px to full: 9999px, with extra sizes such as "3", "5", "6", "100", "10", and "17" nested inside the named sizes; and shadow tokens xs and sm]
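The deleted sample outputs follow the W3C DTCG convention visible in the diff: groups are plain nested objects, and a token is any object carrying `$type` and `$value`. A minimal sketch of walking such a tree and tallying tokens by `$type` (the helper name and sample data are illustrative, not part of the project's API):

```python
def count_token_types(node, counts=None):
    """Recursively walk a W3C DTCG token tree and tally tokens by $type.

    A token is any dict with a "$value"; everything else is a group. Child
    keys are recursed into even on token nodes, since the sample files above
    show extra sizes nested inside named tokens (e.g. "100" inside "full").
    """
    if counts is None:
        counts = {}
    if isinstance(node, dict):
        if "$value" in node:
            token_type = node.get("$type", "untyped")
            counts[token_type] = counts.get(token_type, 0) + 1
        for key, child in node.items():
            if not key.startswith("$"):  # skip $type/$value/$description
                count_token_types(child, counts)
    return counts

tokens = {
    "color": {"brand": {"primary": {"$type": "color", "$value": "#06b2c4"}}},
    "radius": {"lg": {"$type": "dimension", "$value": "8px"}},
}
print(count_token_types(tokens))  # {'color': 1, 'dimension': 1}
```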
requirements.txt
CHANGED
@@ -1,5 +1,5 @@
  # =============================================================================
- # Design System
+ # Design System Automation — Dependencies
  # =============================================================================

  # -----------------------------------------------------------------------------
storage/benchmark_cache.json
DELETED
@@ -1,20 +0,0 @@
- {
-   "test_system": {
-     "key": "test_system",
-     "name": "Test System",
-     "short_name": "Test",
-     "vendor": "Test Vendor",
-     "icon": "\ud83e\uddea",
-     "typography": {
-       "scale_ratio": 1.25,
-       "base_size": 16
-     },
-     "spacing": {
-       "base": 8
-     },
-     "colors": {},
-     "fetched_at": "2026-02-15T12:12:38.917158",
-     "confidence": "high",
-     "best_for": []
-   }
- }