Building a Scalable Design System with Logic-Driven Tokens

Designing a tiered token architecture that aligns design and engineering while enabling scalable product development.

Role: Solo Product Designer

Product: dotData Enterprise AI Platform

Team: 2 Designers, 8 Engineers

Focus: Design Systems, Design–Eng Alignment

Each token layer encodes a more specific level of intent — from raw value to functional UI logic

Where it started

When I joined, the company had no real design library. Designers were recreating components independently, engineers were coding from memory or guesswork, and the codebase was full of hard-coded styles that nobody wanted to touch. My design lead asked me to audit what we had — go through the existing designs, talk to engineers, and figure out what was actually in the code versus what was in the files.

What I found wasn't pretty. Similar colors with slightly different hex values that should've been the same token. Components that existed in three different files with no clear source of truth. Engineers who'd stopped asking designers questions because the answers were inconsistent anyway.

Fragmented Foundation

Without a centralized and documented design system library, teams often recreated components independently, leading to visual inconsistencies across the product.

Ambiguous Implementation

Without logic-driven specs, engineers were forced to rely on guesswork, causing the final production code to diverge significantly from the design intent.

Escalating Technical Debt

These inconsistencies resulted in an accumulation of hard-coded CSS values and one-off styles, creating a brittle UI infrastructure that slowed product development.

I built the first version of the design system from scratch. Started with the smallest building blocks — color, typography, spacing — and worked up to buttons, input fields, and larger composed components. We were on Sketch at the time, using Zeplin for handoff. It worked, mostly, until it didn't.

Why we had to do it again

The Zeplin problem took a while to surface fully. Every time a component changed, the update chain looked like this: update the component → update the clean screens → update InVision → update Zeplin. If anyone missed a step — and they did, often — engineers would find themselves implementing from an outdated spec. We started getting Slack messages: "Which version is the real one?"

When the team decided to redesign the enterprise platform and migrate to Figma, it felt like the right moment to fix the system properly, not just migrate the files. Figma solved the "which version is real" problem immediately — everyone's in the same file, and engineers can inspect elements directly. But I noticed something else after the first redesign shipped.

The real problem wasn't the tool

The other designer and I were making different calls on the same components. She'd reach for border-default-primary and I'd use border-default-secondary. Neither of us was exactly wrong — the semantic tokens were abstract enough that they could reasonably be interpreted in multiple ways. We didn't have a "when to use which" doc, and honestly, even if we'd written one, it would need to be maintained and actually read.

Same component, same context. Two designers, two different tokens — the system didn't encode the right answer.

After: the component token makes the decision for you.

I thought about writing documentation first. But documentation has two failure modes: people don't read it, and it falls out of date. I wanted the system itself to encode the decision logic.

The third layer

That's when I introduced component tokens as a third layer — sitting between semantic tokens and actual UI elements. Instead of choosing bg-subtle and hoping everyone interprets it the same way, you'd pick button-bg or table-header-bg. The token name tells you what it's for.
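The three-layer chain can be sketched in code. This is a minimal illustration with hypothetical token names and values (the actual library uses its own naming); the point is that each layer narrows interpretation, ending with a component token whose name states exactly where it belongs.

```typescript
// Hypothetical token names and hex values, for illustration only.

// Layer 1: primitive tokens. Raw values with no meaning attached.
const primitive = {
  gray100: "#f5f5f5",
  gray300: "#d4d4d4",
  blue600: "#2563eb",
} as const;

// Layer 2: semantic tokens. Intent, but still open to interpretation:
// two designers can reasonably disagree on where "bg-subtle" applies.
const semantic = {
  bgSubtle: primitive.gray100,
  borderDefaultPrimary: primitive.gray300,
  actionPrimary: primitive.blue600,
} as const;

// Layer 3: component tokens. The name encodes the decision, so there is
// exactly one right pick for a given slot on a given component.
const component = {
  buttonBg: semantic.actionPrimary,
  tableHeaderBg: semantic.bgSubtle,
  inputBorder: semantic.borderDefaultPrimary,
} as const;
```

Changing a primitive value ripples up through both layers, which is what keeps a rebrand or theme change from becoming a find-and-replace exercise.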

I ran into a problem immediately: I unpublished the general semantic tokens to push designers toward component tokens, and it broke things. Designers working on features that didn't have component tokens yet had nothing to reach for. So I kept a set of general semantic tokens published for those in-between moments — components still in design, features not yet scoped. The component token layer sits alongside the semantic layer, not on top of it.

General semantic tokens stay published for edge cases. Component tokens handle the rest privately.

On the governance side, I made component tokens private within each module rather than publishing them globally. A few people thought this would create more work. I agreed it added some friction — but global component tokens would've created token sprawl faster than we could manage it. System stability over automation, at least at this stage.
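The visibility rule can be sketched as a resolution function. All names and the registry shape here are hypothetical, since the real system lives in Figma's publishing model rather than code, but the lookup order mirrors the governance decision: component tokens resolve only inside their own module, while the published semantic layer stays available everywhere as the fallback.

```typescript
type TokenMap = Record<string, string>;

// The globally published set: semantic tokens only.
const publishedSemantic: TokenMap = {
  "bg-subtle": "#f5f5f5",
  "action-primary": "#2563eb",
};

// Component tokens live in per-module registries and never merge into the
// published set, so the global namespace can't sprawl.
const moduleRegistries: Record<string, TokenMap> = {
  button: { "button-bg": publishedSemantic["action-primary"] },
  table: { "table-header-bg": publishedSemantic["bg-subtle"] },
};

function resolve(module: string, token: string): string {
  // A component token is visible only from within its own module...
  const local = moduleRegistries[module]?.[token];
  if (local !== undefined) return local;
  // ...everything else falls back to the open semantic layer.
  const shared = publishedSemantic[token];
  if (shared === undefined) {
    throw new Error(`No token "${token}" visible from module "${module}"`);
  }
  return shared;
}
```

Asking for `button-bg` from the `table` module fails, which is the friction the edit accepts in exchange for containment.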

Why the rollout was phased

At the start of the redesign, the engineers recommended some existing token libraries they liked. I looked at them and decided against it — they were comprehensive, but "comprehensive" at that stage meant we'd be maintaining a huge system before we'd validated any of the product decisions. We had two designers and eight engineers, and we were redesigning multiple workflows simultaneously.

So we launched with two layers. Once the first version shipped and things stabilized, I went back to engineering to talk about the component token layer. The architecture is fully built in Figma, aligned to engineering naming conventions so developers can inspect the intended token logic directly in Dev Mode when they're ready to implement.

Engineers inspect token logic directly in Figma Dev Mode — no separate spec docs.

Keeping it from going stale

As the system grows, the question is always the same: what gets promoted to a component token, and what stays one-off? I built a governance workflow to make that decision consistent — recurring patterns go through an audit and get promoted to the component layer; edge cases use the open semantic layer and stay documented outside the core library.

New patterns earn their way into the component layer. One-offs stay flexible.
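The promotion decision reduces to a simple recurrence check before the manual audit. This is a hedged sketch; the threshold and the usage shape are hypothetical stand-ins for what the real audit looks at.

```typescript
// Hypothetical promotion heuristic: a pattern that recurs across enough
// distinct components is flagged for audit and possible promotion to the
// component token layer; everything else stays on the semantic layer.
interface PatternUsage {
  token: string;              // semantic token being reused, e.g. "bg-subtle"
  distinctComponents: number; // how many different components use it this way
}

const PROMOTION_THRESHOLD = 3; // illustrative cutoff, not the real number

function shouldPromote(usage: PatternUsage): boolean {
  return usage.distinctComponents >= PROMOTION_THRESHOLD;
}
```

The heuristic only gates the audit queue; the audit itself still decides the final token name and scope.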

What changed

The handoff workflow went from five steps to two: update the component, update the clean screens. Engineers self-serve in Figma — before this, someone from engineering would come to me with a design question roughly once a week. Now it's about once a month.

In design reviews, the spec questions mostly disappeared. We spend that time on actual design decisions instead.

I also used AI in parts of this project — generating color pattern documentation, working through token naming conventions, and drafting component specs. Mostly to get something on the page faster, then edit from there.

Where it is now

The two-layer system is in production across the platform. The component token layer is live in Figma — full implementation in code is still in progress, but the architecture is ready. When engineering has capacity, the system can extend into dark mode or multi-theme support without restructuring anything at the foundation.
