Six weeks ago I wrote about the AI tools I use as a designer. That article hit 470K views. People bookmarked it, shared it, and sent me DMs asking for more details.
Since then, almost everything about how I work has changed.
Not the tools themselves. Most of those are still in my toolkit. What changed is how I use them. And more importantly, where I use them.
My first article was about side projects. Building apps, fixing styling, restyling Framer sites. Fun stuff. Personal stuff. The kind of work where "close enough" is fine and you can iterate until it feels right.
But I have a day job. I'm a product designer at a large ecommerce company. We have a design system. We have a pipeline. We have four platforms: iOS, Android, desktop web, mobile web. Every pixel has to match. Every component has to follow the system. Every change goes through review.
That's a very different environment for AI.
And for a while, I couldn't make it work there.
The problem with prototyping at work
At work, I can't just open Claude Code and say "build me something cool." There's an approved design. There's a design system with tokens, components, and naming conventions that took years to build. There are developers who need to implement whatever I hand them.
When I tried to bring my side project workflow into this environment, it fell apart.
I wrote about this a few weeks ago. I had an approved design in Figma. I set up an Xcode project, connected it to Cursor, pointed it at my Figma file, and told it to recreate the design in SwiftUI.
The MCP wouldn't connect. That cost me 15 minutes.
When it finally worked, the output looked nothing like my design. The structure was sort of there, but the spacing was wrong, the colors were wrong, and the whole thing felt like Cursor had seen a blurry photo of my design and made its best guess.
I spent 45 minutes trying to wrangle it before giving up entirely.
The issue wasn't the tools. The issue was that AI doesn't know your design system. It doesn't know your spacing tokens, your color primitives, your component hierarchy. It just guesses. And when you're working within tight production constraints, guessing isn't good enough.
So I had to solve that problem first.
Teaching AI your design system
This is the thing that changed everything for me.
I had Claude Code extract the essence of our design system. It used the Figma Console MCP to read through our component library, tokens, and styles.
Then I had it browse our live websites in both desktop and mobile views. Not just look at them. Actually document what it saw. The layout patterns. The spacing. How things respond on different screen sizes. How the navigation works. What the cards look like. Everything.
The output was a single markdown file. A design.md that captures our visual language: the spacing scale, the typography, the color system, how components are structured, how things behave across breakpoints.
It's not a full design system spec. It's more like a cheat sheet. Enough context for an AI tool to understand "this is what their product looks like" without having to parse the entire Figma file every time.
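To make that concrete, here's a rough sketch of what such a cheat sheet might contain. Every value below is invented for illustration; it's the shape of the file that matters, not these numbers.

```markdown
# design.md — visual language cheat sheet

## Spacing
- Scale: 4 / 8 / 12 / 16 / 24 / 32 px
- Cards: 16px internal padding, 24px between cards

## Typography
- Headings: sans-serif, semibold, 32/24/20px
- Body: 16px, 1.5 line height

## Color
- Primitives: neutral-100 … neutral-900, brand-primary, brand-accent
- Text never uses raw primitives; always semantic tokens (text-default, text-muted)

## Components
- Buttons: 3 sizes, pill radius, primary/secondary/ghost variants
- Cards: image top, title, meta row; hover lifts 2px

## Breakpoints
- Mobile < 768px: single column, bottom nav
- Desktop ≥ 1024px: 12-column grid, top nav
```

The point is that any AI tool can ingest a file like this alongside a prompt, without needing access to Figma at all.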
Here's why this matters.
I work at a company with a strict design pipeline. I can't burn all my Claude Code tokens trying to get AI to work inside Figma directly. But if I can extract the design system once and carry that context with me, every other AI tool I use becomes smarter.
Now when I plug that file into Variant AI or Google Stitch, the output is close to our actual product right from the start. Not perfect. But close enough that I'm tweaking details instead of rebuilding from scratch.
Before this, every AI-generated design looked generic. Like it came from a template. Now it looks like it belongs to our product. That single markdown file was the unlock.
From design to production prototype
After my success building native SwiftUI animations for our mobile app a few weeks ago, I wanted to try the same approach for our web product.
The process was faster than I expected.
I gave Claude Code the link to our frontpage and the Figma design files for the feature I was working on. I told it to build a React prototype that matched our current design.
Since our design is relatively simple and clean, it didn't take long to have the foundation up and running. The design.md file helped here too. Claude already understood our visual system, so it wasn't guessing at spacing or typography.
Once the foundation was solid, I started layering on the stuff that actually matters for testing.
Staggered appear animations for content loading. A page transition built with barba.js. That transition does two things: it hides our loading time, and it adds a bit of polish that makes the whole experience feel more considered.
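The stagger idea is simple enough to sketch. This helper is my own illustration, not our production code, and the timings are invented:

```typescript
// Illustrative helper: compute per-item delays for a staggered
// "appear" animation. Each item fades in stepMs after the previous
// one; the cap keeps long lists from feeling sluggish at the tail.
function staggerDelays(count: number, stepMs = 80, capMs = 400): number[] {
  return Array.from({ length: count }, (_, i) => Math.min(i * stepMs, capMs));
}

// In a React component, each delay would feed an inline style like
// { animationDelay: `${delay}ms` } on the corresponding card.
```

Trivial on its own, but this is exactly the kind of detail that used to die in handoff and now ships because it's already in the prototype.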
I have a background in motion design. Broadcast work, cinemagraphs, HBO. So when it comes to animation, I know what I want. The hard part was always getting it into production. Now I can build it myself and hand developers code they can actually use.
This prototype now serves as a foundation. Not just for the feature I built it for, but for testing new ideas on top of it.
Want to try skeleton loading states? Drop them in. Want to experiment with a different navigation pattern? Build it on top of what's already there. No more sketching it out in Figma, building a clickable prototype, presenting it, getting approval, then watching developers rebuild it from scratch.
The prototype is the starting point for the real thing.
A quick note on Paper
I also gave Paper a proper run during this period. It's fast. Really fast. The iteration speed is impressive and the output quality is genuinely good.
For side projects, I'd use it without hesitation. You can go from idea to high-fidelity design in minutes, and the iteration loop is tighter than anything else I've tried.
But at work, our entire pipeline runs through Figma. Developers pull specs from there. Design reviews happen there. Tokens sync from there. Adding another tool to that chain creates friction that nobody wants. So Paper stays in my side project toolkit for now. It's one to watch, though.
What's actually different now
Looking back at my first article, I was excited about the "WTF moments." Claude Code fixing my app styling. Remotion making a client video in minutes. The Framer MCP restyling an entire site.
Those were real. But they were also isolated wins. Cool tricks. Party pieces.
What changed in the last six weeks is that AI went from "fun for side projects" to "useful for my actual job."
And the thing that made that possible wasn't a new tool or a better model. It was solving a boring problem: how do you give AI enough context about your product that it stops guessing?
That design.md file sounds incredibly boring. It's a markdown document. There's nothing flashy about it. Nobody is going to make a viral demo video about extracting design tokens into a text file.
But it's the single most impactful thing I've done with AI at work. Because it means every AI tool I use now starts from our reality instead of from a blank slate. Every prototype starts closer to production. Every design exploration looks like it belongs to our product instead of a random template.
The React prototype approach is the same idea. Instead of asking AI to imagine what our product should look like, I'm giving it our actual product as the starting point and building on top of it.
Context is the unlock. Not better prompts. Not fancier tools. Just giving AI the information it needs to do useful work within the constraints you actually operate in.
The honest take
I'm still not using AI for final production design at work. Our design system, our developer handoff, our review process: that all still runs through Figma. That's not changing anytime soon.
But the prototyping and exploration phase? That's completely different now.
I can test an idea in a working React prototype before it ever enters a sprint. I can show stakeholders something real instead of a Figma mockup with hotspot arrows.
And when the developers pick it up, they're starting from code that already works. Not a static spec they need to interpret. Not a prototype video they need to reverse-engineer. Actual working code.
That changes the conversation. It changes the speed. It changes how much you can try before committing to a direction.
Six weeks ago I was saying "WTF" at party tricks. Now I'm shipping prototypes that feed directly into production.
The toolkit didn't change that much. How I think about it did.