Design

Using AI as a debugging and request-structuring copilot

When she hit technical walls, she used a second AI tool to understand failures, structure requests, and check outputs while building the design-system integration.

Why the human is still essential here

The human decides which suggestions to trust, evaluates whether they are technically and architecturally sound, and makes the final implementation choices.

How people use this

MCP setup troubleshooting

AI explains integration errors and suggests next debugging steps when connectors, permissions, or config files break the design-system workflow.

ChatGPT / Claude
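When the connector breaks, a useful first step is to paste the MCP client config next to the exact error message. A minimal sketch of the common config shape — the server name, package, and environment variable below are hypothetical, not from this article:

```json
{
  "mcpServers": {
    "design-system": {
      "command": "npx",
      "args": ["-y", "@acme/design-system-mcp"],
      "env": { "DS_ACCESS_TOKEN": "set-in-your-environment" }
    }
  }
}
```

Given this plus the error output, the assistant has enough context to suggest whether the failure is the command path, a missing permission, or a malformed entry.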

Prompt constraint rewriting

AI rewrites vague requests into precise prompts that specify allowed components, variants, tokens, and output format.

Claude / ChatGPT
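One way to picture the rewrite is as attaching explicit constraints to a vague request. A small sketch — the helper, field names, and wording are invented for illustration, not a real schema:

```typescript
// Hypothetical constraint set that a rewritten prompt makes explicit.
interface PromptConstraints {
  allowedComponents: string[]; // design-system components the AI may use
  allowedTokens: string[];     // design tokens it may reference
  outputFormat: string;        // what the answer should look like
}

// Turn a vague one-line request into a constrained prompt.
function constrainPrompt(request: string, c: PromptConstraints): string {
  return [
    request.trim(),
    `Use only these components: ${c.allowedComponents.join(", ")}.`,
    `Use only these design tokens: ${c.allowedTokens.join(", ")}.`,
    `Output format: ${c.outputFormat}.`,
    "Do not invent new components, props, or tokens.",
  ].join("\n");
}
```

For example, `constrainPrompt("Build a settings card", …)` turns a vague ask into a prompt the model cannot easily wander away from.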

Generated code sanity checks

AI reviews generated React output for bad imports, invalid props, and design-system violations before a human accepts it.

Cursor / GitHub Copilot
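A mechanical pre-check can catch the most common problems before the human review. A minimal sketch, assuming a hypothetical allowlist of packages and component names (none of these identifiers come from the article):

```typescript
// Hypothetical allowlists; a real check would load these from the design system.
const ALLOWED_PACKAGES = new Set(["@acme/design-system", "react"]);
const ALLOWED_COMPONENTS = new Set(["Button", "Card", "Stack"]);

interface Violation {
  line: number;
  message: string;
}

// Scan generated source for imports outside the allowlist and for
// capitalized JSX tags that are not known design-system components.
function checkGeneratedCode(source: string): Violation[] {
  const violations: Violation[] = [];
  source.split("\n").forEach((line, i) => {
    const importMatch = line.match(/from\s+["']([^"']+)["']/);
    if (importMatch && !ALLOWED_PACKAGES.has(importMatch[1])) {
      violations.push({ line: i + 1, message: `disallowed import: ${importMatch[1]}` });
    }
    for (const tag of line.matchAll(/<([A-Z][A-Za-z0-9]*)/g)) {
      if (!ALLOWED_COMPONENTS.has(tag[1])) {
        violations.push({ line: i + 1, message: `unknown component: <${tag[1]}>` });
      }
    }
  });
  return violations;
}
```

Anything the check flags goes back to the generating tool; a clean pass still gets a human read before the code is accepted.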


Community stories (1)

Medium

I wanted AI to do my job. It didn’t.

I felt it and couldn’t resist. I decided to try to actually connect our design system to AI.

Over two weeks, I connected an AI coding assistant to our Design System and React component library via MCP. When I hit technical walls (and I hit plenty), I used a second AI tool to help me understand what went wrong, structure my requests, and check outputs. One tool to build, one to think with.

Trying to make AI work with our design system forced me to look at what was actually there. And what wasn’t. We also proved some real capabilities along the way:

AI could extract guidelines scattered across different places and reformat them into a consistent structure.

It could flag inconsistencies we’d stopped noticing.

It could build simple patterns using only existing components and tokens.

Anna Morozova, UI/UX designer at HENNGE
Apr 6, 2026