Design

Voice-first input for AI agents

Design AI agent experiences that accept spoken input to reduce friction, increase context bandwidth, and improve users' ability to express intent compared with typed prompts.

Why the human is still essential here

Designers decide when voice is appropriate, handle privacy, consent, and error states, and define interaction patterns; AI transcribes and interprets speech, but humans design and validate the experience.

How people use this

Voice design brief capture

Users speak a project brief (goals, audience, constraints) and the agent transcribes it into a structured brief the team can refine.

OpenAI Whisper / Notion
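The brief-capture pattern above can be sketched in code. This is a minimal, hypothetical illustration: it assumes the transcription step (e.g. OpenAI Whisper) has already produced raw text, and it structures that text with simple cue words. A production system would typically hand this step to an LLM; the function name and cue list are illustrative, not part of any real API.

```python
# Hypothetical sketch: turn a raw voice-brief transcript into a structured
# brief. Transcription (e.g. via OpenAI Whisper) is assumed to have already
# happened; here we only bucket the resulting sentences into brief fields.

def structure_brief(transcript: str) -> dict:
    """Split a spoken brief into goals / audience / constraints buckets
    using simple cue words. A real system would use an LLM instead."""
    fields = {"goals": [], "audience": [], "constraints": []}
    cues = {
        "goal": "goals",
        "audience": "audience",
        "user": "audience",
        "constraint": "constraints",
        "deadline": "constraints",
        "budget": "constraints",
    }
    for sentence in transcript.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        lowered = sentence.lower()
        for cue, field in cues.items():
            if cue in lowered:
                fields[field].append(sentence)
                break
    return fields

brief = structure_brief(
    "The goal is to double trial signups. The audience is freelance designers. "
    "One constraint is a two-week deadline."
)
```

The point of the sketch is the shape of the pipeline, not the heuristics: speech becomes text, text becomes a structured brief the team can refine.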

In-app voice command palette

A product lets users speak commands like “create a checkout flow draft” or “summarize this screen’s issues” to trigger agent actions without typing.

Google Cloud Speech-to-Text / Vercel AI SDK
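The command-routing half of a voice command palette can be sketched as follows. This is an assumption-laden toy: speech-to-text (e.g. Google Cloud Speech-to-Text) is presumed to have produced the utterance already, the registered actions are invented examples, and real products would use intent classification rather than word overlap.

```python
# Hypothetical sketch of routing a transcribed utterance to an agent action.
# The ACTIONS registry and matching heuristic are illustrative only.

ACTIONS = {
    "create checkout flow draft": lambda: "drafting checkout flow",
    "summarize screen issues": lambda: "summarizing issues",
}

def route_command(utterance: str):
    """Pick the registered action sharing the most words with the utterance."""
    words = set(utterance.lower().split())
    best, overlap = None, 0
    for phrase, action in ACTIONS.items():
        score = len(words & set(phrase.split()))
        if score > overlap:
            best, overlap = action, score
    return best() if best else "no matching action"

print(route_command("please create a checkout flow draft"))
```

Filler words like "please" and "a" simply fail to match, which is part of what makes spoken commands forgiving compared with exact keyboard shortcuts.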

Usability-test voice notes to insights

Researchers dictate observations during sessions and the system transcribes and clusters them into themes and actionable recommendations.

Otter.ai / OpenAI Whisper
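The notes-to-themes step can be sketched too. This assumes transcription (e.g. Otter.ai or Whisper) is done, and replaces real clustering (embeddings plus a clustering algorithm) with naive keyword overlap purely to show the flow from loose dictated observations to grouped themes.

```python
# Hypothetical sketch: group transcribed usability-test notes into themes by
# shared keywords. A real pipeline would cluster embeddings, not raw words.

def cluster_notes(notes: list[str], min_shared: int = 2) -> list[list[str]]:
    """Greedily group notes that share at least `min_shared` non-trivial words."""
    stop = {"the", "a", "to", "of", "was", "is", "and"}

    def keywords(note: str) -> set[str]:
        return {w for w in note.lower().split() if w not in stop}

    clusters: list[tuple[set, list]] = []
    for note in notes:
        kws = keywords(note)
        for ckws, members in clusters:
            if len(kws & ckws) >= min_shared:
                members.append(note)
                ckws |= kws  # widen the cluster's vocabulary
                break
        else:
            clusters.append((kws, [note]))
    return [members for _, members in clusters]

themes = cluster_notes([
    "User missed the checkout button",
    "Another user missed the checkout link",
    "Search results loaded slowly",
])
```

The two checkout observations land in one theme and the performance note in another, which is the researcher-facing output the pattern describes.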


Community stories (2)

LinkedIn

I used to spend hours on briefs, planning, and file setup before designing.

Now, my AI-powered design workflow takes me from idea to design in 4 steps:


Step 1: Voice → Structured brief


I do a 2-3 min voice brain dump using Whisper Flow. It captures everything and sends it to Notion. No typing, no formatting. Just talk and it's documented.


Step 2: Brief → Tasks → Daily focus


Notion AI then breaks the brief into tasks and keeps everything in one dashboard: tasks, notes, docs, links, and even daily to-dos. No app switching, no scattered files. It keeps me focused without constant oversight.


Step 3: Layout → Final design


Figma Make generates initial layouts quickly. I pull in styling from my library so everything stays on-brand. Then I refine it in Figma: typography, spacing, alignment, and all the details that make it feel polished.


Step 4: Design → Critique


Before calling it done, I screenshot and drop it into Claude. It analyzes layout pros and cons, identifies ambiguous language, and reduces wordiness so the message is sharp.


This is how I use AI to work smarter and faster while keeping creative decisions in human hands.


What does your AI workflow look like? I'd love to hear what's working for you 👇

Bonnie XU, Founder at Bonbon Design
Mar 5, 2026
LinkedIn

People can't prompt: designing AI agents that don't require it

Whenever we think our users have prompting figured out, we're proven wrong again.

After 50+ user tests with the DeepL AI Agent, here's the uncomfortable truth: people can't prompt. And that's not their fault.


The standard chat interface has fundamental limitations:

- Typing speed bottleneck (nobody types at 400 keys per minute)

- Clarity issues (what sounds clear in your head doesn't always translate to text)

- Precision problems (context is everything, but users don't know what context to provide)


Research backs this up: nearly half the population struggles with complex text prompts. We're forcing users to become prompt engineers just to reach basic UX levels that visual interfaces previously provided for free.


So what's the solution?

1) Let them speak instead of type - Voice input removes friction and enables more natural interaction

2) Never let raw prompts reach your agent - Use prompt enhancers, planning modes, and pre-curation

3) Provide structure and context - Guide users with templates, examples, and smart defaults
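Point 2 above can be made concrete with a small sketch: a pre-curation layer that rewrites a raw utterance into a structured prompt before it reaches the agent. Everything here is an assumption for illustration: the template fields, the default constraint, and the `enhance` function are invented, not DeepL's actual pipeline.

```python
# Hypothetical sketch of a prompt enhancer: the user's raw (spoken or typed)
# request never reaches the agent unaccompanied; it is wrapped with context
# and defaults first. Field names and defaults are illustrative only.

TEMPLATE = """Task: {task}
Context: {context}
Constraints: {constraints}"""

def enhance(raw_utterance: str, screen: str = "unknown screen") -> str:
    """Wrap a terse request with the context and defaults the agent needs."""
    return TEMPLATE.format(
        task=raw_utterance.strip().rstrip(".") or "Clarify the user's goal",
        context=f"User is currently viewing: {screen}",
        constraints="Ask one clarifying question if the task is ambiguous.",
    )

prompt = enhance("summarize the issues here", screen="checkout flow")
```

The user says six words; the agent receives a prompt with a task, the screen context, and a fallback behavior, which is exactly the gap points 1-3 are closing.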


Good UX shouldn't require users to master a new skill. It should work intuitively. The future isn't about teaching everyone prompt engineering. It's about building products that don't need it.

Kai Peters, Staff Product Designer for DeepL Agent at DeepL
Feb 25, 2026