Design

Prompt enhancement layer before agent execution

Insert an intermediate UX/system layer that rewrites, plans, and enriches user input (e.g., prompt enhancers, planning modes, curated context) so the agent receives structured, high-quality instructions instead of raw user prompts.

Why the human is still essential here

Humans define what context is safe and necessary, and they set the constraints, guardrails, and evaluation criteria; AI can transform inputs, but designers and teams remain responsible for correctness, safety, and usability outcomes.

How people use this

Auto-rewrite messy requests into specs

A middleware step rewrites vague user input into a clear task spec with assumptions, acceptance criteria, and required outputs before calling the agent.

LangChain / OpenAI
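The middleware step above can be sketched as a small rewrite function that sits between the user and the agent. This is a minimal sketch, not the DeepL implementation: `call_model` is a hypothetical stand-in for an LLM call (e.g. via the OpenAI SDK), stubbed here so the flow is self-contained.

```python
# Prompt-enhancement middleware: rewrite a raw user request into a
# structured task spec before it ever reaches the agent.

REWRITE_TEMPLATE = """Rewrite the user request below as a task spec with:
- Goal (one sentence)
- Assumptions (explicit, numbered)
- Acceptance criteria (testable)
- Required outputs

User request: {request}"""


def call_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM
    # (e.g. an OpenAI chat-completion call) and return its reply.
    return (
        "Goal: ...\nAssumptions: ...\n"
        "Acceptance criteria: ...\nRequired outputs: ..."
    )


def enhance(raw_request: str) -> str:
    """Middleware step: the agent only ever sees the enhanced spec."""
    prompt = REWRITE_TEMPLATE.format(request=raw_request.strip())
    return call_model(prompt)


spec = enhance("make the dashboard nicer")
print(spec)
```

The key design point is that `enhance` is the only path into the agent, so vague input is normalized into the same spec shape every time.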

Planning mode with clarification loop

Before execution, the agent produces a step-by-step plan and asks targeted clarifying questions, then converts answers into a final structured instruction.

Claude / Vercel AI SDK
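The plan-then-clarify flow can be sketched as below. All helper names are illustrative: a real version would back `propose_plan` with an LLM call and `ask_user` with a UI prompt; both are stubbed here so the loop is runnable end to end.

```python
# Planning mode with a clarification loop: plan first, ask targeted
# questions, then fold the answers into one structured instruction.

def propose_plan(request: str) -> dict:
    # Stub: an LLM would return concrete steps plus open questions.
    return {
        "steps": ["Audit current layout", "Draft variants", "Apply tokens"],
        "questions": ["Which platform: web or mobile?"],
    }


def ask_user(question: str) -> str:
    # Stub for a UI prompt; a real app would surface this to the user.
    return "web"


def build_instruction(request: str) -> str:
    plan = propose_plan(request)
    answers = {q: ask_user(q) for q in plan["questions"]}
    clarifications = "\n".join(f"- {q} {a}" for q, a in answers.items())
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(plan["steps"], 1))
    return (
        f"Task: {request}\n"
        f"Clarifications:\n{clarifications}\n"
        f"Plan:\n{steps}"
    )


final = build_instruction("Redesign the settings page")
print(final)
```

Because execution only starts from `final`, ambiguity is resolved before any agent step runs rather than midway through.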

Safe context injection from design system

The system enriches the user request by attaching relevant design tokens, component guidelines, and constraints retrieved from internal docs so raw prompts never hit the agent alone.

LlamaIndex / OpenAI
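Context injection can be sketched with a tiny in-memory store standing in for retrieval. The token table and guidelines are invented for illustration; a real setup would pull them from internal docs via a retriever (e.g. LlamaIndex) instead of a dict lookup.

```python
# Safe context injection: wrap the raw request with curated
# design-system context so it never reaches the agent alone.

DESIGN_TOKENS = {
    "button": {"color.primary": "#0B5FFF", "radius": "8px"},
    "card": {"elevation": "2", "padding": "16px"},
}

GUIDELINES = {
    "button": "Primary buttons use color.primary; one per view.",
    "card": "Cards never nest more than one level deep.",
}


def retrieve_context(request: str) -> str:
    # Naive keyword match; a production system would use a retriever.
    parts = []
    for component, tokens in DESIGN_TOKENS.items():
        if component in request.lower():
            lines = "\n".join(f"  {k}: {v}" for k, v in tokens.items())
            parts.append(
                f"{component} tokens:\n{lines}\n"
                f"guideline: {GUIDELINES[component]}"
            )
    return "\n".join(parts)


def enrich(request: str) -> str:
    context = retrieve_context(request)
    return f"Context:\n{context}\n\nRequest: {request}" if context else request


enriched = enrich("Add a primary button to the checkout card")
print(enriched)
```

Note that `enrich` falls back to the bare request when nothing matches, so the layer degrades gracefully instead of blocking.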

Community stories (1)

LinkedIn

People can't prompt: designing AI agents that don't require it

Every time we think our users have prompting figured out, we're proven wrong again.

After 50+ user tests with the DeepL AI Agent, here's the uncomfortable truth: people can't prompt. And that's not their fault.


The standard chat interface has fundamental limitations:

- Typing speed bottleneck (nobody types at 400 keys per minute)

- Clarity issues (what sounds clear in your head doesn't always translate to text)

- Precision problems (context is everything, but users don't know what context to provide)


Research backs this up: nearly half the population struggles with complex text prompts. We're forcing users to become prompt engineers just to reach basic UX levels that visual interfaces previously provided for free.


So what's the solution?

1) Let them speak instead of type - Voice input removes friction and enables more natural interaction

2) Never let raw prompts reach your agent - Use prompt enhancers, planning modes, and pre-curation

3) Provide structure and context - Guide users with templates, examples, and smart defaults
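Point 3 can be sketched as a structured input with smart defaults: the UI collects a few fields and the system assembles the prompt, so the user never writes free-form instructions. The field names and defaults below are illustrative, not DeepL's actual schema.

```python
# Templates with smart defaults: users fill fields, the system
# builds the prompt.
from dataclasses import dataclass, field


@dataclass
class TranslateRequest:
    text: str
    target_language: str = "English"   # smart default
    tone: str = "neutral"              # smart default
    glossary: list = field(default_factory=list)


def to_prompt(req: TranslateRequest) -> str:
    glossary = ", ".join(req.glossary) or "none"
    return (
        f"Translate into {req.target_language} with a {req.tone} tone.\n"
        f"Glossary terms to preserve: {glossary}\n"
        f"Text: {req.text}"
    )


prompt = to_prompt(TranslateRequest(text="Hallo Welt"))
print(prompt)
```

The defaults mean an empty form still yields a complete, well-formed instruction, which is exactly the "basic UX for free" that visual interfaces used to provide.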


Good UX shouldn't require users to master a new skill. It should work intuitively. The future isn't about teaching everyone prompt engineering. It's about building products that don't need it.

Kai Peters, Staff Product Designer for DeepL Agent at DeepL
Feb 25, 2026