Design

AI-powered visual exploration, moodboarding, and asset generation

Use AI image-generation tools to rapidly explore visual directions, moodboards, style variants, composition options, illustration directions, and first-pass visual assets—including social and campaign visuals—early in the design process before committing to production.

Why the human is still essential here

Creative direction, originality, taste, and final style and asset decisions remain with the designer or illustrator; AI accelerates exploration and first-pass asset generation, but humans decide what feels right, on-brand, and worth developing further.

How people use this

Style direction moodboard generation

AI generates sets of visual references in different art directions (e.g., minimal editorial, bold neo-brutal) to accelerate moodboarding.

Midjourney

Visual direction concept comps

Create several distinct concept visuals (different styles, lighting, composition, and graphic treatments) to compare and align on a direction early.

Adobe Firefly / DALL·E 3 (ChatGPT)

Hero artwork drafts

AI produces first-pass illustrations or background visuals that the designer can refine for product or web concepts.

Midjourney / Ideogram

Need Help Implementing AI in Your Organization?

I help companies navigate AI adoption, from strategy to production. Whether you are building your first LLM-powered feature or scaling an agentic system, I can help you get it right.

LLM Orchestration

Design and build LLM-powered products and agentic systems

AI Strategy

Go from idea to production with a clear implementation roadmap

Compliance & Safety

Build AI with human-in-the-loop in regulated environments

Community stories (10)

YouTube

Best AI Design Tools in 2026 - The Complete Stack for Web Designers

Best AI design tools in 2026 - these are the 7 AI tools I use to run my design studio every single day. Not demos, not sponsored picks - real tools that ship real client work.

I tested over 50 AI design tools and narrowed it down to 7 that cover every part of the web design workflow: ideation, layout, assets, visuals, design, building, and the one tool that connects everything together. Each tool earned its spot by surviving real client projects at my studio, Klime.

Adrien, AI Designer
Apr 8, 2026
LinkedIn

My AI workflow I use to 10x my design work:

1. Inktrail.ai - For brainstorming, understanding PRDs, conducting market and competitor research and creating user flows


2. variant.com - To curate a moodboard of design directions and UI flows


3. figma.com/make - To generate HTML and CSS that matches what I need from the moodboard and build a complete prototype


4. figma.com - For creating the design artifacts, aligning with brand standards and design system


5. Inktrail.ai - To write the UX copy for the designs


Worth a mention: windsurf.com - Creates a design.md file and implements the UI at the frontend level.


This workflow cuts my design time from days to hours and still maintains the same level of quality.


Designers should provide human oversight, not perform manual work.


What’s your own process?


Repost to help a designer from your network.

Joseph Kalu, Product Designer @ Yolo
Apr 8, 2026
Medium

My honest review of every AI design tool I use daily

These tools are powerful, but each one only works well when you know exactly what job to give it.

I’ve been using AI tools as a core part of my design workflow. Here’s what actually works and what doesn’t.


ChatGPT: My content partner. Great for UI copy, microcopy variations, and translating engineer-speak into user-speak.


Figma Make: Good at generating initial wireframes and scaffolding a starting flow. The frustrations: credit limits are brutal and burn fast when iterating (which is… the entire point of design). It struggles with large context; you need to feed it bite-sized chunks. There is no plan mode like VS Code, so you can't orchestrate a multi-screen flow strategically. For these reasons, it's much easier to go from Figma Make to VS Code than the other direction.


Figma: Still the source of truth. After Make generates a starting point, I always move into Figma to manually refine. AI-generated layouts need a human hand: spacing, hierarchy, consistency.


VS Code + Claude: Where designs come to life. Like Figma Make, works best with focused prompts, not massive context dumps. But the speed is transformative. Things that used to require a developer friend and a weekend now take an afternoon.


Google Stitch: Strong at generating polished visual UI and great for design system exploration. But the content and UX structure are consistently weak: page hierarchy, flow logic, and information architecture all need significant rework. Better for visual inspiration than building experiences.

Aditi Magal, Product designer who designs complex systems that feel simple
Apr 8, 2026
X

Updated stack as a designer using ai to streamline my workflows; for those that find it useful.

AI tech stack: Claude (research, strategy assistant), Claude Code (coding), Figma Make (prototyping), v0 (structural explorations), Vercel (hosting), Nano Banana Pro (image gen), Notion (second brain), Shadcn & Tailwind (styling)

hyam, Designer at Discord
Mar 31, 2026
Medium

How I Create Social Media Designs Using AI in 10 Minutes

Social media moves fast. One day, you post a design that performs well, and the next day, the algorithm wants something completely different. If you run a blog, manage a brand, or promote affiliate products, you already know how much time design work can take.

Not long ago, creating social media graphics meant opening Photoshop, searching for stock images, adjusting layers, and tweaking fonts for hours. Now things are different. AI tools have made the process faster, simpler, and honestly more fun.


These days, I can create clean, professional social media designs in about 10 minutes using AI tools. No complicated design skills required. If you’ve ever wondered how people post polished graphics consistently, this article will show you exactly how the process works.


...

Zenvertise, AI Enthusiast
Mar 10, 2026
X

A Designer's AI Stack

In the past few months I've replaced or automated a lot of the work that used to sit around my design process - managing projects, writing briefs, searching for prompts I made a while back, building interactions I could picture but couldn't code.

Here are the tools that actually made that happen and how I use them to keep track & automate everything boring in my process.




Claude + Midjourney


Midjourney has been a cornerstone for a lot of what I've been doing in the past year or so. The speed of style exploration is hard to match, and once I'm locked in a direction the consistency I can get out of it is something clients are actually happy with. Lately I've had a couple of people reach out specifically wanting a visual style/system they could use for their own content.


What changed how I use it was bringing Claude into the process on the prompt side. I learned early to store favourite prompts, profiles, images and styles somewhere, but they'd end up spread across notes, different Notion pages, message threads - everywhere. Now they live in Notion in a structured way, and since Claude already has access to that, I can just ask for a style I built months ago instead of searching for it.


This also works for helping me put together prompts and styles in batches for clients. Instead of handing someone a finished image, I can give them a repeatable structure - a set of parameters and style descriptors that means every image they generate looks on-brand, even if they've never written a Midjourney prompt before. It removes a lot of back and forth and gives clients something useful as an additional deliverable.
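A repeatable structure like this can be captured in a tiny script. The sketch below is purely illustrative (the profile, descriptors, and subject are hypothetical, not the author's actual client parameters): it assembles a Midjourney-style prompt from a stored set of style descriptors and parameters, so every generation stays on-brand regardless of who writes the subject.

```python
# Hypothetical sketch: assemble a reusable Midjourney-style prompt
# from a stored brand "style profile". Descriptors and parameter
# values here are made up for illustration.

def build_prompt(subject: str, profile: dict) -> str:
    """Combine a subject with fixed style descriptors and parameters."""
    descriptors = ", ".join(profile["descriptors"])
    params = " ".join(f"--{key} {value}" for key, value in profile["params"].items())
    return f"{subject}, {descriptors} {params}"

# Example profile a client could reuse without knowing prompt syntax
brand_profile = {
    "descriptors": ["minimal editorial", "soft natural light", "muted palette"],
    "params": {"ar": "4:5", "stylize": "200"},
}

print(build_prompt("product shot of a ceramic mug", brand_profile))
# → product shot of a ceramic mug, minimal editorial, soft natural light, muted palette --ar 4:5 --stylize 200
```

Handing over the profile rather than finished images is what makes the structure a deliverable in its own right.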




A custom Gem agent


Workshop is probably my favourite Framer feature of them all - being able to create complex code components with simple prompts is a huge deal and it handles most of what I need day to day.


But I've always loved WebGL shaders, advanced scroll interactions, infinite galleries, and anything with more complex logic than what's native in Framer. For those more complex, detailed prompts, I found Workshop just doesn't do well, and that's where I switch to Gemini with a custom Gem I put together.


The Gem is trained on Framer's best practices for code components, so it already understands the structure, how property controls need to be set up, and how to write things so that whoever uses the component - myself, a template user or a client - doesn't need to know anything about the code. They just configure it through the UI like any other component.


This custom Gem really opened a door for me that always felt out of reach before, so if you're using Framer I'd definitely recommend it!




Claude + Notion


I have separate Notion pages for each Framer template I'm working on, client projects, and my own website redesign. I use them as a simple project-management tool where I cross off tasks, note where I'm at and what I want to come back to, and generally keep track of things.


My problem with Notion has always been the friction of having to actually write and update things. I've been documenting my process because I know it's important, yet it's always been the most tedious thing to do.


The Claude connection to Notion fixed that. Now I just write to Claude - what I finished for the day, what I'm thinking, what I need to follow up on - and it updates the relevant pages for me. Took me less than an hour to set up and I've used Notion every day since.
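For reference, the same kind of page update can also be scripted against Notion's public REST API, independent of the Claude connector. The sketch below is a hypothetical illustration, not the author's setup: it only builds the JSON body for appending one paragraph block (the endpoint shown in the comments is Notion's `blocks/{page_id}/children` append API); actually sending it would require an integration token and a real page ID.

```python
# Hypothetical sketch: build the payload for appending a daily log
# entry to a Notion page via the public API
# (POST https://api.notion.com/v1/blocks/<page_id>/children).
# Only the request body is constructed here; no network call is made.

def daily_log_payload(note: str) -> dict:
    """Build the JSON body for appending one paragraph block."""
    return {
        "children": [
            {
                "object": "block",
                "type": "paragraph",
                "paragraph": {
                    "rich_text": [{"type": "text", "text": {"content": note}}]
                },
            }
        ]
    }

payload = daily_log_payload("Finished hero section; follow up on footer spacing.")
# Sending requires an "Authorization: Bearer <token>" header and a
# "Notion-Version" header alongside this body.
```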


For client projects it's been helpful for keeping short notes on where things stand. I also store client briefs in there and document my process on current Framer template work, which makes it easier to reflect on what worked, where and when I was slowed by something and what didn't work at all. It all ends up being useful source material for content to post on socials, too.




Flora Fauna


Midjourney is still where I do most of my visual exploration, but Flora Fauna has become the tool I reach for when I want to stay in one place and iterate quickly.


The node-based system is the main reason. Instead of rewriting long prompts from scratch, I'm working visually, branching off from generations I like, adjusting and building on what's already working. Having everything in one place rather than scattered across different chats or folders on my computer makes a real difference when I'm trying to keep a project coherent.


The video generation is worth a mention as well. It's mostly useful when a client project needs some motion or I want to push something further; being able to switch to a better video model from the same workspace, without starting over somewhere else, saves some time and nerves.




The point of all this


The stack will keep changing as these tools evolve, and I'm also aiming to build more custom setups and even small apps for myself based on specific use cases and workflows I haven't fully solved yet.


But the idea behind all of my AI use remains the same - I want to automate all of the mundane, repetitive and boring work, so I can give more time and attention to the creative work that I love doing myself!

Georgi, Designer
Feb 24, 2026
LinkedIn

Do I use AI?

Yes, but not for final production visuals, only in the background.

I needed some graffiti to go on the wall here. That was a quick generation: a low-importance background element, an image texture applied to a wall that was still being built in 3D.


I've also used it to brainstorm ideas and visual directions. It helps speed that process up. But the final visual still needs to be built traditionally.


I've tried using it to generate full backgrounds for products but was unsuccessful. The results weren't good enough for production.


I haven't used AI for hero products or final renders though. The details and the designer's hard work have to be respected down to the millimetre. AI sometimes changes small details and textures, and that's not ok.

I've seen product renders produced by AI that look great, but on closer examination the shapes of the tiny parts and the texture patterns are not the same as the original product's.

Imagine you send a poster out to print and the printer freely changes the colours and fonts.


I know there are tools out there that can be trained on products, I haven't tried all of them yet.

For now I'm using traditional methods with AI helping speed up some background work. I like the control and flexibility.

Anthony Hoover, Freelance product visualization designer
Feb 24, 2026
LinkedIn

Figma AI just went from “cool beta feature” to “everywhere in my workflow.”

As a designer, this changes how we design, review and ship interfaces.

In early 2026, Figma rolled out full AI features to all users. You can now generate layouts from prompts, tweak visuals with AI and even create prototypes without touching a frame first.


Last week I rebuilt part of my UI process around it. Here is what actually worked:


1. I stopped starting at blank canvases

 • I now start with a prompt like "Create a mobile dashboard for a fintech app with cards for balance, recent transactions and quick actions."

 • Figma AI gives me 3 to 5 starting points that I refine instead of drawing rectangles for 30 minutes.


2. I use AI for "what if" exploration

 • Instead of 10 manual variants, I ask "Try a more minimal style" or "Push this towards a dark, high contrast theme."

 • It is perfect for exploring extremes while I stay focused on information architecture and flows.


3. I treat AI as a junior designer, not a senior one

 • It is great at speeding up layout, components and visual polish.

 • It is bad at understanding edge cases, accessibility trade-offs and product strategy. That is still my job.


4. Collaboration is getting faster

 • Developers can react to AI generated screens earlier.

 • Stakeholders react better to “something real” instead of low fidelity boxes, which shortens the feedback loop.


Takeaway


AI inside Figma is not replacing my UX work. It is removing the mechanical parts so I can spend more time on research, flows and interaction details.


If you are a UX or product designer: would you let AI handle your first-draft layouts, or do you still prefer starting from scratch?


Reply with “AI first” or “Manual first” and tell me why.

Abhijit Das, UX Design Consultant at EY
Feb 26, 2026
LinkedIn

Using Adobe Firefly as a creative assistant (collaboration, not replacement)

It's 2026, even your parents and grandparents use AI daily, so as designers, how could we not?

The real conversation isn't whether we use AI. It's how we use it.


AI doesn't replace taste. It doesn't replace strategy. And it definitely doesn't replace years of visual thinking.


But when used right, it becomes a power tool.


I personally use Adobe Firefly as an all-in-one creative assistant to build moodboards, test ideas, and explore visual directions faster.


AI helps me move quicker. BUT I decide what's good.


What's your take on AI in the creative field? #AdobePartner

Sandra Kesserwany, Founder & Brand Designer, Sandra Creates Studio
Feb 24, 2026
LinkedIn

Most designers are using AI wrong (I use it to think better)

Most designers are using AI wrong.
They ask it to create everything.

I use it to think better.

-------------------------------------------------

Here’s how AI fits into my design process 👇

1️⃣ Idea Exploration

Before touching a single pixel, I use AI to generate multiple positioning angles for a product or concept.

One idea becomes five.

Five become direction.

2️⃣ Content & Messaging Clarity

Strong design starts with strong communication.

AI helps refine hooks, headlines, and structure before visuals come in.

3️⃣ Moodboarding & Layout Thinking

I explore style directions, composition variations, and visual hierarchy faster — so I design with intention, not guesswork.

4️⃣ Faster Iterations

Instead of staring at a blank screen, I start with momentum.


I test concepts quicker.

I refine smarter.

But here’s the important part

AI gives options.

✅ I make decisions.

AI speeds execution.

✅ I bring taste.

Creativity is still human.

AI just removes friction.


Designers who learn to collaborate with AI, not compete with it, will grow faster.

What’s one tool you use to level up your workflow?

Prashant Chaudhary, Creative Designer / Video Editor
Feb 23, 2026