Design

AI-powered visual exploration and moodboarding

Use AI image generation tools to rapidly explore visual directions, style variants, moodboards, and composition options early in the design process — accelerating creative decision-making before committing to production.

Why the human is still essential here

Creative direction, taste, and final style decisions remain with the designer; AI accelerates exploration but cannot judge brand fit, audience appropriateness, or visual coherence on its own.

How people use this

Style direction moodboard generation

AI generates sets of visual references in different art directions (e.g., minimal editorial, bold neo-brutalism) to accelerate moodboarding.

Midjourney

On-brand image and asset creation

AI creates concept imagery and supporting assets (backgrounds, textures, illustrations) aligned to a chosen moodboard for early comps.

Adobe Firefly

Visual direction concept comps

Create several distinct concept visuals (different styles, lighting, composition, and graphic treatments) to compare and align on a direction early.

Adobe Firefly / DALL·E 3 (ChatGPT)
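
As a concrete sketch, the same comp-generation loop can be scripted against the OpenAI Images API. Everything in the prompt below (the fintech brief, the three style directions) is a hypothetical placeholder:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical style directions to compare for one brief.
const styles = [
  "minimal editorial, soft natural light",
  "bold neo-brutalism, flat color, hard shadows",
  "warm documentary photography, shallow depth of field",
];

async function generateConceptComps() {
  for (const style of styles) {
    const result = await client.images.generate({
      model: "dall-e-3",
      prompt: `Hero image for a fintech landing page, ${style}`,
      size: "1792x1024", // wide format suits hero comps
      n: 1, // DALL·E 3 generates one image per request
    });
    console.log(style, "->", result.data?.[0]?.url);
  }
}

generateConceptComps().catch(console.error);
```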

Color and texture exploration sets

Produce variations of the same scene with alternate palettes, materials, and textures to test mood and brand fit before committing to production.

Adobe Firefly / Krea AI

Light/dark/high-contrast theme sweeps

Prompt for multiple theme treatments of the same screen to quickly compare contrast, readability, and tone before committing to a direction.

Figma AI
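
Alongside the visual comparison, a numeric contrast check keeps a theme sweep honest. Below is a minimal TypeScript sketch of the WCAG 2.x contrast-ratio formula applied to three hypothetical theme candidates:

```typescript
// Convert an 8-bit sRGB channel to linear light (WCAG 2.x definition).
function srgbToLinear(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#RRGGBB" hex color.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) =>
    srgbToLinear(parseInt(hex.slice(i, i + 2), 16))
  );
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Hypothetical theme candidates for the same screen.
const themes = {
  light: { fg: "#1A1A1A", bg: "#FFFFFF" },
  dark: { fg: "#EAEAEA", bg: "#121212" },
  highContrast: { fg: "#FFFFFF", bg: "#000000" },
};

for (const [name, { fg, bg }] of Object.entries(themes)) {
  const ratio = contrastRatio(fg, bg);
  // WCAG AA requires 4.5:1 for normal text, 3:1 for large text.
  console.log(`${name}: ${ratio.toFixed(2)}:1 ${ratio >= 4.5 ? "passes" : "fails"} AA for body text`);
}
```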

Palette ideas for theme exploration

Generate candidate color palettes (e.g., minimal fintech, bold consumer, enterprise neutral) and apply them to components to evaluate tone and contrast.

Khroma / Figma
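
Applying a candidate palette across selected components can also be scripted inside Figma with the plugin API. A rough sketch, where the palette values are hypothetical (Figma paints use 0–1 RGB channels):

```typescript
// Hypothetical candidate palette, normalized to Figma's 0–1 channels.
const palette = {
  primary: { r: 0.04, g: 0.4, b: 1.0 },
  surface: { r: 0.97, g: 0.97, b: 0.98 },
};

// Recolor every selected node that supports fills,
// to eyeball the palette's tone on real components.
for (const node of figma.currentPage.selection) {
  if ("fills" in node) {
    node.fills = [{ type: "SOLID", color: palette.primary }];
  }
}

figma.closePlugin("Palette applied to selection");
```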

Style-direction sprint boards

Generate 30–100 fast style variants from a single creative brief to compare directions and select a shortlist for the client.

Midjourney
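
Midjourney's permutation syntax is one way to fan a single brief out into a sprint board: each {…} group multiplies the variant count, so the illustrative prompt below queues 4 × 2 × 3 = 24 jobs from one line (exact limits and parameter-level permutation support depend on your plan and model version):

```
moodboard tile for a wellness brand, {minimal editorial, bold neo-brutalism,
retro print, soft 3D render} style, {muted earth tones, saturated primaries}
palette --ar 4:5 --stylize {50, 250, 750}
```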

Consistent campaign imagery set

Lock a chosen look and produce a cohesive set of images for a campaign (ads, socials, headers) using consistent parameters and references.

Midjourney
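
In practice, "consistent parameters and references" usually means pinning the same style reference and stylization across every prompt in the set. A sketch of the idea, where the reference URL is a placeholder:

```
spring campaign, rooftop garden at dusk, product on a café table
--sref https://example.com/approved-look.png --stylize 200 --ar 16:9
```

Reusing the same --sref image and --stylize value, and varying only the scene and aspect ratio per placement, keeps ads, socials, and headers reading as one family.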

Moodboard concept image exploration

Generate multiple concept images to explore lighting, environments, and art direction, then curate a moodboard to guide the traditional build.

Midjourney

Community stories (5)

X

A Designer's AI Stack

In the past few months I've replaced or automated a lot of the work that used to sit around my design process - managing projects, writing briefs, searching for prompts I made a while back, building interactions I could picture but couldn't code.

Here are the tools that actually made that happen and how I use them to keep track of and automate everything boring in my process.




Claude + Midjourney


Midjourney has been a cornerstone for a lot of what I've been doing in the past year or so. The speed of style exploration is hard to match, and once I'm locked into a direction, the consistency I can get out of it is something clients are actually happy with. Lately I've had a couple of people reach out specifically wanting a visual style/system they could use for their own content.


What changed how I use it was bringing Claude into the process on the prompt side. I learned early to store favourite prompts, profiles, images and styles somewhere, but they'd end up spread across notes, different Notion pages, message threads - everywhere. Now they live in Notion in a structured way, and since Claude already has access to that, I can just ask for a style I built months ago instead of searching for it.


This also works for helping me put together prompts and styles in batches for clients. Instead of handing someone a finished image, I can give them a repeatable structure - a set of parameters and style descriptors that means every image they generate looks on-brand, even if they've never written a Midjourney prompt before. It removes a lot of back and forth and gives clients something useful as an additional deliverable.




A custom Gem agent


Workshop is probably my favourite Framer feature of them all - being able to create complex code components with simple prompts is a huge deal and it handles most of what I need day to day.


But I've always loved WebGL shaders, advanced scroll interactions, infinite galleries - anything with more complex logic than what's native in Framer. I found Workshop just doesn't do well with these more complex and detailed prompts, and that's where I switch to Gemini with a custom Gem I put together.


The Gem is trained on Framer's best practices for code components, so it already understands the structure, how property controls need to be set up, and how to write things so that whoever uses the component - myself, a template user or a client - doesn't need to know anything about the code. They just configure it through the UI like any other component.
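
For readers who haven't seen one, a Framer code component in that shape looks roughly like this. The component itself is a made-up minimal example, but addPropertyControls and ControlType are the real Framer APIs that surface props in the properties panel:

```tsx
import { addPropertyControls, ControlType } from "framer"

// Hypothetical minimal component; real Gem output is more involved.
export default function Badge(props: { label: string; fill: string; radius: number }) {
    const { label, fill, radius } = props
    return (
        <div style={{ background: fill, borderRadius: radius, padding: "8px 16px", display: "inline-block" }}>
            {label}
        </div>
    )
}

// Property controls expose these props in Framer's UI,
// so whoever uses the component never touches the code.
addPropertyControls(Badge, {
    label: { type: ControlType.String, defaultValue: "New" },
    fill: { type: ControlType.Color, defaultValue: "#0099FF" },
    radius: { type: ControlType.Number, defaultValue: 8, min: 0, max: 32 },
})
```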


This custom Gem really opened a door for me that always felt out of reach before, so if you're using Framer I'd definitely recommend it!




Claude + Notion


I have separate Notion pages for each Framer template I'm working on, client projects, and my own website redesign. I use them as a simple project management tool where I cross off tasks, note where I'm at and what I want to come back to, and generally keep track of things.


My problem with Notion has always been the friction of having to actually write and update things. I've been documenting my process because I know it's important, yet it's always been the most tedious thing to do.


The Claude connection to Notion fixed that. Now I just write to Claude - what I finished for the day, what I'm thinking, what I need to follow up on - and it updates the relevant pages for me. Took me less than an hour to set up and I've used Notion every day since.


For client projects it's been helpful for keeping short notes on where things stand. I also store client briefs in there and document my process on current Framer template work, which makes it easier to reflect on what worked, where and when I was slowed by something and what didn't work at all. It all ends up being useful source material for content to post on socials, too.




Flora Fauna


Midjourney is still where I do most of my visual exploration, but Flora Fauna has become the tool I reach for when I want to stay in one place and iterate quickly.


The node-based system is the main reason. Instead of rewriting long prompts from scratch, I'm working visually, branching off from generations I like, adjusting and building on what's already working. Having everything in one place rather than scattered across different chats or folders on my computer makes a real difference when I'm trying to keep a project coherent.


The video generation is worth a mention as well. It's mostly useful when a client project needs some motion or I want to push something further; being able to switch to a better video model from the same workspace - without starting over somewhere else - saves some time and nerves.




The point of all this


The stack will keep changing as these tools evolve, and I'm also aiming to build more custom setups and even small apps for myself based on specific use cases and workflows I haven't fully solved yet.


But the idea behind all of my AI use remains the same - I want to automate all of the mundane, repetitive and boring work, so I can give more time and attention to the creative work that I love doing myself!

Georgi, Designer
Feb 24, 2026
LinkedIn

Do I use AI?


Yes, but not for final production visuals, only in the background.

I needed some graffiti to go on the wall here. That was a quick generation, a background element with low importance. It was an image texture that had to be applied to the wall (the wall itself still being built in 3D).


I've also used it to brainstorm ideas and visual directions. It helps speed that process up. But the final visual still needs to be built traditionally.


I've tried using it to generate full backgrounds for products but was unsuccessful. The results weren't good enough for production.


I haven't used AI for hero products or final renders though. The details and the designer's hard work have to be respected down to the millimetre. AI sometimes changes small details and textures and that's not ok.

I've seen product renders produced by AI that look great, but on closer examination the actual shapes of the tiny parts and the texture patterns are not the same as on the original product.

Imagine you send a poster out to print and the printer freely changes the colours and fonts.


I know there are tools out there that can be trained on products, I haven't tried all of them yet.

For now I'm using traditional methods with AI helping speed up some background work. I like the control and flexibility.

Anthony Hoover, Freelance product visualization designer
Feb 24, 2026
LinkedIn

Figma AI just went from “cool beta feature” to “everywhere in my workflow.”

Figma AI just went from “cool beta feature” to “everywhere in my workflow.” As a Designer, this changes how we design, review and ship interfaces.

In early 2026, Figma rolled out full AI features to all users. You can now generate layouts from prompts, tweak visuals with AI and even create prototypes without touching a frame first.


Last week I rebuilt part of my UI process around it. Here is what actually worked:


1. I stopped starting at blank canvases

 • I now start with a prompt like "Create a mobile dashboard for a fintech app with cards for balance, recent transactions and quick actions."

 • Figma AI gives me 3 to 5 starting points that I refine instead of drawing rectangles for 30 minutes.


2. I use AI for "what if" exploration

 • Instead of 10 manual variants, I ask "Try a more minimal style" or "Push this towards a dark, high contrast theme."

 • It is perfect for exploring extremes while I stay focused on information architecture and flows.


3. I treat AI as a junior designer, not a senior one

 • It is great at speeding up layout, components and visual polish.

 • It is bad at understanding edge cases, accessibility trade-offs and product strategy. That is still my job.


4. Collaboration is getting faster

 • Developers can react to AI-generated screens earlier.

 • Stakeholders react better to “something real” instead of low fidelity boxes, which shortens the feedback loop.


Takeaway


AI inside Figma is not replacing my UX work. It is removing the mechanical parts so I can spend more time on research, flows and interaction details.


If you are a UX or product designer:

Would you let AI handle your first-draft layouts, or do you still prefer starting from scratch?


Reply with “AI first” or “Manual first” and tell me why.

Abhijit Das, UX Design Consultant at EY
Feb 26, 2026
LinkedIn

Using Adobe Firefly as a creative assistant (collaboration, not replacement)

It's 2026 - even your parents and grandparents use AI daily - so as designers, how could we not?

The real conversation isn't whether we use AI. It's how we use it.


AI doesn't replace taste. It doesn't replace strategy. And it definitely doesn't replace years of visual thinking.


But when used right, it becomes a power tool.


I personally use Adobe Firefly as an all-in-one creative assistant to build moodboards, test ideas and explore visual directions faster.


AI helps me move quicker. BUT I decide what's good.


What's your take on AI in the creative field? #AdobePartner

Sandra Kesserwany, Founder & Brand Designer, Sandra Creates Studio
Feb 24, 2026
LinkedIn

Most designers are using AI wrong (I use it to think better)

Most designers are using AI wrong. They ask it to create everything.
I use it to think better.


Here's how AI fits into my design process:

1. Idea Exploration

Before touching a single pixel, I use AI to generate multiple positioning angles for a product or concept. One idea becomes five. Five become direction.


2. Content & Messaging Clarity

Strong design starts with strong communication. AI helps refine hooks, headlines, and structure before visuals come in.


3. Moodboarding & Layout Thinking

I explore style directions, composition variations, and visual hierarchy faster, so I design with intention, not guesswork.


4. Faster Iterations

Instead of staring at a blank screen, I start with momentum.


I test concepts quicker.

I refine smarter.


But here's the important part:

AI gives options. ✅ I make decisions.

AI speeds execution. ✅ I bring taste.

Creativity is still human. AI just removes friction.


Designers who learn to collaborate with AI, not compete with it, will grow faster.

Prashant Chaudhary, Strategy-driven creative (visual design & video production)
Feb 23, 2026