ℝ𝕖𝕩𝕠


Articles

Qwen-Image-Edit & Flux.2 Klein Prompt Guide

Follow these tips to change your image like a pro using QIE and Flux.2 Klein 🤙

This article originally described prompting for Qwen-Image-Edit, but initial testing shows that the same prompt tips also work with Flux Klein. I will correct this article if any differences become apparent.

Qwen-Image-Edit Prompt Guide: The Complete Playbook

👉 Original guide created by u/gsreddit777
👉 Link to guide on Reddit here

⸻

u/gsreddit777 says:

"I've been experimenting with Qwen-Image-Edit, and honestly… the difference between a messy fail and a perfect edit is just the prompt. Most guides only show 2–3 examples, so I built a full prompt playbook you can copy straight into your workflow.

This covers everything: text replacement, object tweaks, style transfer, scene swaps, character identity control, poster design, and more. If you've been struggling with warped faces, ugly fonts, or edits that break the whole picture, this guide fixes that."

⸻

📚 Categories of Prompts

⸻

📝 1. Text Edits (Signs, Labels, Posters)

Use these for replacing or correcting text without breaking style.

• Replace text on a sign
For example: Replace the sign text with "GRAND OPENING". Keep original font, size, color, and perspective. Do not alter background or signboard.

• Fix a typo on packaging
For example: Correct spelling of the blue label to "Nitrogen". Preserve font family, color, and alignment.

• Add poster headline
For example: Add headline "Future Expo 2025" at the top. Match font style and color to existing design. Do not overlap the subject.

⸻

🎯 2. Local Appearance Edits

Small, surgical changes to an object or clothing.

• Remove unwanted item
For example: Remove the coffee cup from the table. Keep shadows, reflections, and table texture consistent.

• Change clothing style
For example: Turn the jacket into red leather. Preserve folds, stitching, and lighting.

• Swap color/texture
For example: Make the car glossy black instead of silver. Preserve reflections and background.

⸻

🌍 3. Global Style or Semantic Edits

Change the entire look but keep the structure intact.

• Rotate or re-angle
For example: Rotate the statue to show a rear 180° view. Preserve missing arm and stone texture.

• Style transfer
For example: Re-render this scene in a Studio Ghibli art style. Preserve character identity, clothing, and layout.

• Photorealistic upgrade
For example: Render this pencil sketch scene as a photorealistic photo. Keep pose, perspective, and proportions intact.

⸻

🔎 4. Micro / Region Edits

Target tiny details with precision.

• Fix character stroke
For example: Within the red box, replace the lower component of the character '稽' with '旨'. Match stroke thickness and calligraphy style. Leave everything else unchanged.

• Small object replace
For example: Swap the apple in the child's hand with a pear, keeping hand pose and shadows unchanged.

⸻

🧍 5. Identity & Character Control

Preserve or swap identities without breaking features.

• Swap subject
For example: Replace the subject with a man in sunglasses, keeping pose, outfit colors, and background unchanged.

• Preserve identity in new scene
For example: Place the same character in a desert environment. Keep hairstyle, clothing, and facial features identical.

• Minor facial tweak
For example: Add glasses to the subject. Keep face, lighting, and hairstyle unchanged.

⸻

🎨 6. Poster & Composite Design

For structured layouts and graphic design edits.

• Add slogan without breaking design
For example: Add slogan "Comfy Creating in Qwen" under the logo. Match typography, spacing, and style to design.

• Turn sketch mock-up into final poster
For example: Refine this sketched poster layout into a clean finished design. Preserve layout, text boxes, and logo positions.

⸻

📷 7. Camera & Lighting Controls

Direct Qwen and Flux.2 Klein like a photographer.

• Change lighting
For example: Relight the scene with a warm key light from the right and cool rim light from the back.
Keep pose and background unchanged.

• Simulate lens choice
For example: Render with a 35 mm lens, shallow depth of field, focus on the subject's face. Preserve environment blur.

⸻

💡 Pro Tips for Killer Results

• Always add "Keep everything else unchanged" → avoids drift.
• Lock identity with "Preserve face/clothing features".
• For text → "Preserve font, size, and alignment".
• Don't overload one edit. Chain 2–3 smaller edits instead.
• Use negatives → "no distortion, no warped text, no duplicate faces."

⸻

Extra tips from the comments on the original Reddit piece:

"Prompting for natural lighting also seems to help realism"

"What's a good prompt example for face swap or combining characters?"

"Try this - Replace the person's face in this photo with the face from the second image. Keep hairstyle, body pose, clothing, and background unchanged. Blend the new face naturally with lighting and skin tone."

"You can adjust lighting on a character using descriptive text prompts that focus on camera angles or light source positions. While precise numerical angles aren't supported, you can describe the lighting relative to the camera angle for realistic results.
Example Prompt: Change the lighting on the character to come from directly above, simulating a top-down camera angle, with soft shadows under the eyes and chin, maintaining a cool, moonlight glow."

⸻

There's also this guide, FWIW: Complete AI Image Editing Prompt Guide:
https://imagebyqwen.com/prompt

⸻

Editing Photos & Photoreal Gens with Flux.2 Klein

👉 Original guide created by u/JIGARAYS
👉 Link to guide on Reddit here

u/JIGARAYS says:

"The Problem: If you are using Flux 2 Klein (especially for restoring/upscaling old photos), you've probably noticed that as soon as you describe the subject (e.g., "beautiful woman," "soft skin") or even the atmosphere ("golden hour," "studio lighting"), the model completely rewrites the person's face. It hallucinates a new identity based on the vibe.

The Fix: I found that Direct, Technical, Post-Processing Prompts work best. You need to tell the model what action to take on the file, not what to imagine in the scene. Treat the prompt like a Photoshop command list.

If you stick to these "File-Level" prompts, the model acts like a filter rather than a generator, keeping the original facial features intact while fixing the quality."

The "Safe" Prompt List:

1. The Basics (Best for general cleanup)
remove blur and noise
fix exposure and color profile
clean digital file
source quality

2. The "Darkroom" Verbs (Best for realism/sharpness)
histogram equalization (Works way better than "fix lighting")
unsharp mask
micro-contrast (Better than "sharp" because it doesn't add fake wrinkles/lashes)
shadow recovery
gamma correction

3. The "Lab" Calibration (Best for color)
white balance correction
color graded
chromatic aberration removal
sRGB standard
reference monitor calibration

4. The "Lens" Fixes
lens distortion correction
anti-aliasing
reduce jpeg artifacts

My 'Master' Combo for Restoration:
"clean digital file, remove blur and noise, histogram equalization, unsharp mask, color grade, white balance correction, micro-contrast, lens distortion correction."

"TL;DR: Stop asking Flux.2 Klein to imagine 'soft lighting.' Ask it for 'gamma correction' instead. The face stays the same, the quality goes up."

⸻

Have fun, and let us know of any other tips you may have in the comments!

— ℝ𝕖𝕩𝕠
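As a small appendix: the prompt pattern the tips above keep repeating (state the action, list explicit "preserve" clauses, add negatives, then a catch-all lock phrase) can be sketched as a tiny helper. This is purely illustrative; `build_edit_prompt` is a hypothetical name I made up, not part of Qwen-Image-Edit, Flux.2 Klein, or any real tool:

```python
def build_edit_prompt(instruction, preserve=(), negatives=()):
    """Assemble an image-edit prompt following the guide's pattern:
    action first, then explicit 'preserve' clauses, then negatives,
    then the catch-all 'Keep everything else unchanged.'"""
    parts = [instruction.rstrip(".") + "."]
    if preserve:
        parts.append("Preserve " + ", ".join(preserve) + ".")
    if negatives:
        parts.append("Negatives: " + ", ".join(negatives) + ".")
    parts.append("Keep everything else unchanged.")
    return " ".join(parts)

prompt = build_edit_prompt(
    "Turn the jacket into red leather",
    preserve=("folds", "stitching", "lighting"),
    negatives=("no distortion", "no warped text"),
)
print(prompt)
```

The point of the structure, per the guides: the edit action leads, and everything you do not mention is explicitly locked down at the end.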
Emphasis, weighting () and down-weighting ⚠️ You might be doing it wrong! 😳

Hey 🙂

TL;DR:
• Be careful when using weighting with SD 1.5. Use keyword order and detail instead (see below), then on SD 1.5 use weighting as an optional, complementary method.
• Barely use it, or don't use it at all, with SDXL (including Pony-based models and Illustrious) or SD 3.5. Use keyword order and detail instead.
• Don't use it at all with Flux! Weighting was developed for use with Stable Diffusion, which is an entirely different diffusion model. Flux uses natural language prompts, so correct punctuation is vital. Weighting is not natural language, so using weighting with Flux may damage image quality.

With AI image generation, a lot has been written and discussed about how and when to use weighting (also known as attention), which is used in prompts in the following formats:
(keyword:1.2)
or
((((keyword))))
or
[[keyword]]

Unfortunately, much of the information in circulation is incorrect. The misuse of weighting risks damaging your image quality. See below for tips on how to best use weighting, including when not to use it.

⸻

Emphasis via keyword order and detail

By default, AI image-generation diffusion models on Tensor (both SD and Flux) use 2 techniques for understanding which elements of your image are the most important:
• Keyword order: what you put at the beginning of a prompt is given more attention by the AI than what goes at the end of the prompt; and
• Detail: the AI logically assumes that what you describe in the most detail is what you most want to see. Even if you add a specific, detailed image element near the end of the prompt, the AI will nevertheless pay more attention to it.

⸻

Weighting with parentheses, e.g., (keyword:1.2)

Like many of its competitors, Tensor.Art has installed a plugin for Stable Diffusion (SD) which allows you to "add weight" to a keyword in order to make the AI emphasize it when generating the image.

⚠️ Ideally, adding weight should be used only after first optimizing the keyword order and detailed elements as described above — and IMO it should be used sparingly.

Also:
• Avoid using weighting with SDXL and SD 3.5. You might be able to get away with one minimally weighted keyword (using simple parentheses), but anything stronger will often damage image quality. The maker of SDXL (Stability AI) has expressly advised against using weighting with versions of SD more recent than 1.5 (SDXL and SD 3.5).
• Don't use curly brackets {}. On other installations of Stable Diffusion, curly brackets can be used for weighting — but not on Tensor.Art!
• Don't use weighting with Flux. Not only will it not work, it may damage your image!

In any case, you can add weight in 2 different ways:
Method #1: Enclose keywords within multiple consecutive parentheses, e.g., ((((keyword))))
Method #2: Enclose keywords in parentheses and apply a "weight" (value) between 1 and 2, e.g., (keyword:1.2)

For Method #1, each set of parentheses (keyword) multiplies the weight by 1.1, so:
(keyword) = (keyword:1.1)
((keyword)) = (keyword:1.21) = 1.1 × 1.1
(((keyword))) = (keyword:1.331) = 1.1 × 1.1 × 1.1
((((keyword)))) = (keyword:1.4641) = 1.1 × 1.1 × 1.1 × 1.1
etc.

So which one to use? IMO use Method #2, which allows more precise, incremental control. I've found that even changing the value from, say, (keyword:1.1) to (keyword:1.11) or even (keyword:1.105) can make all the difference.

(Why does Method #1 even exist? My assumption is that it was developed first, then left in the code for backward compatibility so that existing prompts still worked after the plugin was updated with Method #2.)

Making things even more complicated is the fact that Methods #1 and #2 are cumulative, so ((((keyword:1.4)))) = (keyword:1.4) × 1.1 × 1.1 × 1.1 × 1.1 = (keyword:2.05).

However, any value over 1.7 can severely damage image quality, often to the point where the image will be a garbled mess. Moreover, even lower weights like 1.4 and 1.3 can degrade image quality. This is known as "overcooking the gen".

⸻

Overcooking the gen

Weighting is a very powerful plugin for SD. Even adding weight to one keyword can change your image dramatically, sometimes for the worse: the image becomes blurry or low-resolution, or extra limbs or unidentifiable objects appear, or bodies become laughably twisted. In other words, the image can get "overcooked".

So again: when weighting with SD 1.5, it's important to use a light touch, and with SDXL, avoid using weighting at all, as even a little weighting will often fry the gen.

⚠️ On SD 1.5, a common rule is to avoid any weight over 1.5. And it's a good rule — however, the number of weighted keywords also matters, so I advise avoiding a total weight of all weighted keywords above 1.4, otherwise the gen may get overcooked.

How can I avoid overcooking?
• Follow the ideal workflow for emphasis below.
• Avoid using weighting with SDXL and SD 3.5.
• It's also possible to overcook the gen by adding too many LoRAs or by weighting LoRAs too heavily.
• A similar "overcooked" effect can also occur when Clip Skip is not set to 2. This is true for all Pony-based models, for example. Always use a checkpoint's recommended settings for optimal results.

⸻

Ideal workflow for telling the AI what you really want to see
• Describe in greater detail those elements you want to appear prominently in the image;
• Place the most important keywords near the front of the prompt;
• Leaving everything unweighted, click Generate.
(Even better: generate several different images, one after the other, then choose the best of the lot. This is known as "seed hunting" 😉)
• If a certain keyword isn't showing (enough) in the "gen" (generated image), rearrange keywords to optimize keyword order and/or describe the most important elements in greater detail. Re-generate.
• SD ONLY: If you're still not getting what you want, progressively add minimal weighting — parentheses () only, without any numerical weight — to those elements that seem to need it. Re-generate.
• SD 1.5 ONLY: If you're still not getting what you want, for certain critical keywords, progressively increase weighting incrementally — say, from 1.1 to 1.15.
• SD 1.5 ONLY: Repeat the previous step until you succeed OR until the gen starts to overcook. If it does start to overcook, decrease weighting and re-think your prompt, starting from the first step.

⸻

DEMO: Keyword order vs. weighting

The image below uses the following prompt:
girl, sundress, tropical beach, seagulls, sailboat, sunset, rundown shack
All elements are represented except "rundown shack":

Let's decide that we'd rather see the shack than the boat. So, reusing the seed, we add a weight of 1.3 to "rundown shack":
girl, sundress, tropical beach, seagulls, sailboat, sunset, (rundown shack:1.3)
Oops! It's a nice gen, but we've lost the girl:

Instead, let's change the keyword order by placing "rundown shack" closer to the start of the prompt:
rundown shack, girl, sundress, tropical beach, seagulls, sailboat, sunset
Better 👍

⸻

Down-weighting with SD 1.5

"Down-weighting" (also known as "light-weighting") is basically telling the AI: "Apply this keyword, but a slightly lighter version of it."

As with weighting, there are 2 different methods for down-weighting:
Method #1: Enclose keywords within multiple consecutive square brackets, e.g., [[[keyword]]]
Method #2: Enclose keywords in parentheses (not brackets!)
and apply a "weight" (value) between 0 and 1, e.g., (keyword:0.5)

For Method #1, each set of square brackets [keyword] divides the weight by 1.1, so:
[keyword] = (keyword:0.9091) = 1 ÷ 1.1
[[keyword]] = (keyword:0.8264) = 1 ÷ (1.1 × 1.1)
[[[keyword]]] = (keyword:0.7513) = 1 ÷ (1.1 × 1.1 × 1.1)
[[[[keyword]]]] = (keyword:0.6830) = 1 ÷ (1.1 × 1.1 × 1.1 × 1.1)
etc.

As with weighting, use Method #2 — for example: (keyword:0.5) — as it allows for more granular control.

Please note:
• You cannot down-weight by adding numbers to square brackets — for example, [keyword:0.5] will not result in down-weighting; only add numbers to parentheses ().
• As with weighting, don't use down-weighting with SDXL, SD 3.5, or Flux, as it will often damage image quality.
• When using down-weighting with SD 1.5, be aware that the effect is subtle. The difference between "keyword" and "(keyword:0.9)" will be quite noticeable; however, the difference between, say, (keyword:0.5) and (keyword:0.4) may be so subtle as to be unnoticeable. On the other hand, this allows for some very granular fine-tuning.

Below ⬇️ is a demo gallery starting with the prompt:
portrait of a chubby businessman, park
You'll notice that even at (chubby:0.1), the subject will still be chubby. In other words, (chubby:0.1) ≠ thin; (chubby:0.1) = the least amount of chubbiness.
You'll also notice that the lower the down-weighting, the lower the lighting on the subject: the weighting plugin can mess with your image in many unexpected ways, so use it with care!

⸻

Common weighting errors

Method #2 weighting and down-weighting require both parentheses "()" and a colon ":". If you use incorrect syntax, at best weighting will not be applied; at worst, it may throw errors and diminish image quality.

❌ Portrait of a chubby1.2 businessman
❌ Portrait of a chubby:1.2 businessman
❌ Portrait of a (chubby1.2) businessman
❌ Portrait of a (chubby)(1.2) businessman
❌ Portrait of a {chubby:1.2} businessman
✅ Portrait of a (chubby:1.2) businessman

❌ Portrait of a [chubby:0.5] businessman
✅ Portrait of a (chubby:0.5) businessman

⸻

Conclusion

Results may vary! There are so many different checkpoints, LoRAs, etc., that this guide is of course not definitive.

Nor is this THE one and only way to do it: some people use very different emphasis & weighting methods than those described in this guide, yet still produce amazing, high-quality gens.

I based this guide on more than 2 years of trial and error and research, as well as on official usage tips provided by Stability AI — but if you develop your own techniques and they work for you, great!

While I always try to give accurate advice, I'm only human and (so) I make mistakes, lol. If you find a mistake in this guide, do let me know in the comments and I'll gladly correct it 😇

Have fun 🤟

Example of down-weighting
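For readers who like to see the arithmetic spelled out: the Method #1 multiply/divide rules and the Method #2 syntax requirement can be sketched in a few lines of Python. This is only an illustration of the rules described in this guide; `nesting_weight` and `parse_method2` are hypothetical helpers, not the actual parser used by Tensor.Art or any other UI:

```python
import re

def nesting_weight(fragment):
    """Effective weight implied by Method #1 nesting: each pair of
    parentheses multiplies the weight by 1.1, each pair of square
    brackets divides it by 1.1. Assumes uniform nesting such as
    ((keyword)) or [[keyword]]."""
    ups = downs = 0
    while fragment.startswith("(") and fragment.endswith(")"):
        fragment = fragment[1:-1]
        ups += 1
    while fragment.startswith("[") and fragment.endswith("]"):
        fragment = fragment[1:-1]
        downs += 1
    return 1.1 ** ups / 1.1 ** downs

# Method #2 requires BOTH parentheses and a colon: (keyword:1.2)
METHOD2 = re.compile(r"^\(([^()\[\]:]+):(\d+(?:\.\d+)?)\)$")

def parse_method2(fragment):
    """Return (keyword, weight) for valid Method #2 syntax, else None."""
    m = METHOD2.match(fragment)
    return (m.group(1), float(m.group(2))) if m else None

print(round(nesting_weight("((((keyword))))"), 4))  # 1.1^4 = 1.4641
print(round(nesting_weight("[[keyword]]"), 4))      # 1/1.1^2 = 0.8264
print(parse_method2("(chubby:1.2)"))                # valid Method #2
print(parse_method2("{chubby:1.2}"))                # invalid: curly brackets
```

Note how the checker rejects curly brackets and bracketed numeric weights, matching the ❌ examples in the "Common weighting errors" list above.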