Mexes
My goal is to have a great library of LoRAs in many styles so that you can find what you want in one place ;)
499 Followers · 11 Following
1.2M Runs · 347 Downloads · 1.7K Likes · 13.5K Stars

Models

[LORA · Illustrious · EXCLUSIVE] Soft Realistic 3D | Mexes -V1.0 Illustrious (77 / 18)
[CHECKPOINT · Illustrious · EXCLUSIVE] Glossy Hentai | Mexes-V1.0 (781 / 52)
[LORA · Illustrious · EXCLUSIVE] Haunted Pastel | Mexes-Illustrious-V1.0 (588K / 3K)
[CHECKPOINT · Illustrious · EXCLUSIVE] Niji Crystal | Mexes-🌕V2.0 (50K / 1.4K)
[CHECKPOINT · Illustrious] Hexer Minimal Toon | Mexes-V3.1 (4.6K / 196)
[CHECKPOINT · Illustrious · EXCLUSIVE] Hexer Specular Anime | Mexes-V1.0 (44K / 958)
[CHECKPOINT · Illustrious · EXCLUSIVE] Hexer Semi Real (3D / Anime) | Mexes-🔹V1.2 Standard (41K / 685)
[CHECKPOINT · Illustrious · EXCLUSIVE] Aquamatic-MIX | Illustrious | Mexes-Illustrious-V1.0 (72K / 538)
[LORA · Illustrious · EXCLUSIVE] Haunted Vintage | Illustrious | Mexes | 80s 90s-Illustrious-V0.4 (35K / 500)
[LORA · Illustrious · EXCLUSIVE] Haunted Minimal | Illustrious | Mexes-Illustrious-V0.4 (140K / 1.1K)
[LORA · Illustrious] Oroborus Art Style | Artist styles | Mexes-v1.0 (59 / 6)
[LORA · Illustrious] Ourobot Art Style | Artist styles | Mexes-v1.0 (3 / 4)
[LORA · Illustrious] Nxhqt | Artist styles | Mexes-v1.0 (0)
[LORA · Illustrious] Kyezzzz Art Style | Artist styles | Mexes-v1.0 (10 / 2)
[LORA · Illustrious] Binggong Asylum Art Style | Artist styles | Mexes-V1.0 (71 / 6)
[LORA · Illustrious] Synth Robot Girls | Mexes-V1.0 (2 / 2)
[LORA · Illustrious] billdadinosaur Art Style | Artist styles-v1.0 trigger word (20 / 7)
[LORA · Illustrious] aka6 Art Style | Artist styles-v1.0 (38 / 6)
[LORA · Illustrious] Solid Eyes | Concept-v1.0 (212 / 11)
[LORA · Illustrious] Sadamoto Yoshiyuki Art Style | Artist styles-v1.0 (8 / 3)

Articles

About the trigger word in LoRa training

Following this series of articles on LoRA training, today it's time to touch on the subject of the trigger word in style LoRAs. I invite you to read the previous article, where I covered the Text Encoder, as it might help you better understand today's concepts: Training the Text Encoder in LoRa: Why it Matters for Style | Civitai

Note: This article aims to be easy to understand. I will not use complex technical terminology (like weights, vectors, or matrices) and will even skip over some deep theoretical concepts to simplify understanding.

In all my time training style LoRAs, I have mostly not trained them with a trigger word, for the sake of convenience: by not having to worry about whether you are using the keyword or not, you simply apply the LoRA and forget about the rest. For this experiment, however, using the same dataset as the previous article, I trained another LoRA, simply adding a trigger word to all the images. But first, let's go through the theory before looking at the results and differences. Although the examples given here focus on style LoRAs, many concepts apply equally to character LoRAs.

What is the Trigger Word?

The trigger word is the tag you will use to activate your LoRA... forgive the redundancy. The idea is that this tag is like an empty "container" that we will fill with whatever we want to train. Usually, invented words or tags with unique characters (like letters swapped for numbers) are used to ensure that the tag doesn't already exist in the model's knowledge, giving us a clean canvas.

There are two ways to use the trigger word:

Removing tags: For example, if the style you want to train is a realistic style, you should delete the tags that represent the realistic style and commonly appear with the auto tagger, like "nose, lips, realistic, photorealistic". This causes those characteristics to associate directly with the trigger word, since they aren't tagged.

Keeping tags: If you leave "nose" and "lips" written in the dataset, your results with only the trigger word likely won't have the same noses or lips seen in the dataset images, but you will get the colors and strokes of your style. If you want the same facial structure, it's enough to write "nose" and "lips" in the prompt. This approach is useful for flexible LoRAs.
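To make both methods concrete, here is a minimal sketch of the caption edit, assuming a kohya-style dataset where each image has a sidecar .txt file of comma-separated tags. The folder name, the invented trigger "m3xstyle", and the list of style tags are hypothetical examples, not my actual setup:

```python
from pathlib import Path

DATASET_DIR = Path("dataset/10_m3xstyle")  # hypothetical dataset folder
TRIGGER = "m3xstyle"                       # hypothetical invented trigger word
# "Removing tags" method: strip the tags that describe the style itself so the
# trigger word absorbs them. Leave this set empty for the "Keeping tags" method.
STYLE_TAGS = {"realistic", "photorealistic", "nose", "lips"}

for caption_file in DATASET_DIR.glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
    tags = [t for t in tags if t and t not in STYLE_TAGS and t != TRIGGER]
    # Put the trigger first; with shuffle_caption enabled you would typically
    # pair this with keep_tokens=1 so it always stays at the front.
    caption_file.write_text(", ".join([TRIGGER] + tags), encoding="utf-8")
```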
Is it necessary?

Let's analyze the two approaches:

Method without trigger: By not using a trigger word when training a style LoRA, the existing tags are modified. The most common one is 1girl, but in general the style trains onto the various common tags within the dataset. What happens then? In ambiguous prompts with fewer than 5 tags, for example, the style will be weak and look diluted. But when using many tags, specifically those used most in the dataset, the style becomes much more present.

Method with trigger: By using a trigger word, we give the training a specific word where it can put everything that isn't tagged in the image. For a style, this would be the lineart, the brushstrokes, the color, and so on. But you also have to be much more careful with tagging and prioritize a varied dataset, to prevent the trigger word from picking up objects, concepts, poses, and the like. However, unlike the method without a trigger, we would only need this trigger word for the style to appear with all (or most) of its characteristics (depending on which approach we chose for the trigger word).

Note: This graph is a simplification; this would have to happen several times within the dataset.

The Problem with the Trigger Word

As I mentioned before, our trigger word is a container waiting to be filled with information. How does the model know what information to put in there? Simple: the model looks at what doesn't change between images and associates it with your tag. If we want to train a chair, that chair must appear in all images with the trigger word.

The problem: if the dataset isn't very varied, the model might associate unwanted things. Let's say that in all the photos of the chair, a table also appears in the background. Since the table doesn't change and always accompanies the trigger, the model will think that "ch4ir" means "a chair AND a table", and will start putting the table inside the concept. (In this example, it would suffice to tag "table" in the dataset, since it's something the model already has broad knowledge of; that prevents the table from being associated with the chair. The real problem occurs with things like poses, gestures, and other things we usually don't tag, or that the model doesn't know well.)
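To picture the fix, here is a hypothetical caption for one of those chair photos ("ch4ir" being the invented trigger word):

```
ch4ir, indoors, wooden floor, window         <- table untagged: it leaks into the trigger
ch4ir, table, indoors, wooden floor, window  <- table tagged: the trigger stays clean
```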
Once this is understood, let's move on to the examples and visual differences.

Analysis of Results

Note: All examples were made with the same configuration and a static seed.

1. Presence of style: Right off the bat, you can tell that with the trigger word the style looks much more present. However, some flaws also appear, such as the blended color of the kimono and the change in the hand gesture compared to the version without a trigger. This could be an indication that our LoRA was already overtraining, so we'll overlook that. In these examples, the strength of the style is much more noticeable using the trigger word, with little change in the general composition.

2. Background problem with the trigger word: Here is an example that shows the problem we mentioned earlier in the theory. Leaving aside the fact that the intensity difference between "with" and "without" is very large, let's talk about the background in the trigger-word example. That background style appears in the vast majority of images in the dataset, and it seems the trigger word has been learning it: specifically, that it is a solid color with a border and a subtle pattern. This is easy to fix. Since the prompt didn't specify a background, the trigger filled the void with what it saw most (that repeated background). Simply prompting a setting (beach, street, city) or a color (white background, green background) will most likely make it stop appearing.

3. Prompt taken from the dataset: In this example I took a prompt directly from the original dataset. There is much less difference between the "without" and "with" trigger versions. This relates to what we explained in the theory section: the style began to associate with different tags within the dataset. This means that, in the version without a trigger, to get a result as strong as the version with a trigger, we must use mostly the same descriptive tags that were used in the dataset.

So, which one to choose?

The option with a trigger word seems to be the best, but it involves a bit more work behind the scenes, since it needs a more varied and much better tagged dataset. Choose whichever fits your workflow best. For my part, I think I will start training more LoRAs with trigger words.

If you are knowledgeable about this topic and notice I've made a mistake at any point, please let me know! The last thing I want to do is misinform people; if that happens, I will edit this article as soon as possible to correct the errors.

LoRA Configuration

Base Model: Illustrious V1.0
Repeats: 5
Epochs: 10
Steps: 990
Batch Size: 4
Clip Skip: 1
UNet learning rate: 0.0005
LR Scheduler: cosine_with_restarts
LR scheduler num cycles: 3
Optimizer: AdamW8bit
Network Dim: 32
Network Alpha: 16
Min SNR Gamma: 5
Noise Offset: 0.1
Multires noise discount: 0.3
Multires noise iterations: 8
Zero Terminal SNR: True
Shuffle caption: True

Other articles that may interest you:
Web App: Booru Prompt Gallery V5.1 | Civitai
Web App: Booru Tag Gallery | Civitai
App: Regional Multi Crop - Dataset Tool | Civitai
Tools I use for LoRa training | Civitai
Training the Text Encoder in LoRa: Why it Matters for Style | Civitai

That's it! Stay hydrated and don't forget to blink.
Training the Text Encoder in LoRa: Why it Matters for Style

A while back, I read a comment on the Civitai Discord where someone mentioned that training the Text Encoder (TE) shouldn't, in theory, be necessary. Recently, wolf999 published an interesting article (illustrious lora training best settings 2026 sdxl | Civitai) recommending settings where the TE is left untouched. However, after several tests, I believe the story is different, at least for style LoRAs. In this article I want to explain why I consider training the TE vital for capturing the essence of a style, keeping it simple and free of heavy technical jargon.

What is the TE (Text Encoder)?

In short, the Text Encoder is the translator. It is responsible for converting user text (your prompts) into numbers (vectors) that the UNet can understand. Think of it as a guide giving the UNet coordinates for where to look in its "memory" to draw what you are asking for.

What happens when we train the TE?

By training the TE, we adjust those "coordinates" to point to a more specific location related to the strokes of the style we are aiming for. Think of it this way: Illustrious has vast knowledge of how to represent "grey hair."

Without training the TE: The model uses its "average" definition of grey hair. The UNet tries to visually force your style, but the underlying concept remains generic. This "dilutes" the style.

Training the TE: We tell the model: "When I say 'grey hair', don't look for the average; look for the version of 'grey hair' that matches these artistic strokes."

If we only train the UNet, we reinforce how the image looks, but the TE keeps pointing to the generic concept. This creates a constant struggle between the model's base style and your LoRA. By training the TE, we align the concept with the style.

For the example images, I trained two LoRAs with the exact same settings, except that one has the TE learning rate at 0.00005 and the other at 0. (Full configuration at the end of the article.) We can see that when training only the UNet, you can indeed tell that something related to the target style was trained: the shadows are harder rather than simply blurred, the anatomy changes slightly, and the color palette shifts. However, you could say the style is effectively being diluted. When we train the TE, the change is instantly noticeable. The facial structure changes completely; the shadows that were barely visible in the UNet-only training are now much more pronounced, along with other style characteristics like the line art. With this example, we can conclude that training the TE is indeed necessary, at least for style LoRAs.

I must clarify that this was a 2-day investigation. I skipped over several technical concepts but tried to ensure the information presented here is accurate. If you are knowledgeable about this topic and notice I've made a mistake at any point, please let me know! The last thing I want to do is misinform people; if that happens, I will edit this article as soon as possible to correct the errors.
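For anyone who wants to reproduce this A/B test, here is a minimal sketch using kohya-ss sd-scripts, which exposes separate learning rates for the UNet and the TE. The paths, output names, and reduced flag set are hypothetical; I'm only illustrating the two learning-rate flags, not claiming this is the exact command used here.

```python
import subprocess

# Shared kohya-ss train_network.py flags (paths and names are hypothetical).
common = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "Illustrious-v1.0.safetensors",
    "--train_data_dir", "dataset",
    "--network_module", "networks.lora",
    "--network_dim", "32", "--network_alpha", "16",
    "--unet_lr", "0.0005",
]

# Run A: TE trained at a low learning rate (the comparison in this article).
subprocess.run(common + ["--text_encoder_lr", "0.00005", "--output_name", "style_te"])
# Run B: TE effectively left untouched.
subprocess.run(common + ["--text_encoder_lr", "0", "--output_name", "style_no_te"])
```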
UPDATE 11/27/2025

It is important to clarify a side effect: by training the TE, we are creating a somewhat "selfish" LoRA. Since we are modifying how the model interprets text, it will likely conflict if we try to mix it with other LoRAs that are also trying to modify those same rules. Using the previous infographic, imagine that "pointer" struggling to agree with two different LoRAs simultaneously. This can cause the style to become diluted or, in the worst case, introduce visual artifacts.

User n_Arno brought up an interesting point in the comments: if the dataset is perfectly tagged, training the TE shouldn't be necessary, as the UNet would learn to associate those tags with the visual style. While this is a valid approach for certain cases, I see two main issues when it comes to style LoRAs:

Manual tagging: Tagging absolutely everything visible in every single image is a massive undertaking.

Generalization: This is the key point. If a chair never appears in your style dataset, a UNet-only training will likely draw a generic chair that doesn't fit the rest of the image. By training the TE, the model learns to "interpret" any concept through the lens of your artistic style. (Note: This is purely theoretical; I haven't run specific tests to back this claim up yet.)

If you notice artifacts when combining your style LoRA (with TE) with other resources, there are simple solutions I've successfully tested:

Lowering the strength: I ran some tests combining exactly 5 LoRAs. At full weight (1.0), you could practically see the model screaming in agony trying to produce a result. However, when using multiple LoRAs, we usually say: "Hmm... give me a bit of this, a bit more of that... and I definitely want that specific anatomy, so..." which results in varied weights (e.g., 0.2, 0.3, 0.8, 0.6, 0.6). Even with this approach, I encountered artifacts in specific cases (like the headphones in this example). The fix? I simply lowered that specific LoRA's weight from 0.8 to 0.7. While this largely solved the issue, it feels like a superficial fix.

Choosing the right epoch: The solution that worked best was selecting an earlier epoch. I generally train my LoRAs for 10 epochs, but I always test both epoch 10 and epoch 8. If epoch 8 looks virtually identical to epoch 10, I choose the 8th. Even if epoch 10 doesn't show signs of overfitting, I prefer sticking with epoch 8 to leave myself much more headroom for combining weights later on.

LoRA Configuration

Base Model: Illustrious V1.0
Repeats: 5
Epochs: 10
Steps: 990
Batch Size: 4
Clip Skip: 1
UNet learning rate: 0.0005
LR Scheduler: cosine_with_restarts
LR scheduler num cycles: 3
Optimizer: AdamW8bit
Network Dim: 32
Network Alpha: 16
Min SNR Gamma: 5
Noise Offset: 0.1
Multires noise discount: 0.3
Multires noise iterations: 8
Zero Terminal SNR: True
Shuffle caption: True

That's it! Stay hydrated and don't forget to blink.
Tools I use for LoRa training

Hi everyone! Training LoRAs is a process that can be very time-consuming, especially dataset preparation. After many training sessions, I've found some tools that save me hours of manual work, and I have compiled them in this article. The title of each tool contains the direct link to access it.

Notice: I use these applications on a system with a Ryzen 5 9600X and an RX 570 4GB.

1. Grabber (Image Collection)

This application is wonderful for mass image collection, especially if you work with boorus (anime-style image sites).

Main function: It allows you to search by tags on multiple sites simultaneously and quickly download the images in their highest quality.

Additional function: It can automatically create a .txt file with the site's own tags for each downloaded image.

How to set up Grabber's auto-tagging:

1. Go to Tools > Options.
2. In the Options menu, expand the Save tab.
3. Go to Separate Log Files and create a new one.
4. Configure the values shown in this image: %character:spaces,separator=^, %, %general:spaces,separator=^, %
5. Then, in the main application window, go to the Destination panel (on the left).
6. In the Name field, use the nomenclature shown in this image: %md5%.%ext%

And that's it! Each downloaded image will now come with its corresponding tags file.

My advice: Personally, I don't trust these tags 100%, as they are sometimes incorrect or incomplete. However, they are an excellent base if you supplement them using the Append tags option in the automatic tagging tool.

2. DupeGuru and Krokiet (Duplicate Cleaning)

Having duplicate or very similar images in a dataset is fatal for training, and cleaning them by hand is a nightmare. These two tools make it much easier. Both do the same thing: they scan a folder and detect duplicate or visually similar images.

Why use both? I've noticed that DupeGuru sometimes detects duplicates that Krokiet misses, and vice versa. Using both gives me almost total certainty that the dataset is clean.

In Krokiet: Simply select the Similar Images section on the left, set your folder, and let it scan.

Adjustment: If it doesn't detect duplicates well, you can click the gear icon (⚙️) and adjust the similarity threshold.

3. Regional MultiCrop (Image Extraction)

This is a simple but incredibly useful tool I created to speed up extracting multiple images from a single one. It's perfect for images that contain multiple angles of a character or facial expressions, or for cropping individual panels from a manga. It saves a lot of manual cropping time.

4. Upscayl (Image Scaling)

Although some tools (like Dataset Processor) have scaling functions, they often depend on modern hardware. Upscayl is my preferred solution. It allows for batch upscaling, and it works wonderfully even with old hardware or VRAM limitations; it can also use the CPU.

5. Dataset Processor Desktop

This tool is the Swiss Army knife for processing datasets. It has many functions, but to keep the workflow fast, I focus on the following:

Gallery Page: Gives you a quick view of all images. You can click to select them and then delete them with a single button. It's ideal for spotting images that don't add value, are duplicates, or simply clash with the rest of the LoRA.

Inpaint Images: Lets you quickly erase text, logos, or unwanted elements. You just navigate between images, paint over what you want to remove, and move on to the next one.

Resize Images: Although this isn't strictly necessary, I usually rescale images to 1024px on their longest side. Tip: The main reason I do this is speed; inpainting at high resolutions takes a very long time and delays the process unnecessarily.

Generate Tags: Lets you automatically tag the entire dataset with the tagger of your choice. My threshold settings:

Few images: I use WDv3Large with a low threshold, 0.25.
Many images (100+): I raise the threshold to 0.4 or 0.5 to capture only the most relevant tags and avoid noise.
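To illustrate what that threshold does, here is a minimal sketch, assuming a WD-style tagger that returns tag/confidence pairs. The scores here are made up; the point is just that a higher cutoff keeps fewer, more reliable tags:

```python
# Hypothetical tagger output for one image: tag -> confidence score.
scores = {
    "1girl": 0.99, "solo": 0.97, "long_hair": 0.88,
    "smile": 0.47, "earrings": 0.31, "wind": 0.26, "fog": 0.12,
}

def keep_tags(scores: dict[str, float], threshold: float) -> list[str]:
    """Keep only tags whose confidence clears the threshold, strongest first."""
    return [tag for tag, conf in sorted(scores.items(), key=lambda kv: -kv[1])
            if conf >= threshold]

print(keep_tags(scores, 0.25))  # small dataset: looser cutoff, more tags
print(keep_tags(scores, 0.40))  # big dataset (100+ images): stricter, less noise
```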
Process Tags: Once tagged, this section gives you a quick overview of the most common tags. It has great options for cleaning duplicates, removing redundancies, and checkboxes to add or remove tags in bulk.

6. taggui

Although I generally prefer the integrated tagger in Dataset Processor for convenience, taggui is a fantastic, more specialized alternative.

Greater variety: It has many more tagger models than Dataset Processor.
Use cases: It's especially useful if you need something more specific or if you are training a LoRA with natural-language tagging (like Qwen).
Technical: It offers more technical options that may interest you if you want more granular control.

7. chaiNNer

chaiNNer is an advanced node-based image processor. Its capabilities are enormous and go far beyond this guide, but I use it for two specific tasks:

Dataset Augmentation: You can create workflows (chains) to rotate, flip, or make small changes to your images to artificially increase the dataset size. (Use chaiNNer to increase dataset quickly | Civitai)
Quick Batch Editing: If you notice that all your images need an adjustment in contrast, color, saturation, or brightness, you can apply that correction to the entire dataset at once.

8. Booru Prompt Gallery

Once the LoRA is trained, you have to test it! I made this simple webpage to get quick and varied prompts directly from sites like Danbooru. Web App: Booru Prompt Gallery V5.1 | Civitai

That's it! Stay hydrated and don't forget to blink.
Web App: Danbooru Prompt Gallery

Notice: The Aibooru API has been quite unstable in recent days. I recommend using the Danbooru API provider.

LINK: WEB

V5: A new API provider has been added. Just that, a new API provider.

Minor changes:
- Removed the option to choose between "Most popular" and "Most recent"; now "Most recent" is always used.
- Replaced the rating selector with a simple button that switches between safe and unsafe.

V4: New option and minor fixes.

I added a new input called "Tags to add". Unlike "Tags to remove", this one adds the tags you want at the beginning of the prompt. It's useful if you want to test a LoRA with a keyword, if you simply want to add your preferred quality tags, or if you want to add new elements to all prompts.

Minor changes:
- Added some more metatags.
- Disabling the "Characters" option now also removes tags that refer to a character (alternate costume, official costume, alternate hairstyle, etc.).
- Fixed an issue that left residual commas behind when removing quality tags.

V3: More options!

I've added the ability to use Aibooru.

Why Aibooru? Most of the images people upload to Aibooru come with all their generation metadata, including the prompt. So if you use prompts from Aibooru, it's much more likely you'll get results closer to the reference image.

Why didn't I use this from the start? Personally, I don't like how some people prompt (they use tags that don't exist, write messy prompts, or simply use natural language). That's why I went with Danbooru at first. But hey, now this is a tool for the community and not just for me, and the more options the better, right?

Options:
- Remove LoRa Tags: Sometimes prompts come with LoRAs included. This simply removes them if you're not interested.
- Remove Quality Tags: This removes the quality tags that are used, like "score_9_up, masterpiece, super ultra mega quality, etc."

Why can't I use Danbooru options on Aibooru? The way prompts are obtained on Aibooru is different from Danbooru. On Danbooru, the tags come from the site itself, where they're used to filter searches, making it easy to know which tag refers to a character, copyright, etc. Aibooru, on the other hand, pulls the raw prompt directly from the metadata, so the tags don't carry any additional information.

V2: I added many more options to better configure the prompt according to your preferences.

- Tags to remove: Lets you enter any tag you want removed from the final prompt. Don't want generations with blue jackets? Just type "blue jacket" and that tag will be removed from the prompt. (This does not filter the results you get from the API; you will still receive images that contain blue jackets.)
- Character: Lets you remove or add character tags to the prompt.
- Copyright: Lets you remove or add tags for the name of the franchise of the respective character/setting.
- Combine tags: Combines tags to reduce the size of the prompt. For example, if the prompt contains "skirt, white skirt, pleated skirt", this will combine everything into a single tag, "white pleated skirt" (see the sketch after this list).
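As a rough illustration of that combining logic (a hypothetical reimplementation, not the app's actual code): treat the shortest tag as the base noun and fold the modifiers of its variants into one tag.

```python
def combine_tags(tags: list[str]) -> list[str]:
    """Naive sketch: fold 'skirt', 'white skirt', 'pleated skirt'
    into 'white pleated skirt'. Not the app's real algorithm."""
    result = list(tags)
    for base in sorted(tags, key=len):  # try shortest tags as base nouns first
        variants = [t for t in result if t != base and t.endswith(" " + base)]
        if base in result and variants:
            # Collect the modifiers ("white", "pleated") in their original order.
            mods = [t[: -len(base) - 1] for t in variants]
            result = [t for t in result if t != base and t not in variants]
            result.append(" ".join(mods + [base]))
    return result

print(combine_tags(["skirt", "white skirt", "pleated skirt"]))
# ['white pleated skirt']
```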
As for minor changes:
- I changed the page colors so they wouldn't burn your eyes.
- I added more tags to remove (among them: web address, original, patreon logo).

V1: I've been creating LoRA models and checkpoints, and I was spending too much time crafting prompts to test them thoroughly. That's why I decided to develop this web app. At first it was just a personal tool, but I soon realized it could be useful to many more people, so I refined it to make it easy and comfortable for anyone to use.

What does this tool do?
- It generates prompts based on the tags from Danbooru posts.
- It automatically removes metadata and tags that restrict generation (for example, "white background" or "censor").
- It filters out artist tags and other irrelevant ones, so you can copy and paste the prompt directly.

Limitations:
- It won't always recreate the original image accurately, since it depends on the quality of the tags in each post.
- Occasionally, unwanted tags may appear. If you notice any, feel free to let me know!

Found a bug? Don't hesitate to tell me; I'm happy to fix them all.
Have feature suggestions? Absolutely! Your ideas are welcome.
Is the site down? The app runs on a free hosting plan, so sometimes resources get exhausted. I apologize in advance if that happens.
Why a web app and not a desktop app? I noticed many users access it from mobile devices, and I wanted it to be accessible to everyone, regardless of platform.

Want to support me? Thank you so much! You can help me by:
- Leaving a comment and rating.
- Using my models.

That's it! Stay hydrated and don't forget to blink.
