Introduction to Beholder Vision
This is an initial test LoRA for Pony, derived from my Beholder Vision LyCORIS-LoCon, trained using the foundational parameters outlined in my Crash Course to On-Site Training.
Nov 28, 2024
Beholder Vision began as an experimental LyCORIS-LoCon, trained on 126 meticulously curated images of aesthetic anime art.
The concept: create “eye-candy” results, with flexibility to adapt to a variety of creative needs.
Usage Recommendations
Pony Ver. Parameters (Recommended as of Dec. 6, 2024):
Strength: 1.0–1.2 (based on extensive testing, I strongly recommend 1.05).
Sampling Method: DPM++ 2M SDE (Karras)
Steps: 25–35.
CFG: 7.
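The recommended settings above can be captured as a small sketch for scripting your own generations. This is illustrative only: the dictionary keys and helper name are mine, not taken from any particular UI or library, so map them onto whatever tool you actually use.

```python
# Recommended generation settings for the Pony version, as listed above.
# Key names are illustrative; adapt them to your own UI or pipeline.
PONY_RECOMMENDED = {
    "lora_strength": 1.05,               # tested sweet spot in the 1.0-1.2 range
    "sampler": "DPM++ 2M SDE (Karras)",
    "steps": 30,                         # any value from 25 to 35 works
    "cfg_scale": 7.0,
}

def within_recommended(strength: float, steps: int, cfg: float) -> bool:
    """Return True if the settings fall inside the recommended ranges."""
    return 1.0 <= strength <= 1.2 and 25 <= steps <= 35 and cfg == 7.0
```

A quick sanity check like `within_recommended(1.05, 30, 7.0)` can catch accidental drift (e.g. a strength of 1.5) before a long batch run.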
Changelog and Updates
Dec 6, 2024
Version 3o is here!
Full support for v-prediction models such as NoobAI-XL V-Pred-0.65S-Version, on which this version was trained.
Dec 4, 2024
Version 2.1o is now available! This marks a complete rework of the initial test version for Pony. It was trained for 5,490 steps, making it competitive with the original Beholder Vision.
Despite its extensive improvements, the new LoRA is just 54.8 MB, half the size of previous Pony iterations of the model, and vastly enhanced in every aspect.
Dec 2, 2024
The first version of Beholder Vision for FLUX.1 [dev], 1.3o, has officially been released!
Nov 30, 2024
I’m excited to announce Beholder Vision 1o (and its successor, 1.1o)!
1o debuts as an experimental GLoRA, while 1.1o returns to classic LoCon, delivering improved performance at a smaller file size.
Now natively compatible with other SDXL models, including NoobAI-XL Epsilon-pred 1.0-Version.
This version represents a significant leap forward over the 0.9beta, thanks to an upgraded training dataset and finely tuned parameters. (Details on the dataset and parameters will be shared if there’s interest.)