Training Parameters
Network Module: LoKr
Base Model: FLUX.1-dev (fp8)
Trigger words: raelyn
Repeats: 25
Epochs: 10
Clip Skip: 2
Text Encoder learning rate: 0.00001
UNet learning rate: 0.0002
LR Scheduler: cosine_with_restarts
Optimizer: AdamW8bit
Network Dim: 100000
Network Alpha: 1
Factor: 8
Gradient Accumulation Steps: -
Shuffle caption: -
Keep n tokens: -
Noise offset: 0.03
Multires noise discount: 0.1
Multires noise iterations: 10
conv_dim: 8
conv_alpha: 4
Batch Size: -
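
Below is a minimal sketch (not the actual training script used here) of how the LoKr settings above could be expressed with the LyCORIS wrapper API. The tiny module is only a stand-in for the FLUX.1-dev transformer, and the keyword names factor, conv_dim, and conv_alpha follow the kohya-style network_args convention, so treat them as assumptions if your LyCORIS version names them differently.

```python
import torch.nn as nn
from lycoris import create_lycoris, LycorisNetwork  # pip install lycoris-lora


class TinyBlock(nn.Module):
    """Stand-in for one transformer block; the real target is FLUX.1-dev."""

    def __init__(self) -> None:
        super().__init__()
        self.attn = nn.Linear(64, 64)
        self.mlp = nn.Linear(64, 64)


model = TinyBlock()

# Wrap every matching submodule with LoKr adapters (regexes are illustrative).
LycorisNetwork.apply_preset({"target_name": [".*attn.*", ".*mlp.*"]})

net = create_lycoris(
    model,
    1.0,                 # network multiplier
    linear_dim=100000,   # Network Dim: oversized so the factor drives the decomposition
    linear_alpha=1,      # Network Alpha
    algo="lokr",         # Network Module: LoKr
    factor=8,            # Factor
    conv_dim=8,          # conv_dim
    conv_alpha=4,        # conv_alpha
)
net.apply_to()

print("trainable LoKr parameters:", sum(p.numel() for p in net.parameters()))
```

Setting Network Dim to a very large value like 100000 is a common LoKr convention: the dimension is effectively unconstrained, so the Factor setting controls the size of the Kronecker decomposition. With 25 repeats and 10 epochs, kohya-style trainers end up running roughly (number of training images × 25 × 10) ÷ batch size optimization steps.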