Having recently had the opportunity to try out a 3-day pro upgrade on Tensor.Art, I wanted to share my initial thoughts on the platform, specifically for training LoRA models. Coming from a background of using Civitai, I was curious to see how the experience would compare, and I've found some key differences worth noting.
My first impression is that Tensor.Art is a lot more flexible when it comes to the training process. While many platforms encourage using a large number of images for optimal results, Tensor.Art offers a viable, effective option to train a LoRA with as few as 10 images, and the results are surprisingly good.
This is a game-changer for creators who may have very limited reference material for a specific character or style. The platform's pricing model also reflects this flexibility: it costs less to train with fewer images, making it an accessible option for quick or niche projects. The cost does scale with the number of images, though, so larger datasets mean more expensive training runs.
While the flexibility is a major advantage, my main issue with the LoRA models trained on Tensor.Art is the file size. I've found that the resulting LoRAs are often quite large, typically exceeding 400MB. This stands in stark contrast to my experience with Civitai, where the LoRAs I've trained usually hover around the 200MB mark. The size difference is significant and can quickly become a concern for storage and management.
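The gap is probably down to training defaults rather than anything intrinsic to the platform: a LoRA checkpoint's size is roughly the parameter count of its low-rank factors times the bytes per parameter, so a higher default network rank, or saving in fp32 instead of fp16, can double or quadruple the file. Here's a back-of-the-envelope sketch; the layer shapes, layer count, and ranks are made-up illustrative values, not Tensor.Art's or Civitai's actual settings:

```python
def lora_file_size_mb(layer_shapes, rank, bytes_per_param=2):
    """Rough LoRA checkpoint size in MB.

    Each adapted weight W (d_out x d_in) gains two low-rank factors:
    A (rank x d_in) and B (d_out x rank), so rank * (d_in + d_out)
    extra parameters per layer. bytes_per_param=2 assumes fp16.
    """
    params = sum(rank * (d_in + d_out) for d_out, d_in in layer_shapes)
    return params * bytes_per_param / (1024 ** 2)

# Hypothetical: 264 adapted projection layers, all 1024x1024
shapes = [(1024, 1024)] * 264
print(round(lora_file_size_mb(shapes, rank=32)))   # 33 (MB)
print(round(lora_file_size_mb(shapes, rank=128)))  # 132 (MB) - 4x the rank, 4x the size
```

By this arithmetic, a 400MB file next to a 200MB one could be nothing more than a doubled default rank or fp32 weights instead of fp16, so it may be worth digging through the training settings before writing the platform off.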
All in all, my initial experience with Tensor.Art has been a mix of pros and cons. The platform is highly flexible and cost-effective for smaller, more limited training datasets. However, this seems to come at the cost of a much larger file size for the final LoRA model, which is a trade-off that creators will need to consider.