Live Wallpaper Plus — Wan I2V 720P (Exclusive Tensor.Art)
🧾 Model Description
This LoRA is designed to recreate the dynamic and immersive style of live wallpapers, reminiscent of the League of Legends launcher intros. It is also perfect for lofi loops, cinematic ambient scenes, and AI-powered visual storytelling. 🎮🎧
This is the second generation of the original Live Wallpaper LoRA, trained from scratch on a larger, hand-curated dataset sourced from 4K video. While the first version focused on general motion and parallax, this version places special emphasis on particles, atmospheric effects, enhanced visual detail, and camera movement, making it ideal for dynamic loops with a magical or cinematic ambiance. It is still in development and needs to be trained for at least 50 epochs, so this version should be considered experimental.
Tensor.Art limits generation with this 720P model to 480P, which makes no sense to me: you are not generating at the best possible quality, and generating at 480P with a model trained mostly on 720P data defeats its purpose.
🧩 Training & Specs
- Trigger word: `lvwprx` (must be included in the prompt)
- Dataset: 575 carefully selected 4K videos, downscaled to 256p
- Captions: focused on dynamic motion (e.g. hair, particles, parallax, ambient effects, camera movements)
- Frames per sample: 49
- LoRA Dim: 64
- Training resolution: 256p
- Hardware: Trained using H100 (80GB) + RTX 5090 (32GB)
Note: It took 3 days to tune the dataset, train, test hyperparameters, and evaluate the model. This version still needs a few more days of work before it is fully improved: it must be trained for more epochs, since I used 3 times more data than the previous model but trained for fewer epochs. I will spend the next few days improving it.
⚙️ Usage Guidelines
- LoRA strength: 0.2 to 1.2 (default = 1.0)
- Generation resolution: must be 720P (e.g. 1280x720 or 720x1280), native to Wan I2V 720P
- Input image (if used):
  - Should be exactly, or as close as possible to, 720P
  - Must be sharp and detailed — avoid blurry or compressed images
  - Pay special attention to fine details like hair, fabric, and background layers
- Prompting tip: clearly describe intended motion, parallax, particles, hair flow, or ambient effects — this strongly guides the animation
- Duration: between 49 and 81 frames (~3 to 5 seconds at 16 FPS)
- Compatible with other LoRAs — tune strength and merging settings as needed
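These settings map directly onto the usual Wan I2V workflow in ComfyUI. Purely as an illustration, here is a minimal diffusers sketch, assuming diffusers' WanImageToVideoPipeline and its LoRA loader accept this file; the base model id and LoRA filename below are placeholders, not values from this model card:

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Assumed base repo; swap in whichever Wan I2V 720P checkpoint you actually use.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("live_wallpaper_plus.safetensors")  # hypothetical filename
pipe.to("cuda")

image = load_image("input_720p.png")  # sharp, detailed, at or close to 720P
prompt = ("lvwprx, anime style, flowing hair, drifting particles, "
          "parallax background, ambient glow, smooth animation, seamless loop")

frames = pipe(
    image=image,
    prompt=prompt,
    height=720,      # native 720P (1280x720 here; 720x1280 also works)
    width=1280,
    num_frames=81,   # 49 to 81 frames, roughly 3 to 5 seconds at 16 FPS
).frames[0]
export_to_video(frames, "live_wallpaper.mp4", fps=16)
```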
✍️ Prompt Generation Template (Optional)
----------------------------------------
If you want to generate high-quality prompts for this LoRA, you can use the following template with any vision-capable LLM, along with an input image. For this template, the generated prompt should always be in tag format.
You must always wait for an input image before generating output.
Do not generate captions or tags unless an image has been provided.
You are a captioning assistant for video generation using anime-style models like Wan 2.1.
Given an image, you must generate one output optimized for animated video generation with smooth loops and subtle movement.
Your task is to output a single line:
🔹 Tag-style Caption (lvwprx + compact tags)
On that line, list 8–14 tags separated by commas, starting with the trigger word lvwprx. These define motion, style, and environment.
✅ Example:
lvwprx, anime style, glowing eyes, flowing scarf, rising embers, twilight sky, parallax effect, static camera, smooth animation, seamless loop
⚠️ Important: return only the tag line; never combine it with a sentence-style caption.
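To automate this, here is a minimal sketch that sends the template plus an image to a vision-capable LLM through the OpenAI Python client; the model name and file path are assumptions, and any vision LLM with an equivalent API works the same way:

```python
import base64
from openai import OpenAI

# Paste the full template above into this string.
TEMPLATE = "You are a captioning assistant for video generation... (full template here)"

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("input_image.png", "rb") as f:  # hypothetical input image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": TEMPLATE},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # e.g. "lvwprx, anime style, ..."
```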
🙏 Special Thanks
---------------------
Special thanks to the users below for providing high-quality images used in the showcase examples. Your support helped shape the visual direction of this model.
L9
🧾 LTXV I2V 0.9.7 (13B) – Experimental v1
This LoRA is trained for the LTXV I2V model version 0.9.7 (13B parameters), with the goal of generating smooth, seamless live wallpaper-style video loops with detailed, localized motion — such as hair fluttering, blinking, ambient particles, and parallax — while keeping rigid structures like armor, weapons, chairs, and body parts perfectly static.
This is the first experimental version of the LoRA, trained on a modest but curated dataset using conservative hyperparameters and high-precision captioning. If better techniques emerge (e.g., improved prompting, training without captions, or more stable convergence), I plan to release future, higher-quality versions.
🔧 Training Details:
Base Model: LTXV I2V v0.9.7 (13B)
Training Tool: Diffusion Pipe (unofficial LTXV I2V support)
Dataset: 140 handpicked videos, 512px resolution, 24fps, 49 frames per clip
Epochs: 250 (~35,000 steps)
LoRA Dim: 32
Optimizer: AdamW8bit
Batch Size: 1
Prompts: Used during training, generated via Qwen2.5-VL-7B with a strict motion-focused template
Initial Loss: High — LTXV I2V is known to require many steps before reaching stability
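As a sanity check on the step count: with batch size 1, one epoch is one optimizer step per clip, so the numbers above line up (assuming no gradient accumulation, which the specs do not mention):

```python
clips, epochs, batch_size = 140, 250, 1
steps_per_epoch = clips // batch_size    # 140 steps per epoch
total_steps = steps_per_epoch * epochs   # 35,000, matching the reported ~35,000
print(total_steps)
```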
🧠 Prompting Recommendation:
This LoRA is sensitive to prompt quality and seed variation. Short prompts often lead to:
Deformed or “rubber-like” rigid objects
Motion in areas that should remain static
Unstable background or scene jitter
✅ Use long, structured prompts that clearly describe motion and stillness. For best results, use the custom ComfyUI node: Ollama Describer (supports structured motion captions via LLMs)
Alternatively, the official LTXV Prompt Enhancer node can also help improve prompt effectiveness.
✍️ Prompt Generation Template
Use the following prompt with any vision-enabled LLM (like Qwen-VL, GPT-4o, or Gemini). It is designed to describe only what moves, and explicitly what must stay still, to avoid motion artifacts in rigid structures.
Prompt Template:
You are an expert in motion design for seamless animated loops. Given a single image as input, generate a richly detailed description of how it could be turned into a smooth, seamless animation. Your response must include:
✅ What elements should move:
– Hair (e.g., swaying, fluttering)
– Eyes (e.g., blinking, subtle gaze shifts)
– Clothing or fabric elements (e.g., ribbons, loose parts reacting to wind or motion)
– Ambient particles (e.g., dust, sparks, petals)
– Light effects (e.g., holograms, glows, energy fields)
– Floating objects (e.g., drones, magical orbs) if they are clearly not rigid or fixed
– Background ambient motion (e.g., fog, drifting light, slow parallax)
🚫 And explicitly specify what should remain static:
– Rigid structures (e.g., chairs, weapons, metallic armor)
– Body parts not involved in subtle motion (e.g., torso, limbs unless there’s idle shifting)
– Background elements that do not visually suggest movement
⚠️ Guidelines:
– The animation must be fluid, consistent, and seamless, suitable for a loop
– Do NOT include sudden movements, teleportation, scene transitions, or pose changes
– Do NOT invent objects or effects not present in the image
– Do NOT describe static features like colors, names, or environment themes
– The output must begin with the trigger word: lvwpr
– Return only the description (no lists, no markdown, no instructions)
✅ Example output:
lvwpr The character’s hair flows gently with ambient wind, while glowing particles drift across the background. Her eyes blink slowly, and her outfit’s ribbons sway softly. The armor plates, weapons, and mechanical chair remain perfectly still.
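For a local workflow in the spirit of the Ollama Describer node mentioned above, a minimal sketch with the ollama Python client looks like this; the model tag and image path are assumptions:

```python
import ollama

# Paste the full template above into this string.
TEMPLATE = "You are an expert in motion design for seamless animated loops... (full template here)"

response = ollama.chat(
    model="qwen2.5vl:7b",  # any vision model you have pulled into Ollama
    messages=[{
        "role": "user",
        "content": TEMPLATE,
        "images": ["input_image.png"],  # hypothetical input image path
    }],
)
print(response["message"]["content"])  # e.g. "lvwpr The character's hair flows..."
```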
⚙️ Usage Guidelines:
Trigger word: none required (control is prompt-based; the template above still prepends lvwpr)
Suggested LoRA Strength: 0.8 – 1.2
Recommended Duration: 49–81 frames (~3–5 seconds)
Resolution: Generate at 512px or upscale later
Style: Anime, stylized, or cinematic images with clean structure work best
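As with the Wan version, these settings can be illustrated with a minimal diffusers sketch, assuming diffusers' LTXImageToVideoPipeline and its LoRA loader accept this file; the repo id and LoRA filename are placeholders:

```python
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Assumed base repo; point this at the LTXV 0.9.7 (13B) checkpoint you use.
pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("ltxv_live_wallpaper.safetensors")  # hypothetical filename
pipe.to("cuda")

image = load_image("input_image.png")
prompt = ("lvwpr The character's hair flows gently with ambient wind, while glowing "
          "particles drift across the background. Her eyes blink slowly. The armor "
          "plates, weapons, and mechanical chair remain perfectly still.")

frames = pipe(
    image=image,
    prompt=prompt,
    width=512,       # generate around 512px and upscale later if needed
    height=512,
    num_frames=81,   # 49 to 81 frames, roughly 3 to 5 seconds
).frames[0]
export_to_video(frames, "ltxv_loop.mp4", fps=24)  # training clips were 24 fps
```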