TensorArt

596591019210834840
Online image & GIF generation, model training, hosting, ComfyUI workflows, and more!
2K Followers
1 Following
8M Runs
923 Downloads
1.7K Likes
32.1K Stars

Articles

TensorArt Content Policy

At TensorArt, we believe creativity thrives best in environments that are both free and responsible. As we evolve into a fully SFW (Safe For Work) platform, this policy outlines what content we restrict—and more importantly, why—to ensure our community remains vibrant, legally compliant, and welcoming to creators of all backgrounds.

What Content Is Restricted on TensorArt

1. NSFW Content
Any content depicting explicit sexual acts, nudity, or pornographic material is not permitted for public visibility or generation. This includes:
- Images, models, or posts with explicit sexual themes
- Prompts and generations that produce NSFW outputs

2. Child Pornography and Exploitation
We maintain absolute zero tolerance for child pornography or any sexual content involving minors.
Enforcement: We reserve the right to delete any content involving child pornography, and accounts posting such content will be permanently banned.

3. Celebrity Content
Models, images, or content depicting real celebrities are prohibited across TensorArt. "Celebrity" includes entertainment stars, influencers, athletes, political leaders, business executives, and historically recognizable figures.
Enforcement: Celebrity content will be immediately hidden. Repeated violations may result in account termination.

4. Illegal Content
- Criminal or unlawful material
- Content promoting extreme violence, hate speech, or harassment

Other Regulated Content

1. Child Safety Related Content
Models based on real, identifiable children will be hidden, even if non-pornographic, to protect children's privacy and safety. Any content that exploits, endangers, or depicts minors in inappropriate contexts will also be hidden.

2. Real Person Models
Models of specific, identifiable individuals are regulated to protect personal portrait rights. Distribution may be restricted if the model:
- Produces insulting, degrading, or harassing content
- Generates pornographic or overly revealing imagery
- Violates an individual's right to control their own likeness
If you encounter a model depicting you or a loved one that you find offensive, please contact us for removal.

3. Protected Intellectual Property
Content featuring certain corporate IPs may face restrictions, including select properties from Nintendo and Disney. Related models and posts may experience reduced visibility as we navigate copyright protection while preserving creative freedom.

Thank You
Thank you for your understanding and for being part of our creative community. These policies help ensure TensorArt remains a safe, sustainable platform where everyone can create freely and responsibly.
Have questions about how these changes affect your work? Contact us on Discord—our team is ready to assist.

Create boldly,
The Tensor Team

Hunyuan Video 1.5 User Guide

This guide will walk you through how to fully harness Hunyuan Video 1.5 on TensorArt—from the basics of text-to-video and image-to-video all the way to advanced control over style, mood, camera movement, and lighting. Even without relying on external prompt-rewriting tools, you'll learn how to write well-structured prompts that produce high-quality, cinematic results and unlock your full creative potential.

Basic Features

Text-to-Video
Overview: Simply input a text description and the model will generate a matching video. For better control over the output, we strongly recommend using structured prompts. Just like professional creators, you can combine multiple "key elements" to shape the result.
Core Formula: Prompt = Subject + Action/Motion + Scene + [Shot Size] + [Camera Movement] + [Lighting] + [Style] + [Mood]
Items in brackets [ ] are optional, and you can freely mix and match them depending on your creative goals. (A small prompt-builder sketch at the end of this guide shows one way to assemble these pieces.)
Basic Usage: Subject + Action + Scene
Advanced Usage: Add more control tags as needed, e.g.: Subject + Action + Scene + Style + Camera Movement + Lighting
Prompt Examples:
A mushroom grew out of the grass.

Image-to-Video
Overview: Upload one image plus a text prompt, and the model will generate a video starting from your image. The first frame is taken directly from the uploaded image, while the subsequent frames evolve according to the text instructions you provide.
Core Formula: Prompt = Subject Motion + Scene Motion + [Camera Movement]
Items in brackets [ ] are optional and can be added for more cinematic control.
Prompt Examples:
The girl in the scene slowly raises her head, her gaze fixed on the upper right of the frame. The camera follows her gaze, gradually revealing a Rococo-style window with a frame adorned with intricate carvings and gold lines, the glass reflecting the soft light from inside. The girl's headscarf and earrings sway slightly as she moves, and the edge of her collar wrinkles subtly with her movements.

Advanced Controls

Style Control
You can guide the overall visual style of the generated video by adding style-related keywords to your prompt.
Realistic / Cinematic Style:
A tired middle-aged Asian man, wearing a pilling gray sweater, with fine wrinkles at the corners of his eyes, looks worriedly out the window. Cinematic lighting, realist style.
Animation / Painting Style:
This low-poly 3D animation features a gigantic, geometrically shaped whale swimming slowly through an underwater world composed of sharply defined corals and seaweed. Crystal-like bubbles rise around it, and soft beams of sunlight pierce the water's surface, creating ever-changing patches of light that illuminate the entire scene. The upward-looking perspective showcases the ocean's depth and grandeur, creating a tranquil atmosphere imbued with geometric aesthetics.

Lighting Control
Core Principle: Light is the soul of atmosphere. If you know how to describe lighting, you gain control over the emotional tone of the entire video.
Common techniques for describing lighting:
- Lighting Style (e.g., soft light, hard light, neon lighting)
- Light Direction (e.g., top-down lighting, side lighting)
- Light Quality (e.g., diffused/soft, harsh, spotlight)
- Shadow Details (e.g., deep shadows, soft gradients, high-contrast shadows)
- Color Temperature (e.g., warm golden-hour tones, cool daylight, sunset glow)
- Reflections (e.g., reflective highlights on water, glass, or metal surfaces)
- Silhouettes & Outlines (e.g., backlit subjects creating dramatic silhouettes or rim lighting)
Examples:
A detective answers a phone call in a smoke-filled office. Afternoon sunlight filters through the blinds, casting sharp, parallel streaks of light on him and the opposite wall. As he moves, the light and shadow constantly cut across the screen, creating a cinematic sense of destiny.

Camera Movement Control
By adding standard camera-movement keywords to your prompts, you can significantly enhance the cinematic quality of your generated videos. Below is a reference library of commonly used camera-movement terms.
Reference Camera Movement Library
Examples:
Text2Video:
A professional freestyle skier, dressed in a futuristic fluorescent ski suit, completes a jump in mid-air and lands atop a ski jump in a snow park. The backdrop is a snow-covered mountain and sky tinged with pink and purple by the sunset. Using a panoramic lens, the camera slowly pans around him in a 360-degree arc, capturing his body's rotation and posture from all angles. The lighting is backlighting at sunset, outlining him and the swirling snowflakes with a dreamlike golden contour. The overall effect is cinematic high-definition, in the style of an extreme sports commercial, creating an atmosphere of transcendence, pushing limits, and showcasing the beauty of human potential.
Image2Video:
The camera follows a girl riding a motorcycle, her hands gripping the handlebars tightly, her body leaning forward as the motorcycle speeds forward, its wheels kicking up dust. Huge cacti flank the road on the right side of the frame, disappearing into the background. The camera then slowly pulls back, the girl and motorcycle gradually shrinking in size, with a convoy of trucks following closely behind on the dusty road.

Bilingual Text Rendering Inside Video
Hunyuan Video 1.5 is capable of generating clear, high-quality text directly within the video frames, supporting both Chinese and English.
How to use: Include the text you want to render inside quotation marks in your prompt.
- For Chinese prompts: use Chinese quotation marks — “ ”
- For English prompts: use English quotation marks — " "
This ensures the model correctly recognizes and renders the exact text you specify.
Image2Video example:
The camera focuses on a woman in a white shirt, standing quietly in the center. Suddenly, she begins to breakdance, her body swaying rhythmically to the silent beat. Her arms swing, her steps are light, and her short hair sways gently with the dance. Then, the words "Hello, World" appear in the upper left corner of the screen.

Additional Advanced Controls & Notes
Supported Languages: The model currently supports Chinese and English prompts.
Video Aspect Ratios: Hunyuan Video 1.5 supports multiple aspect ratios, including 16:9 (landscape), 4:3, 1:1 (square), 3:4, and 9:16 (portrait). Please select the desired aspect ratio before generating.
Keep It Simple: Use clear, direct vocabulary and straightforward sentence structures whenever possible.
Prompt Component Breakdown

More Creative Use Cases & Examples

Strong Instruction Responsiveness
Hunyuan Video 1.5 natively supports long-form Chinese and English prompts, enabling it to understand and interpret complex semantic structures—such as lighting, composition, spatial layout, and more. It automatically maps these semantic details to video parameters, allowing for:
- Continuous camera movements
- In-frame text rendering
- Combined or sequential actions
- Multi-instruction generation with high accuracy
This makes it possible to create highly controlled, cinematic videos using only natural language.
Example:
The hiker begins walking forward along the trail, causing the water bottle to swing rhythmically with each step. The camera gradually pulls back and rises to reveal a vast desert landscape stretching out ahead, while the sun position shifts from afternoon to dusk, casting increasingly longer shadows across the terrain as the figure becomes smaller in the frame.

Smooth & Natural Motion Generation
Hunyuan Video 1.5 produces smooth, physically coherent movements for both characters and objects. Motion remains natural and distortion-free, even in fast-paced shots or highly dynamic scenes.
Example:
A cake-man sits on a chair. Then, he reaches down and breaks off a piece of cake from his own leg, causing a few crumbs to fall as a visible chunk goes missing from the leg. Next, he lifts the broken piece toward his mouth, opens his mouth, and takes a bite, chewing a few times. The table and the wall in the background remain completely still.

Realistic Physics at Play
Generate fluid, natural phenomena and rigid physical interactions with pinpoint accuracy, bringing your scenes to life with immersive realism and dynamic energy.
Example:
The video captures a basketball going through the hoop. The subject is the orange ball. Initially, it arcs through the air. Then, it passes through the net without touching the rim (swish). Next, the white net whips up violently. The background is the blurred crowd. The camera shoots from a low angle under the basket. The lighting is focused arena lighting. The overall video presents a satisfying-moment style.

Cross-Dimensional Generation
Hunyuan Video 1.5 enables seamless cross-dimensional creation, bringing virtual characters and elements—like cartoon figures or special effects—into real-world scenes. The model precisely interprets complex semantics, lighting, and material textures, ensuring virtual elements integrate naturally with reality for a fully immersive experience.

Action Logic & Breakdown
Hunyuan Video 1.5 supports action decomposition, allowing you to generate complex movements by describing the subject's motion in discrete states and leveraging precise visual cues.
Core Formula: Prompt = Scene Setup + Temporal Action Breakdown + Key Details
Example:
Scene setup: Static overhead shot of a printed photo of a tree trunk lying on a wooden table.
Action sequence:
1. A real human hand enters, places a single pinecone on the paper next to the tree hole, and exits immediately.
2. A realistic 3D squirrel emerges from the 2D hole in the photo. The squirrel comes out empty-handed.
3. The squirrel sniffs the pinecone sitting on the paper, looks curious, blinks, and tilts its head.
4. The squirrel reaches out and grabs that specific pinecone from the table.
Key details: Seamless interaction between the real world and the photo, surreal VFX, squirrel paws are empty initially, heavy weight perception on the pinecone.
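To make the text-to-video core formula concrete, here is a minimal, hypothetical Python sketch; the function name and parameters are illustrative only and are not part of any TensorArt or Hunyuan API. It simply joins the required parts with whichever optional control tags you supply:

```python
# Minimal sketch of the structured prompt formula described above:
#   Prompt = Subject + Action/Motion + Scene + [Shot Size] + [Camera Movement]
#            + [Lighting] + [Style] + [Mood]
from typing import Optional

def build_t2v_prompt(
    subject: str,
    action: str,
    scene: str,
    shot_size: Optional[str] = None,
    camera_movement: Optional[str] = None,
    lighting: Optional[str] = None,
    style: Optional[str] = None,
    mood: Optional[str] = None,
) -> str:
    """Join the required parts with any optional control tags, skipping empty ones."""
    parts = [subject, action, scene, shot_size, camera_movement, lighting, style, mood]
    return ", ".join(p.strip() for p in parts if p)

# Example usage:
prompt = build_t2v_prompt(
    subject="a professional freestyle skier in a fluorescent ski suit",
    action="completes a jump in mid-air and lands atop a ski jump",
    scene="a snow park at sunset",
    camera_movement="the camera slowly pans around him in a 360-degree arc",
    lighting="backlighting at sunset with a golden rim light",
    style="cinematic high-definition, extreme sports commercial",
)
print(prompt)
```

The same pattern works for the image-to-video formula: keep Subject Motion, Scene Motion, and the optional Camera Movement as separate pieces and join them only when submitting the prompt.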

TensorHub - New Home for Your Creations & TA Policy Change

TensorHub: Your Uncensored Creative Studio

We recognize that artistic expression takes many forms, and we want to support your creative freedom to the fullest extent. That's why we're launching TensorHub — a separate platform where mature content is welcome (with the same hard lines on child safety and celebrity content). Think of TensorArt as your broad public stage and TensorHub as your specialized creative arena. Both serve creators, just with different audiences and freedoms. 🥰

Key Details About TensorHub:
- Expanded creative freedom for mature and NSFW themes in both Community and Workspace.
- All users on TensorHub enjoy full Pro membership benefits, including member-exclusive parameters, queue-free generation chances, Pro segment access, and more.
- TensorHub operates on Tokens instead of Credits. Tokens are priced higher than credits and aren't available through daily free credits or missions, but you have a one-time opportunity to convert your credits to tokens.
- Your TensorArt and TensorHub accounts are content-synced. Details can be found below.

Notice: TensorHub has just launched and we are currently conducting a comprehensive project re-audit. During this period, NSFW content may temporarily not appear on profiles or in search results, but it remains fully accessible, shareable, and usable via direct URLs. We are working diligently to complete the re-audit as quickly as possible, after which NSFW content will be searchable and displayed throughout the platform. This precaution helps prevent the distribution of illegal content such as child exploitation and celebrity pornography—thank you for your understanding.

❗️ Special migration offer for existing users:
TensorHub operates on Tokens instead of Credits. Due to operational costs, support for more open content creation (especially videos), and automatic Pro benefits for all TensorHub users, token prices are higher than credit prices. Thank you for understanding. As a loyal TensorArt user, you get a better rate to convert your Credits to TensorHub Tokens.
Computing Power Conversion:
- 2:1 ratio: You can convert permanent TA credits into TH tokens at a 2:1 ratio (a significantly better rate than standard purchases). [Check your email / system notification for the conversion link] (will be sent before 11.25)
- Only one conversion per user — this is a one-time opportunity to migrate your balance.
- Conversion deadline: You have until December 15, 2025 to use this offer.

⭐️ Black Friday bonus:
- Black Friday period: November 27–30.
- Tokens aren't part of our Black Friday sale — TensorHub will have its own discounts before Christmas.
- For the cheapest tokens: buy TensorArt's Black Friday sale bundle first, then use your one-time conversion bonus. This stacks both discounts for maximum savings.

Your TensorArt and TensorHub accounts are connected. Here's how it works:
- Your creations sync both ways: models, posts, widgets, workflows, and other content you publish will appear on both platforms.
- Your interactions sync both ways: saved models, liked posts, and tasks in your Library are accessible on both platforms for seamless use.
- Task visibility is one-way: tasks from TensorArt sync to TensorHub, but TensorHub tasks remain invisible on TensorArt.
- Compute power is separate, but convertible: balances don't sync, but you get one chance to convert TensorArt credits to TensorHub tokens.
- Creator Dashboard & GPU Fund are separate: different incentive structures mean separate wallets. TensorHub's Creator Dashboard is still being built—your earnings there are being recorded but aren't visible yet. Thanks for your patience!

Please note that our non-negotiable rules remain:
- Child Safety: We maintain absolute zero tolerance for child pornography or any abusive content involving minors. Posting such content will result in account termination and potential legal action. Additionally, models based on real-life children (specific, identifiable minors) will be hidden to protect their privacy and safety, even if non-pornographic.
- Celebrity Content: Celebrity-related content is still prohibited on TensorHub.
- Real Person Models: To protect personal portrait rights, models of specific real-life individuals may face restrictions if they are insulting, pornographic, or overly revealing.

TensorArt Going SFW
Meanwhile, TensorArt will be introducing some changes. Starting 2:00 AM (UTC), Nov 27, 2025, TensorArt is evolving into a fully SFW platform. Don't worry—your creations are safe. To keep building a vibrant, safe, and sustainable creative community, we're updating TensorArt's content guidelines and introducing a dedicated space for creators of all styles. Here's everything you need to know.

What's Changing on TensorArt
TensorArt is evolving into a fully SFW (Safe For Work) creative hub. Moving forward, all NSFW content will be hidden from public view on the platform.
Update date: SFW restrictions officially go live on November 27, 2025, 2 AM (UTC).
What you may notice after this update:
- Workspace will block NSFW prompts and automatically censor NSFW images that are generated. Important: any credits used for blocked generations are non-refundable—please ensure your prompts comply with our SFW guidelines to avoid losing credits.
- NSFW content is no longer visible to the public on TensorArt. (It will not be deleted and still remains visible to you.)
- If your content (project/image) is hidden from the public, you will receive a system notification so that you can make modifications.
- If your content has been flagged with a visibility warning, it's likely marked as NSFW. To restore visibility on TensorArt, please update your creations to align with our guidelines. Of course, you can also choose to leave them as-is—they simply won't appear on TensorArt, but may be visible on TensorHub, our new Creative Studio.

Your Next Steps
- Review your TensorArt creations and adjust any flagged content if you want it visible there.
- Curious about TensorHub? Head over and explore. It might be the perfect fit for your style.
- Thinking about conversion? The link is in your system notification/email. Remember: one shot, best value with the Black Friday sale. (The system notification will be sent within 24 hours.)

For Creators:

TensorHub: Generation-Focused Features
TensorHub prioritizes powerful generation capabilities over training infrastructure. Here's what this means:
- No online training or ComfyUI workflow builder—these features currently aren't supported on TensorHub. You can still use them on TA.
- No dedicated Workflows or Articles sections.

Content Visibility Across Platforms
- On TensorHub: mature content can be generated, displayed, and shared, subject to our strict policies against child exploitation and celebrity content.
- Cross-platform sync: your models sync bidirectionally between both platforms. NSFW content uploaded to TensorHub will be visible there but hidden on TensorArt. You can adjust your content's visibility based on which audience you want to reach.
⚠️ Critical warning: if your model is very likely to generate NSFW content (even with no NSFW prompts), TensorArt users who run it risk having outputs censored—even though the model itself remains accessible. To help avoid this, we will develop a warning system: if a model frequently generates censored content, we'll add a risk tag to its page. This feature is launching soon.

🚀 Creator Incentives
TensorHub will launch its own creator incentive system—direct cash rewards for your work. This is being built now and will go live as soon as possible. Can't wait? Join our Discord TH Creator Migration Event to earn tokens before the official system launches! 👉 Stay tuned to the #hub-hsf channel for upcoming activity rules and details. Happy creating!

Q&A (kept updated)
Comment on this doc or ask in our Discord channel—we'll compile the most common questions to help everyone!

About TensorHub:
Q: Will TensorHub offer daily free credits or task rewards?
A: Unfortunately not—TH tokens can only be purchased. However, we'll provide tokens to all current TensorArt users, so you can try the platform.
Q: Can I convert my TensorArt Pro membership to TH tokens?
A: TA membership can't be directly converted. But users who purchased membership before November 27, 2 AM (UTC) will receive daily tokens on TH during their membership period (while keeping TA's 300 daily credits).
Q: Will I earn TH tokens when others run my models on TensorHub?
A: Not tokens—we've built a separate creator incentive system for TH that earns you direct cash rewards (with higher rates than TensorArt). This launches soon, but you can earn tokens now by joining our Discord TH Creator Migration Event! 👉 Stay tuned to the #hub-hsf channel for upcoming activity rules and details.

About TensorArt Going SFW:
Q: Will my models involving NSFW be deleted from TensorArt?
A: No. Your creations involving NSFW will be hidden from others on TA, but they will not be deleted and remain visible to you. You can still manage them. And since your creations sync across both platforms, users will be able to see them on TensorHub.
Q: What if I lose credits using an NSFW model on TA that looks normal?
A: We're developing a warning system—if a model frequently generates censored content, we'll add a risk tag to its page. This feature is launching soon.
Q: Can I see tasks I create on TH in TA?
A: Unfortunately not. While tasks sync between platforms, it's one-way only: TA tasks appear in TH, but TH tasks remain invisible in TA to preserve its SFW environment.

More questions? Contact us on Discord—we've set up a channel for everyone to discuss. We're here to support your creative journey, wherever it leads. Thanks for being part of the Tensor family.

Create freely,
The Tensor Team

WAN2.5 Pricing Adjustment Notice

Dear Tensorians,

We need to be transparent: WAN2.5 runs on APIs and is relatively expensive, and we're currently operating at a significant loss. To keep the service available, we will have to increase WAN2.5 task consumption by 10% for 1080p and by 65% for 480p & 720p, effective 2025.11.13.

This decision came after exhausting all possible alternatives; we genuinely wish we didn't have to make this change. Even so, our WAN2.5 API will still be the lowest-priced you'll find anywhere.

Your creativity and support mean everything to us, and we're grateful for your understanding as we work to keep WAN2.5 sustainable for the long run. Thank you for sticking with us through this. 💙

And of course, once WAN2.5 is open-sourced and we're no longer paying API fees, we'll drastically reduce task consumption and offer a much cheaper price.

Library: your permanent, private vault for any Task & Media

Update Date: 2025-10-24

Say hello to Library: your permanent, private vault for every task and media file you love. 🥰

What Library Provides:
1️⃣ Save & manage any task: every image, video, and audio clip perfectly organized and instantly reusable.
2️⃣ Forever-safe & private: saved on creation, downloadable anytime, with zero risk of loss.
3️⃣ Pixel-perfect: stored at original resolution with zero compression.
4️⃣ Store assets: your go-to images and audio for lightning-fast reuse across projects.
5️⃣ Generous space: every user gets 2 GB free. Upgrade to Pro and unlock a total of 20 GB.

How to Use:
1️⃣ Find Library in the left-hand sidebar; click it to view and manage every image, video, audio clip, or other asset you've saved.
2️⃣ How to add to Library? Hover over any image, video, or audio clip in the Workspace → click the “+” that appears → it's instantly archived in Library.
3️⃣ Filter by media type or by the date the task was generated. Use Select (top-right) to bulk-manage items.
✨ Pro tip: the Search box instantly finds any asset by the prompt you used or the custom name you gave it. (We will allow search by model name later 😉)
4️⃣ All data appears on the right: the action bar underneath holds everything you want to do—remix, edit, I2V, post, download, or delete. Rename an asset anytime to keep things tidy and find it instantly by that name in search.
5️⃣ Select from Library: in I2V, I2I, and other generation modes that require an input other than prompts, you can now select the input from Library.

Coming next:
1️⃣ Batch-save multiple tasks in one click (coming next week). Once this is live, you can click Manage, then batch-select and add to Library.
2️⃣ Upload local assets straight to Library.

❗️ Important Notice: After this release, newly generated tasks will auto-expire after 7 days (30 days for members), but any task created before this update stays untouched. So remember to add your favorite tasks to Library before they disappear!

✨ Start building your personal creative vault and explore everything we offer now~ If you find any bugs or have suggestions, contact us on Discord.

Qwen Prompting Guide - Best Ever!

In this guide, we'll explore key strategies for prompt design and ways to improve the quality and stability of generated results through precise descriptions of content and style on Qwen.

Prompting Tips

General
❗️Core Strategy: Use coherent, natural sentences to describe the scene's content (subject + action + environment), and clear, concise phrases to describe the style, composition, camera angle, quality, etc. A universal template is as follows:
[Subject Description] + [Environmental Background] + [Style Tone] + [Aesthetic Parameters] + [Emotional Atmosphere] + [On-Screen Text]
- Subject Description: for a person, describe appearance, expression, and action; for an object, detail material, color, and shape.
- Environmental Background: specify the scene (e.g., "library at midnight") and the spatial relationship between elements.
- Style Tone: define the artistic style (e.g., "ink painting," "cyberpunk") for consistency.
- Aesthetic Parameters: include visual elements like composition, perspective, angle, lighting, and color tone.
- Emotional Atmosphere: define the conveyed emotion (e.g., "lively," "tense," "relaxed").
- On-Screen Text: if text is needed, place it in quotes with position and font details.
(A small template sketch at the end of this guide shows one way to assemble these slots.)

💡 Tips
- Maintain a consistent visual style to avoid conflicts: "The atmosphere is solemn, with a refreshing and healing tone" → "The image conveys a calm emotion with a fresh and elegant color tone."
- Rephrase negative expressions into positive ones: "Avoid cartoon style" → "Create a realistic-style image"; "Don't make the image look crowded" → "The composition is simple, with the subject in the center and 1/3 of the space left empty around it."
- When there is a clear use case, specify the purpose and type, such as "mobile wallpaper," "movie poster," etc.
- Avoid unnecessary instructions unrelated to the image.

🌰 Detailed Example
Prompt:
A visually striking surreal illustration depicts a giant whale made of brilliant starry skies and molten gold, gliding silently through deep space. Its body is semi-transparent, revealing flickering star clusters and nebulae, with countless tiny lights falling from its tail, forming a trajectory. In the bottom corner of the image, on a massive, angular, dark-colored meteorite, a short futuristic sentence "WE ARE THE COSMOS DREAMING." is engraved in glowing, futuristic font, its light mirroring the glow on the whale. The background features a deep, velvet-textured cosmos dotted with distant galaxies. The image exudes a serene divine quality, a magnificent contrast of scale, and breathtaking details.
1. Content Description
- Subject: a giant whale made of brilliant starry skies and molten gold
- Environment: a deep, velvet-textured cosmos dotted with distant galaxies
- Text: "WE ARE THE COSMOS DREAMING."
2. Tone of the Image
- Style: surrealism, futuristic illustration
- Atmosphere: serene, mysterious, divine, deep
Other examples:

Practical Design Tips

Poster Design
Unlike typical images, poster design requires special attention to the theme, visual elements, aesthetic style, and layout of the design.
- Theme Description: explain the intended use, defining the general style of the image, such as "a promotional poster for a music festival" or "an advertisement poster for xxx product."
- Visual Elements: describe the elements included in the poster image. If text is required, place it in quotation marks and specify its position and font.
- Aesthetic Style: define the overall feeling and artistic movement of the poster, such as "vector illustration," "e-commerce style," "abstractionism," etc.
- Layout: specify the desired layout (e.g., "rule of thirds," "modular composition") and where the main subject and text should be placed.
- Professional Parameters: these include specific image detail and output quality requirements, such as "4K professional level" or "clear lines."

🌰 Detailed Example
Prompt: A summer eco-market event poster in a flat illustration style with bright, cheerful colors. In the center of the image is a large cartoon-style apple tree, with various handcrafted goods and organic produce stalls underneath. People are happily chatting and shopping. Sunlight filters through the leaves, creating dappled light spots, enhancing the relaxed and joyful atmosphere. The top of the poster features a large, pure light blue sky for the main title and subtitle. The bottom area is reserved for event details in a clean beige color. The main title reads "Natural Carnival," with the subtitle "Discover the Joy of Green Living." Event details include "Event Date: July 27, 2026" and "Event Location: Central Park Lawn." The organizer's name, "Green Leaf Community," is in the top-right corner.
Other examples:

Font Design
Font design is relatively simple; just describe the text content, font style, color, texture/feel, and the background of the image.
💡 Important: text content must be in quotation marks.
🌰 Detailed Example
An expressive ink calligraphy piece with the text “自在” in bold, flowing cursive. The brushstrokes show rich variations in dryness, thickness, and wetness, with ink spreading naturally across rice paper, as if freshly written. Surrounding the text are faint sketches of distant mountains and a lone boat, with large areas of white space showing the slight yellowing and creases of the ancient paper. The overall atmosphere is ethereal, tranquil, and full of Eastern philosophical charm. Soft natural light shines from the side, highlighting the ink's layers.
1. Text Content: “自在”
2. Font and Calligraphy Style
- Font: cursive brush
- Style: bold, dynamic, with rich variations in brushstrokes and a flowing, expressive feel
3. Color
- Ink: a gradient from deep to light ink with natural diffusion and spreading effects
4. Texture and Feel
- Ink texture: natural diffusion and spreading on the paper, showing the unique effect of wet and dry brushstrokes with a layered feel

Aesthetic Parameters Prompting Handbook

Style Templates
1. 2000s Street Documentary (CCD Taste): 35mm straight-on view, direct flash; high contrast + cool green tint; candid look back; 3:2 ratio; noticeable grain.
2. Cyber Mech Character: rain droplets on armor, surface flowing light, volumetric beams piercing through smoke; path tracing/global illumination; 8K ultra HD.
3. Food Still Life (Dark Tone Texture): layered foreground obstruction (herbs/peppercorns), shallow depth of field, triangular composition; shadows with details.
4. Architectural Space (Order and Light): 24mm exaggerated perspective + foreground framing; hard-edge shadow geometric cuts; low saturation, cool tone.
5. Chinese Portrait (Rain Alley Night Scene): New Chinese-style high collar + velvet shawl, hair dampened by rain; misty volumetric light passing through alley lanterns, shallow depth of field, creamy bokeh; golden spiral composition; window light texture, natural skin tone reflection.

Lighting Systems
1. Light Position/Texture: top light / side-back light / rim light / butterfly light / Rembrandt light / window light / honeycomb control light / flag cut light.
2. Atmosphere: volumetric light / foggy beams / silhouette / high contrast / low contrast / edge highlights.
3. Mixed Colors: dual-color-temperature mixed light / gel blue-green / magenta / orange / green color clashes.

Color/Film Effects
1. Film Reference: Portra 400 (soft skin tones) / Cinestill 800T (tungsten blue tint) / Fuji Velvia 50 (high-saturation landscape).
2. Semantic Colors: cyber blue-green / vintage ochre-brown / Hong Kong-style cyan-green / twilight orange-gold.

Post-Production and Texture
1. Cinematic depth of field / natural film grain / vignette / glow / halation.
2. Local contrast / color separation / cross processing / sharpening with contrast retention.
3. LUT stylization / film curves / S-curve / color noise control.

🪄 Ultimate Streamlined Version
Don't want to bother with writing prompts? Too many rules to remember? Try TensorArt prompt enhancement! Enter the workspace, input core keywords, click the Prompt Enhancement icon, and generate the full prompt with one click, easily unlocking all Qwen-Image capabilities.
Effect Comparison
💡 If you're not satisfied with the automatically expanded prompt, you can modify it according to your needs or regenerate it.

That's all for Qwen-Image's advanced prompt techniques. Open the workspace and make your imagination come alive! 🪄
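As an illustration of the universal template above, here is a minimal, hypothetical Python sketch; the slot names and function are illustrative only and not part of any Qwen or TensorArt API. It joins the six slots into a single prompt and keeps on-screen text wrapped in quotation marks, as the guide recommends:

```python
# Minimal sketch of the Qwen-Image universal template:
# [Subject] + [Environmental Background] + [Style Tone] + [Aesthetic Parameters]
# + [Emotional Atmosphere] + [On-Screen Text]
# Slot names are illustrative only; any slot may be omitted.
TEMPLATE_SLOTS = [
    "subject", "environment", "style", "aesthetics", "atmosphere", "on_screen_text",
]

def build_qwen_prompt(**slots: str) -> str:
    parts = []
    for name in TEMPLATE_SLOTS:
        value = slots.get(name, "").strip()
        if not value:
            continue
        if name == "on_screen_text" and not value.startswith('"'):
            # The guide asks for on-screen text to be wrapped in quotation marks.
            value = f'the text "{value}" appears in the image'
        parts.append(value)
    return " ".join(p if p.endswith(".") else p + "." for p in parts)

print(build_qwen_prompt(
    subject="A giant whale made of brilliant starry skies and molten gold glides silently through deep space",
    environment="The background is a deep, velvet-textured cosmos dotted with distant galaxies",
    style="Surreal, futuristic illustration",
    atmosphere="Serene, mysterious, and divine",
    on_screen_text="WE ARE THE COSMOS DREAMING",
))
```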

TA Update Log - Hunyuan 2.1, SRPO, Vace, New Labeling Algorithm, Enhanced Prompt etc

We bring you some recent updates on TA that you might find interesting.

HunyuanImage 2.1 Update
Following Tencent's official update of Hunyuan 2.1, which introduced ComfyUI support, TA has now synchronized its system to support this update as well. Hunyuan 2.1 offers:
- Enhanced semantic understanding: now capable of accurately interpreting complex semantics, supporting individual descriptions and precise generation for multiple subjects.
- Improved visual quality: visual textures are more realistic, with enhanced details and significant improvements in lighting and material expression.
- Faster and more stable performance: response and generation speeds are now more consistent, meeting a wider range of business scenario demands.
📣 We now support native 2K resolution output! Don't miss out.
Try it here: https://tensor.art/models/908972930887615947

SRPO Support
SRPO was introduced to address the "oily" texture issue in the Flux model. Compared to previous Flux models, SRPO offers the following advantages:
- Oiliness reduction: effectively eliminates oiliness and AI-generated artifacts, enhancing realism by up to 3x.
- Fast training: the training process now takes only 10 minutes.
Experience it here: https://tensor.art/models/908661281618175817

VACE Support
We previously launched the video VACE feature, and with the help of Wan 2.2 VACE we've recently upgraded it to make it even more powerful. VACE provides:
- Precise motion rendering: accurately recreates action changes and facial expressions.
- Simple setup: just go to the Video Workspace, select Edit as the model type, choose the Wan 2.2 model family, and select Wan 2.2 VACE.
- Easy upload and generation: upload your materials and hit Generate to create videos.
- Multiple functions, including depth control, pose control, subject swap, multi-image reference, and recolorization—comprehensive video editing tools, precise control, and endless creative possibilities.

New Labeling Algorithm
The new MiniCPM-V 4.5 series labeling algorithm is now available for online training. Compared to previous versions, this algorithm offers significant improvements:
- Better understanding of images/videos: now captures more subtle details and semantics with greater precision.
- Natural and fluent tagging: generates labels that are more aligned with human expression.
Check it out below:
1. Enter the online training mode, upload your image, and select Labeling → Auto Labeling.
2. Choose the MiniCPM-V 4.5 series from the dropdown menu.
3. Preview the results.

Enhanced Prompt Upgrades
To help improve the quality and stability of generated content, we have upgraded the prompt enhancement feature.
Optimized prompt formula: the generated prompt text now follows the formula [Subject Description] + [Environmental Background] + [Style Tone] + [Aesthetic Parameters] + [Emotional Atmosphere] + [On-Screen Text], ensuring detailed, accurate expression.
Simply go to the workspace, input your core keywords, and click the Prompt Enhancement icon to generate a complete prompt in one click—no more struggling with unclear or lackluster descriptions! 👇👇👇

Thank you for reading through the update! If you have any questions or suggestions, feel free to join our community and reach out to the admins for feedback on Discord.

Creator Dashboard & Withdraw

This is an introduction to the Creator Dashboard and the withdrawal process. It provides a detailed overview of how you, as a creator, can monitor and analyze the performance of your content on TensorArt, as well as how to withdraw income. 😉

How to Check My Earnings
You can check and manage all your earnings in your Creator Dashboard, where you can see both total accumulated income and withdrawable income. Worried about transparency? The Creator Dashboard offers detailed records for every transaction, so you'll always know exactly how much you've earned! 💰

Data Analysis
We have also provided data visualization and analysis capabilities in the "Data Center," allowing you to clearly view your revenue curve and see which models/AI tools contributed to it.
Tip: here you can monitor the performance of all your content, including the number of views, Pro runs, and paid interactions. This information is very helpful for planning your future creative direction.

Withdraw
How to withdraw: on the "Income & Withdraw" page, click Withdraw to transfer your income to your bank account.
Note: we will verify your identity and collect your bank card information during the first withdrawal. Please make sure your bank card details and personal information are accurate to avoid payment failure.
How long does withdrawal take: generally, withdrawals are processed within 7 working days. We usually process withdrawal requests every Friday, and it may take another 3–5 days for your bank to transfer the money to your account.
Tip: if you haven't received the transfer after 14 working days, please check your system notifications. If there's an issue with your transfer, we will send a notification to guide you on how to proceed. If you still have questions, contact us on Discord.

About the Service Fee
Since it's an international transfer, each transaction incurs a $15 fee from the payment channel (not from TA).
Tips:
1️⃣ Accumulate a certain amount before making a single withdrawal to minimize the impact of fees (see the worked example at the end of this article).
2️⃣ Use the Redeem feature: you can use the money in your wallet to redeem Pro membership & credits with no service fee. It's a good option if you need Pro & credits.

If you have any questions, especially if you encounter any issues with withdrawals, feel free to contact us on Discord.
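To see why accumulating before withdrawing helps, here is a small worked example. The withdrawal amounts are hypothetical; only the flat $15 payment-channel fee comes from the article above.

```python
# Hypothetical withdrawal amounts; the flat $15 payment-channel fee is from the article above.
FLAT_FEE_USD = 15

for amount in (30, 100, 500):
    net = amount - FLAT_FEE_USD
    fee_share = FLAT_FEE_USD / amount * 100
    print(f"Withdraw ${amount}: receive ${net} (fee takes {fee_share:.0f}% of the payout)")
# Withdraw $30: receive $15 (fee takes 50% of the payout)
# Withdraw $100: receive $85 (fee takes 15% of the payout)
# Withdraw $500: receive $485 (fee takes 3% of the payout)
```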

Avatar & Homepage Update Highlights

Redesigned Homepage
We've streamlined the way you enter the creative workspace from the homepage. Now it's easier than ever to start generating images and videos right away! Each entry point is paired with inspiring examples—real works from our community—because we can't wait to see more of your amazing creations appear here. 〰

Avatar Function Support
With just one static image and a short audio clip, you can now create cinematic-quality digital human videos. Expect natural facial expressions, perfectly synced lip movements, and fluid body gestures — up to 60 seconds per generation, with full text-based control. Applications range from livestreaming with digital avatars to film and media production, and much more.

How to get started:
1. On the homepage, click the Avatar entry to enter the workspace.
2. Select your model type:
   - Infinite Talk excels at voice-to-character synchronization.
   - WAN2.2-S2V offers superior visual quality.
3. Upload your character's image.
4. Upload or generate your audio file online. Finding the right audio file can be tricky, so we've added an online audio-cloning feature: the system will render your dialogue using the tone and style of your reference. Simply click Generate Audio and:
   - Enter the dialogue text.
   - Provide a reference audio clip.
   Don't have a reference clip? No problem—we also provide preset system voices to choose from. Once your audio is generated, click Use to apply it directly in the workspace.
5. Finally, set your parameters:
   - Prompt: keep it simple—just describe the subject. If you'd like gestures or extra actions, add them (e.g., “This man is speaking while waving his arms”).
   - Resolution: higher resolutions yield sharper results but require more compute.
   - Generation Mode: Fast Mode is quicker and cheaper, with slight quality trade-offs; Quality Mode gives the best results but requires more time and compute.

That's it—you're ready to generate. We can't wait to see your high-quality digital humans shared in the community, and we look forward to seeing your work pinned on the homepage!

Wan2.2 Training Tutorial

In this guide, we'll walk through the full process of online training on TensorArt using Wan2.2. For this demo, we'll be using image2video training so you can see direct results.

Step 1 – Open Online Training
Go to the Online Training page. Here, you can choose between Text2Video or Image2Video.
👉 For this tutorial, we'll select Image2Video.

Step 2 – Upload Training Data
Upload the materials you want to train on. You can upload them one by one, or, if you've prepared everything locally, just zip the files and upload the package.

Step 3 – Adjust Parameters
Once the data is uploaded, you'll see the parameter panel on the right.
💡 Tip: If you're training with video clips, keep them around 5 seconds for the best results.

Step 4 – Set Prompts & Preview Frames
The prompt field defines what kind of results you'll see during and after training. As training progresses, you'll see epoch previews; this helps you decide which version of the model looks best. For image-to-video LoRA training, you can also set the first frame of the preview video.

Step 5 – Start Training
Click Start Training once your setup is ready. When training completes, each epoch will generate a preview video. You can then review these previews and publish the epoch that delivers the best result.

Step 6 – Publish Your Model
After publishing, wait a few minutes and your Wan2.2 LoRA model will be ready to use.

Recommended Training Parameters (Balanced Quality)
(A compact reference snippet of these values appears at the end of this tutorial.)
- Network Module: LoRA
- Base Model: Wan2.2 – i2v-high-noise-a14b
- Trigger words: use a unique short tag, e.g. your_project_tag
Image Processing Parameters
- Repeat: 1
- Epoch: 12
- Save Every N Epochs: 1–2
Video Processing Parameters
- Frame Samples: 16
- Target Frames: 20
Training Parameters
- Seed: –
- Clip Skip: –
- Text Encoder LR: 1e-5
- UNet LR: 8e-5 (lower than 1e-4 for more stability)
- LR Scheduler: cosine (warmup 100 steps if available)
- Optimizer: AdamW8bit
- Network Dim: 64
- Network Alpha: 32
- Gradient Accumulation Steps: 2 (use 1 if VRAM is limited)
Label Parameters
- Shuffle caption: –
- Keep n tokens: –
Advanced Parameters
- Noise offset: 0.025–0.03 (recommended 0.03)
- Multires noise discount: 0.1
- Multires noise iterations: 10
- conv_dim: –
- conv_alpha: –
- Batch Size: 1–2 (depending on VRAM)
- Video Length: 2
Sample Image Settings
- Sampler: euler
- Prompt (example):

Tips
- Keep training videos around ~5 seconds for best results.
- Use a consistent dataset (lighting, framing, style) to avoid drift.
- If previews show overfitting (blurry details, jitter), lower UNet LR to 6e-5 or reduce Epochs to 10.
- For stronger style binding: increase Network Dim → 96 and Alpha → 64, while lowering UNet LR → 6e-5.
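If you want to keep the recommended settings above as a reusable reference across several training runs, here is a small, hypothetical Python snippet that simply records those values. It is not an official TensorArt config format; the online trainer is configured through the UI.

```python
# Hypothetical reference preset mirroring the recommended Wan2.2 i2v LoRA settings above.
# Not an official TensorArt config file; values are entered manually in the online trainer UI.
WAN22_I2V_LORA_PRESET = {
    "network_module": "LoRA",
    "base_model": "Wan2.2 - i2v-high-noise-a14b",
    "image_processing": {"repeat": 1, "epoch": 12, "save_every_n_epochs": 2},
    "video_processing": {"frame_samples": 16, "target_frames": 20},
    "training": {
        "text_encoder_lr": 1e-5,
        "unet_lr": 8e-5,            # lower than 1e-4 for more stability
        "lr_scheduler": "cosine",   # warmup 100 steps if available
        "optimizer": "AdamW8bit",
        "network_dim": 64,
        "network_alpha": 32,
        "gradient_accumulation_steps": 2,  # use 1 if VRAM is limited
    },
    "advanced": {
        "noise_offset": 0.03,
        "multires_noise_discount": 0.1,
        "multires_noise_iterations": 10,
        "batch_size": 1,
        "video_length": 2,
    },
    "sample": {"sampler": "euler"},
}

if __name__ == "__main__":
    # Quick sanity print so the preset can be reviewed before filling it into the UI.
    import json
    print(json.dumps(WAN22_I2V_LORA_PRESET, indent=2))
```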

📢 Daily Credits Mission Update Notice

Dear Tensorians,

To encourage more high-quality content in our community, we've optimized the Daily Missions. The new missions focus more on creative value and content recommendations, so outstanding works can earn greater rewards.

🎯 New Daily Mission Reward Plan
- Share content to external sites (once per day) 👉 +5 credits
- Your post gets a like (up to 5 times per day) 👉 +2 credits each
- Post featured on the Homepage (up to 3 times per day) 👉 +30 credits
- Post featured on a Channel page (once per day) 👉 +10 credits
💡 Note: credits from others running your Models & AI Tools remain unchanged. (A quick calculation of the daily maximum follows at the end of this notice.)

We understand that the ways to earn credits may feel more streamlined than before, but please trust that these changes are made to:
- Improve the quality of recommended content, so users see more valuable works
- Provide greater rewards for outstanding creators
- Build a more positive and fair community environment

Effective date: 2025.09.04

🎬 New Credits Pool!
Plus, to further support video content, a brand-new Video Reward Pool is coming soon! ✨ You'll get the chance to share a pool full of rewards with fellow creators!

💜 Thank you for your continuous support. Let's make the TensorArt community even better, together!
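For reference, here is a quick sketch of the maximum mission credits a single day can yield under the plan above; the only inputs are the reward values listed, and how close you get depends on likes and features.

```python
# Daily mission rewards from the plan above.
SHARE_TO_EXTERNAL = 5          # once per day
LIKE_REWARD, MAX_LIKES = 2, 5  # up to 5 likes counted per day
HOMEPAGE_FEATURE, MAX_HOMEPAGE = 30, 3
CHANNEL_FEATURE = 10           # once per day

daily_max = (
    SHARE_TO_EXTERNAL
    + LIKE_REWARD * MAX_LIKES
    + HOMEPAGE_FEATURE * MAX_HOMEPAGE
    + CHANNEL_FEATURE
)
print(daily_max)  # 5 + 10 + 90 + 10 = 115 credits per day from missions
```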

Illustrious v1.1 — Now Exclusive on Tensor.art

Next-Gen AI for Stunning Illustrations — Now on Tensor.art
Introducing Illustrious XL 1.1, the latest evolution in anime-focused text-to-image AI. Building on the foundation of Illustrious XL 0.1, this new version pushes the boundaries of fidelity, prompt understanding, and high-resolution output, making it a must-have for artists, illustrators, and animation creators.
🔹 Higher Resolution & More Detail — Generate breathtaking 1536 x 1536 images with refined aesthetic quality
🔹 Smarter Prompt Interpretation — Optimized for natural language prompts, delivering more intuitive results

Recommended Settings for Best Results
💡 Negative Prompts: “blurry,” “worst quality,” “bad quality,” “bad hands”
🛠️ Sampling Settings: Steps: 28 | CFG Scale: 5.5–7.5 | Sampler: Euler
🏋️ Training: Try LoKr when training—it achieves better results than LoRA 🤫
(A small illustrative settings snippet appears at the end of this article.)

To showcase the advancements of Illustrious XL 1.1, we've put it to the test across key performance areas. Below is a direct comparison of image outputs across different versions, demonstrating improvements in natural language comprehension, high-resolution rendering, vivid color expression, and detail fidelity.

1. Natural Language Understanding
📌 Improvement: Better prompt adherence and character accuracy.
🔍 Comparison:
• Illustrious XL 0.1: Struggled with maintaining consistent character fidelity.
• Illustrious XL 1.0: Improved coherence between prompt and image, with better facial expressions.
• Illustrious XL 1.1: Further refined accuracy, reducing artifacts and enhancing overall expressiveness.
📝 Prompt Used:
"A vibrant anime-style illustration of a young woman with golden blonde hair, striking orange eyes, and a cheerful expression. She's dressed in a unique outfit that blends sporty and whimsical elements: an orange jacket over a teal and white striped shirt, a blue neckerchief, and a distinctive white cap with orange accents. She's set against a dark green background with streaks of teal, creating a dynamic and eye-catching composition. The style is bold, energetic, and suggestive of a character from a video game or animation., masterpiece, best quality, very aesthetic, absurdres, vivid colors"

2. High-Resolution Precision
📌 Improvement: Increased resolution to 1536 x 1536, maintaining clarity at larger sizes.
🔍 Comparison:
• Illustrious XL 0.1: Noticeable blurring and loss of detail in high-resolution images.
• Illustrious XL 1.0: Clearer textures, sharper lines, and more defined elements.
• Illustrious XL 1.1: More robust structure.
📝 Prompt Used:
"This masterpiece artwork, in a stylish and extremely aesthetic style evocative of artists like hyatsu, shule_de_yu, lococo:p, huke, potg_\(piotegu\), z3zz4, and moruki, showcases a tsundere solo 1girl, makise kurisu, standing at night under an iridescent sky filled with clouds and forget-me-not flowers, rendered in absurdres detail with a colorful yet partially black and white and abstract composition."

3. Vivid Colors & Dynamic Lighting
📌 Improvement: More vibrant hues, balanced contrast, and expressive compositions.
🔍 Comparison:
• Illustrious XL 0.1: Muted tones and washed-out colors.
• Illustrious XL 1.0: More vibrant color balance.
• Illustrious XL 1.1: Richer tones and better shadow handling.
📝 Prompt Used:
"1girl,hyatsu,shule_de_yu,lococo:p,makise kurisu,huke,tsundere,absurdres,potg_\(piotegu\),z3zz4,moruki,hyatsu,stylish,extremely aesthetic,abstract,colorful,night,sky,flower,cloud,iridescent,masterpiece,black and white,forget-me-not"

4. Detail Refinement & Aesthetic Quality
📌 Improvement: Sharper facial details and expressive character design.
🔍 Comparison:
• Illustrious XL 0.1: Some inconsistencies in facial structure and hand rendering.
• Illustrious XL 1.0: Significant improvements in eye detailing and shading.
• Illustrious XL 1.1: Near-professional quality with refined expressions.
📝 Prompt Used:
"1boy,black hair,red eyes,horns,scars,white clothes,blood stains,arm tattoos,black and red tattoos,long gloves on left hand,red sash,warrior-like attire,cold expression,sharp expression"

Get Started Today! The future of anime AI is here—be part of it with Illustrious XL 1.1 ✨
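For convenience, here is a small, tool-agnostic snippet that just records the recommended settings above; the parameter names are illustrative only and not tied to any specific UI or API, so enter the values in whichever generation interface you use.

```python
# Hypothetical record of the recommended Illustrious XL 1.1 settings listed above.
# Parameter names are illustrative only; the values come from the article.
ILLUSTRIOUS_XL_11_SETTINGS = {
    "steps": 28,
    "cfg_scale": (5.5, 7.5),   # recommended range
    "sampler": "Euler",
    "resolution": (1536, 1536),
    "negative_prompt": "blurry, worst quality, bad quality, bad hands",
}
print(ILLUSTRIOUS_XL_11_SETTINGS)
```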

TensorArt 2024 Community Trends Report

2024: A Year of Breakthroughs
This year marked an explosion of innovation in AI. From language and imagery to video and audio, new technologies emerged and thrived in open-source communities. TensorArt stood at the forefront, evolving alongside our creators to witness the rise of AI artistry.

Prompt of the Year: Hair
Surprisingly, "hair" became the most-used prompt of 2024, with 260 million uses. On reflection, it makes sense—hair is essential in capturing the intricacies of portraiture. Other frequently used words included eyes (142M), body (130M), face (105M), and skin (79M).
Niche terms favored by experienced users—like detailed (132M), score_8_up (45M), and 8k (25M)—also dominated this year, but saw a decline in usage by mid-year. With the advent of foundational models like Flux, SD3.5, and HunYuanDiT, natural language prompts became intuitive and multilingual, removing the need for complex or negative prompts and lowering the barriers to entry for creators worldwide.

Community Achievements
Every day, hundreds of new models are uploaded to TensorArt, fueling creativity among Tensorians. This year alone:
- Over 400,000 models are now available.
- 300,000 images are generated daily, with 35,000 shared via posts, reaching 1 million viewers and earning 15,000 likes and shares.
This year, we introduced AI Tool and ComfyFlow, welcoming a new wave of creators. AI Tool simplified workflows for beginners and enabled integration into industry applications, with usage distributed across diverse fields.
In November, TensorArt celebrated its 3 millionth user, solidifying its position as one of the most active platforms in the AI space after just 18 months. Among our loyal community are members like Goofy, MazVer, AstroBruh and Nuke, whose dedication spans back to our earliest days.

A Global Creative Exchange
AI knows no borders. Creators from around the world use TensorArt to share and connect through art. From the icy landscapes of Finland (1.6%) to the sunny shores of Australia (8.7%), from Pakistan (0.075%) to Cuba (0.003%), Tensorians transcend language and geography.
Generationally, 75% of our users are Gen Z or Alpha, while 9% belong to Gen X and the Baby Boomers. "It's never too late to learn" is a motto they live by.
Gender representation also continues to evolve, with women now accounting for 20% of the user base.
TensorArt is breaking barriers—technical, social, and economic. With no need for costly GPUs or advanced knowledge of parameters, tools like Remix make creating stunning artwork as simple as a click.

The Way Tensorians Create
- Most active hours: weeknights, 7 PM–12 AM, when TensorArt serves as the perfect way to unwind.
- Platform preferences: 70% of users favor the web version, but we've prioritized app updates for Q1 2025 to close this gap.
- Image ratios: female characters outnumber male ones 9:1; 67% of images are realistic, 28% are anime, and 3% are furry.
- Favorite colors, in order: black, white, blue, red, green, yellow, and gray.

A Growing Creator Economy
In 2024, Creator Studio empowered users to monitor their model earnings. Membership in the TenStar Fund tripled, and average creator income grew by 1.5x compared to last year.
In 2025, TensorArt will continue to prioritize the balance between the creator economy and market development. TA will place greater emphasis on encouraging creators of AI tools and workflows to provide more efficient and convenient practical tools for specific application scenarios. To this end, TA will be launching the Pro Segment to further reward creators, offering them higher revenue coefficients and profit sharing from Pro user subscriptions.

2024 Milestones
This year, TensorArt hosted:
- 26 site events and 78 social media campaigns.
- Our first AI Tool partnership with Snapchat, pioneering AI-driven filters, which were featured as a case study by Snapchat.
- The launch of "Realtime Generate" and "Talk to Model," revolutionizing how creators interact with AI.
- A collaboration with Austrian tattoo artist Fani to host a tattoo design contest, where winners received free tattoos based on their designs.
TensorArt is committed to advancing the open-source ecosystem and has made significant strides in multiple areas:
- For newly released base models, TA ensures same-day online running and next-day support for online training. To let Tensorians experience the latest models, limited-time discounts are offered.
- To boost creative engagement with new base models, TA hosts high-reward events for each open-source base model, incentivizing Tensorians across dimensions such as Models, AI Tools, and Posts.
- Beyond image generation, TA actively supports the open-source video model ecosystem, enabling rapid integration of CogVideo, Mochi, and HunYuanVideo into ComfyFlow and Creation. In 2025, TA plans to expand online video functionality further.
- Moving from "observer" to "participant," TA has launched TensorArt Studios, with the release of Turbo, a distilled version of SD3.5M. In 2025, Studios will unveil TensorArt's self-developed base model.
- TensorArt continuously funds talented creators and labs, providing financial and computational resources to support model innovation. In 2025, Illustrious will exclusively collaborate with TensorArt to release its latest version.

Looking Forward
From ChatGPT's debut in 2022 to Sora's breakthrough in 2024, AI continues to redefine innovation across industries. But progress isn't driven by one company—it thrives in the collective power of open-source ecosystems, inspiring collaboration and creativity.
AI is a fertile ground, filled with the dreams and ambitions of visionaries worldwide. On this soil, we've planted the seed of TensorArt. Together, we will nurture it and watch it grow.

2024 Annual Rankings
Each month of 2024 brought unforgettable moments to TensorArt. Based on events, likes, runs, and monthly trends, we've curated the 2024 Annual Rankings. Click to explore!
