
Seedance 2.0 Complete Guide: All 12 Input Modalities Explained (With Examples)
A complete guide to Seedance 2.0's 12 input modalities, 4 generation modes, @Reference Prompting, camera motion control, beat-sync audio, and real workflow examples for creators.
Seedance 2.0 accepts 12 distinct input types — text, up to 9 reference images, 3 video clips, and 3 audio files — combined with a prompt system that lets you specify the role of each asset. This guide covers every input type, all 4 generation modes, and practical workflows for the most common use cases.
Most tutorials for Seedance focus on the API or on basic text-to-video. This guide is for creative workflows: what each input does, when to use it, and how to combine inputs for controlled output.
Why Input Flexibility Matters
Sora 2 and Veo 3.1 accept 2 input types each (text + 1 image). Seedance 2.0's 12-modality input system is a fundamentally different approach — it treats the model like a director who can be given detailed reference materials rather than just a description.
According to ByteDance's technical overview of Seedance 2.0, the model was designed for "multi-reference compositional generation" — meaning it's specifically trained to honor multiple reference inputs simultaneously.
The 12 Input Types
Text (1 input)
The prompt. Pure text-to-video is supported, but unlike Sora, Seedance 2.0's strength is interpreting how uploaded references should be used — so prompts work best when combined with reference assets.
Effective prompt structure:
[reference tags] + [scene description] + [camera/motion description]
Images (up to 9)
You can upload up to 9 reference images in a single generation. Each image can serve a different purpose:
- Style reference — "use the color palette and mood of @image1"
- Character reference — "use the person from @image2 as the main subject"
- Environment reference — "set the scene in the location from @image3"
- Product reference — "show the product from @image4 in motion"
- First frame — in First & Last Frame mode, @image1 becomes the opening shot
With 9 image inputs available, you can provide a character, their outfit, their environment, a product they're holding, and a style direction — all as separate controllable references.
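As an illustrative sketch of that multi-role setup (the asset numbering and scene details here are placeholders, not from official documentation), a prompt assigning a role to each uploaded image might read:

```text
@image1 character, @image2 character outfit, @image3 environment,
@image4 product held in hand, @image5 style, the character walks
through the environment holding the product, 5 seconds
```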
Video Clips (up to 3)
Video reference inputs serve primarily for camera motion guidance and style continuity:
- Camera motion reference — upload a clip with the camera movement you want reproduced (@video1 camera motion)
- Style reference — a video clip establishes the visual aesthetic and pace
- Transition reference — show how one scene connects to another
This is the feature that enables camera motion control. If you have a drone footage clip with a specific dolly move, uploading it as @video1 with "camera motion" instructs Seedance to replicate that movement.
Audio Files (up to 3)
Audio inputs enable beat-synchronized editing:
- Music track — upload a song and the generated video will cut and transition to the beat
- Voiceover reference — establish pacing or audio direction
- Sound effect reference — guide the ambient audio generation
Beat sync is Seedance's differentiating audio feature. For music video content, social media reels set to trending audio, or product ads with a branded soundtrack, this is the primary reason to choose Seedance over Veo or Sora.
The 4 Generation Modes
1. All Reference
When to use: You have 2+ reference assets and want the model to blend them.
All Reference mode accepts the widest range of input combinations. Upload character images, environment references, style guides, and camera motion clips. The model synthesizes them according to your prompt tags.
Example workflow:
@image1 main character walking, @image2 background environment,
@video1 camera movement, smooth tracking shot through
a neon-lit urban street at night, 5 seconds
2. First & Last Frame
When to use: You know exactly how a shot starts and ends and want AI to fill the transition.
Upload two images: the first frame and the last frame. Seedance 2.0 generates the motion and transformation between them.
Common use cases:
- Product appearing from packaging (start: packaged; end: product revealed)
- Character walking into a room (start: door; end: interior)
- Day-to-night transition (start: bright daylight; end: golden hour)
The model handles interpolation, physics, and camera positioning to connect the two frames naturally.
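For example, the product-reveal case above might be set up like this (a hypothetical prompt, using the first frame/last frame tags described later in this guide):

```text
@image1 first frame, @image2 last frame, product emerging from
its packaging in a smooth reveal, soft studio lighting, 5 seconds
```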
3. Multi-Frame
When to use: You're building a sequence with multiple distinct moments.
Upload 3–5 keyframe images in sequence. Seedance generates transitions between each frame in order, creating a short storyboard-driven video.
This mode works well for:
- Step-by-step product demonstrations
- Recipe or tutorial sequences
- Travel montages with specific location shots
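A recipe-style sequence along these lines might look like the following sketch (illustrative only — the keyframes are uploaded in order as sequential image references):

```text
@image1 mixing ingredients, @image2 batter hitting the pan,
@image3 finished pancakes plated, step-by-step recipe sequence
with smooth transitions between each keyframe, 8 seconds
```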
4. Subject Reference
When to use: You need consistent character appearance across multiple generations.
Upload a reference image of your subject. Seedance 2.0 preserves their appearance, facial features, and clothing across different scenes, backgrounds, and angles.
For brands with a mascot, a recurring character, or a specific product that must look identical across a video series, Subject Reference is the mode that makes sequential content feasible.
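A Subject Reference prompt can stay short, since the uploaded image carries the appearance. An illustrative example (scene details are placeholders):

```text
@image1 character, the same character presenting to camera in a
bright office, medium shot, consistent facial features and
outfit, 5 seconds
```

Rerunning with the same reference image and a different scene description is what keeps the subject consistent across a series.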
@Reference Prompting: How It Works
The @reference syntax is Seedance's instruction system for multi-asset inputs. Without it, the model makes its own decisions about how to use each uploaded asset. With it, you specify the role.
Supported role tags:
- @image1 character — use this person/subject as the main character
- @image2 style — apply this image's aesthetic to the full scene
- @image1 first frame — this image is the opening frame
- @image2 last frame — this image is the closing frame
- @video1 camera motion — replicate the camera movement from this clip
- @audio1 beat sync — cut to the rhythm of this audio track
You can combine multiple references in one prompt:
@image1 character walking confidently, @image3 city background,
@video1 camera motion, @audio1 beat sync, urban fashion
advertisement, 5 seconds, golden hour lighting
Camera Motion Control
Camera motion replication is a Seedance 2.0 capability not available in Sora or Veo. The process:
- Find or record a video clip with the camera movement you want
- Upload it as a reference in your generation
- Tag it: @video1 camera motion
- Describe the scene content in your prompt
The model learns the speed, direction, and character of the camera movement from the reference and applies it to your new scene. This enables consistent cinematography across a video series without having to precisely describe complex camera movements in text.
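Putting the four steps together, a camera-motion prompt might look like this sketch (the scene content is a placeholder; only the @video1 camera motion tag is documented syntax):

```text
@video1 camera motion, slow dolly-in toward a barista pouring
latte art, warm cafe interior, shallow depth of field, 5 seconds
```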
Beat Sync Audio: A Practical Walkthrough
- Upload your audio track (MP3 or WAV) as an audio reference
- Tag it in your prompt: @audio1 beat sync
- The model analyzes the track's tempo and beat markers
- Generated cuts and transitions in the video align to those markers
The generated video will have natural editing rhythm. If your audio has a beat drop at the 3-second mark, the video will have a visual emphasis there.
Note: Seedance 2.0 generates ambient video audio in addition to syncing to your uploaded track. The final output typically includes both. If you want only your audio track in the final video, edit the audio layer in post-production.
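The walkthrough above can be condensed into a single prompt. An illustrative beat-sync sketch (scene details are placeholders):

```text
@audio1 beat sync, sneaker product shots cutting between angles
on each beat, studio backdrop, high-contrast lighting, 5 seconds
```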
Common Workflow Examples
Product Advertisement (E-Commerce)
Inputs:
- @image1: Product photo (front view)
- @image2: Lifestyle environment (kitchen, gym, etc.)
- @audio1: Brand music track
Prompt:
@image1 product displayed prominently, @image2 environment,
@audio1 beat sync, smooth reveal shot, warm lighting,
professional product advertisement style, 5 seconds
Music Video Segment
Inputs:
- @image1: Artist reference photo
- @video1: Camera movement reference clip
- @audio1: Song
Prompt:
@image1 artist performing, @video1 camera motion,
@audio1 beat sync, concert venue with dynamic lighting,
crowd in background, 5 seconds
Real Estate Tour
Inputs:
- @video1: Drone footage with specific flyover movement
- @image1: Exterior photo of property
Prompt:
@video1 camera motion, @image1 building exterior,
aerial flyover approach, sunset lighting, real estate
showcase, cinematic quality, 8 seconds
Frequently Asked Questions
What's the maximum number of images I can upload in one generation? 9 reference images. Combined with up to 3 video clips and 3 audio files, a single generation can reference 15 separate assets.
Does Seedance 2.0 support 1080p output? Yes, on Pro and Business plans. Free plan generates at 720p. 1080p costs approximately 80 additional credits per generation.
What's the maximum video length? Seedance 2.0 supports durations of 4–12 seconds.
Can I use Seedance 2.0 for commercial projects? Yes, on Pro and Business plans, which include commercial usage rights and watermark-free outputs.
How accurate is beat sync? It works best with clear rhythmic audio (electronic music, pop, hip-hop). Complex jazz or classical music with irregular timing produces less consistent results. For best results, use audio with a consistent BPM.
What file formats are accepted for reference assets? Images: JPG, PNG, WebP. Video: MP4, MOV. Audio: MP3, WAV, M4A.
Is the @reference syntax case-sensitive? No. @Image1, @image1, and @IMAGE1 are equivalent.
Last updated: February 2026