Seedance Answers
Updated Mar 9, 2026
How do I add lip sync to AI-generated video?
Short answer
Provide a clear audio track or script, select a consistent character reference, and use a lip-sync model that maps phonemes to mouth shapes frame by frame.
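The phoneme-to-mouth-shape mapping mentioned above can be sketched in a few lines. The table below is illustrative only: real lip-sync models learn a much richer phoneme-to-viseme inventory, and none of these labels come from Seedance itself.

```python
# Illustrative phoneme-to-viseme table (not any specific model's inventory).
# A viseme is the visible mouth shape that corresponds to one or more phonemes.
PHONEME_TO_VISEME = {
    "AA": "open",      # as in "f-a-ther": wide open jaw
    "IY": "wide",      # as in "s-ee": spread lips
    "UW": "round",     # as in "b-oo-t": rounded lips
    "M":  "closed",    # lips pressed together
    "P":  "closed",    # plosive: lips close, then release
    "F":  "teeth_lip", # upper teeth on lower lip
}

def visemes_for(phonemes):
    """Map a phoneme sequence to mouth shapes, falling back to neutral."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]
```

A lip-sync model effectively runs this kind of lookup per frame, then blends adjacent shapes so the mouth moves smoothly between them.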
Recommended mode: Lab
Scenario: Marketing teams creating spokesperson videos, testimonial content, or multilingual ad variants with synchronized speech.
Execution Steps
1. Record or generate the audio narration track with clear pronunciation and natural pacing.
2. Select or generate a character reference with a clearly visible face at a consistent angle.
3. Run the lip-sync process to map audio phonemes to corresponding mouth movements.
4. Review the output for sync accuracy, especially on plosive sounds and sentence endings.
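The steps above can be sketched as one gated pipeline. Every function here is a hypothetical stand-in, not a real Seedance API; only the ordering and the review gate come from this guide.

```python
# Hypothetical sketch: run lip sync (step 3), then gate delivery on the
# review step (step 4). sync_fn and review_fn are placeholder callables
# for whatever toolchain actually performs those steps.
def run_lipsync_job(audio, reference, sync_fn, review_fn):
    """Return the rendered output plus any review issues found."""
    output = sync_fn(audio, reference)   # step 3: map phonemes to mouth shapes
    issues = review_fn(output)           # step 4: check sync accuracy
    return {"output": output, "issues": issues, "approved": not issues}
```

The point of the structure is that nothing ships without the review step: a job is only approved when the review function reports no issues.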
Prompt Template
Generate a spokesperson video with lip sync matching this audio script. Use a front-facing character reference with neutral expression and good lighting.
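For the multilingual ad-variant scenario, the template above can be paired with one audio track per language. The field names below (`prompt`, `audio`, `reference`) are assumptions for illustration, not a documented request schema.

```python
# Assemble one job per language variant from the prompt template above.
# The job dict's field names are illustrative assumptions.
TEMPLATE = ("Generate a spokesperson video with lip sync matching this "
            "audio script. Use a front-facing character reference with "
            "neutral expression and good lighting.")

def build_jobs(audio_tracks, reference_image):
    """Pair the same prompt and character reference with each audio track."""
    return [
        {"prompt": TEMPLATE, "audio": track, "reference": reference_image}
        for track in audio_tracks
    ]
```

Keeping the prompt and character reference fixed across variants is what makes the batch consistent; only the audio changes per language.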
Common Failure Points
- Using audio with background noise or mumbled speech that confuses phoneme detection
- Choosing a character reference with an obscured or angled face
- Skipping the review step — lip-sync errors are immediately obvious to viewers
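The first failure point, noisy or mumbled audio, can be partially caught before submission with a simple loudness check. This sketch uses only the standard library; the -30 dBFS threshold is an illustrative assumption, not a documented Seedance limit, and it catches only tracks that are too quiet, not background noise.

```python
import math
import struct
import wave

def rms_dbfs(path):
    """Return the RMS level of a 16-bit mono WAV file in dBFS."""
    with wave.open(path, "rb") as wf:
        raw = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / 32768.0) if rms else float("-inf")

def audio_ok(path, min_db=-30.0):
    """Flag tracks too quiet for reliable phoneme detection.

    The threshold is an illustrative assumption; tune it for your pipeline.
    """
    return rms_dbfs(path) >= min_db
```

Running this before step 3 turns a silent batch failure into an explicit rejection you can fix by re-recording.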
Composite User Feedback
Search-driven buyer
"I could answer my tool-choice question and start with a concrete prompt in one pass."
Performance operator
"The failure-point list is useful because it maps directly to why batches break."
Agency workflow owner
"The related workflow links make this page operational, not just explanatory."