
# Video Generation

Generate videos with AI models using `generateVideo()`.

Use `generateVideo()` to create AI-generated videos from text prompts, images, or both.

## Basic Usage

```typescript
import { compose, generateVideo, videoModel } from "@synthome/sdk";

const execution = await compose(
  generateVideo({
    model: videoModel("bytedance/seedance-1-pro", "replicate"),
    prompt: "A serene mountain landscape at sunrise, cinematic",
    duration: 5,
  }),
).execute();

console.log(execution.result?.url);
```
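When the execution completes, `execution.result?.url` points at the rendered file. A minimal sketch of a helper for saving it locally — `filenameFromUrl` is a hypothetical utility, not part of the SDK:

```typescript
// Hypothetical helper (not an SDK API): derive a local filename from the
// result URL so the finished video can be written to disk.
function filenameFromUrl(url: string, fallback = "output.mp4"): string {
  const base = new URL(url).pathname.split("/").pop();
  // Fall back when the path has no usable basename (e.g. a URL ending in "/").
  return base && base.includes(".") ? base : fallback;
}
```

You could then download the video with `fetch` and write the bytes to `filenameFromUrl(execution.result.url)`.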

## Options

### Required

| Option | Type | Description |
| --- | --- | --- |
| `model` | `VideoModel` | The video model to use |
| `prompt` | `string` | Text description of the video to generate |

### Optional

| Option | Type | Description |
| --- | --- | --- |
| `duration` | `number` | Video duration in seconds |
| `resolution` | `"480p" \| "720p" \| "1080p"` | Output resolution |
| `aspectRatio` | `string` | Aspect ratio (e.g., `"16:9"`, `"9:16"`, `"1:1"`) |
| `seed` | `number` | Random seed for reproducibility |
| `image` | `string \| ImageOperation` | Input image (URL or generated) |
| `audio` | `string \| AudioOperation` | Input audio (URL or generated) |
| `startImage` | `string` | Starting frame image URL |
| `endImage` | `string` | Ending frame image URL |
| `cameraMotion` | `"fixed" \| "dynamic"` | Camera movement style |
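Since malformed options surface as provider errors only after a request is submitted, a client-side pre-flight check can fail fast. This is a hypothetical sketch (not an SDK API) that validates the optional fields against the unions documented above:

```typescript
// Hypothetical pre-flight validation, written against the documented unions.
const RESOLUTIONS = ["480p", "720p", "1080p"] as const;

function validateVideoOptions(opts: {
  duration?: number;
  resolution?: string;
  aspectRatio?: string;
}): string[] {
  const errors: string[] = [];
  if (opts.duration !== undefined && !(Number.isFinite(opts.duration) && opts.duration > 0)) {
    errors.push("duration must be a positive number of seconds");
  }
  if (opts.resolution !== undefined && !RESOLUTIONS.includes(opts.resolution as any)) {
    errors.push(`resolution must be one of ${RESOLUTIONS.join(", ")}`);
  }
  // Aspect ratios are "W:H" strings such as "16:9", "9:16", or "1:1".
  if (opts.aspectRatio !== undefined && !/^[1-9]\d*:[1-9]\d*$/.test(opts.aspectRatio)) {
    errors.push('aspectRatio must look like "16:9"');
  }
  return errors;
}
```

Run it before `generateVideo()` and surface any returned messages to the caller.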

## Text-to-Video

Generate a video from a text prompt:

```typescript
const execution = await compose(
  generateVideo({
    model: videoModel("bytedance/seedance-1-pro", "replicate"),
    prompt: "A rocket launching into space, dramatic lighting",
    duration: 5,
    aspectRatio: "16:9",
  }),
).execute();
```

## Image-to-Video

Animate an existing image:

```typescript
// Using an image URL
const execution = await compose(
  generateVideo({
    model: videoModel("bytedance/seedance-1-pro", "replicate"),
    prompt: "Gentle wind blowing through the trees",
    image: "https://example.com/landscape.jpg",
  }),
).execute();
```

### With Generated Image

Generate an image first, then animate it:

```typescript
import { compose, generateImage, generateVideo, imageModel, videoModel } from "@synthome/sdk";

const execution = await compose(
  generateVideo({
    model: videoModel("bytedance/seedance-1-pro", "replicate"),
    prompt: "The waves gently rolling onto the shore",
    image: generateImage({
      model: imageModel("google/nano-banana", "fal"),
      prompt: "A beautiful sunset beach scene",
    }),
  }),
).execute();
```

The image is generated first, then automatically passed to the video model.
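A rough mental model of this chaining — illustrative only, not the SDK's actual internals: if an input is itself an operation, it is awaited first and its output URL is substituted before the parent model runs.

```typescript
// Illustrative sketch: resolving a nested operation before the parent call.
type Op = { run: () => Promise<string> };

async function resolveInput(input: string | Op): Promise<string> {
  // Plain URLs pass through; nested operations are executed first.
  return typeof input === "string" ? input : input.run();
}
```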

## Lip-Sync Video

Create talking head videos with the Fabric model:

```typescript
const execution = await compose(
  generateVideo({
    model: videoModel("veed/fabric-1.0", "fal"),
    prompt: "A professional presenter speaking",
    image: "https://example.com/portrait.jpg",
    audio: "https://example.com/speech.mp3",
  }),
).execute();
```

### With Generated Audio

```typescript
import { audioModel, compose, generateAudio, generateVideo, videoModel } from "@synthome/sdk";

const execution = await compose(
  generateVideo({
    model: videoModel("veed/fabric-1.0", "fal"),
    prompt: "A friendly presenter",
    image: "https://example.com/portrait.jpg",
    audio: generateAudio({
      model: audioModel("elevenlabs/turbo-v2.5", "elevenlabs"),
      text: "Hello! Welcome to our product demo.",
      voiceId: "EXAVITQu4vr4xnSDxMaL",
    }),
  }),
).execute();
```

## Available Models

| Model | Provider | Features |
| --- | --- | --- |
| `bytedance/seedance-1-pro` | `replicate` | Text-to-video, image-to-video |
| `minimax/video-01` | `replicate` | Text-to-video |
| `veed/fabric-1.0` | `fal` | Lip-sync, image-to-video with audio |
| `veed/fabric-1.0/fast` | `fal` | Fast lip-sync |
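The table above can also be kept as data for programmatic model selection. The `modelsWithFeature` helper below is hypothetical (not an SDK API); the entries mirror the table:

```typescript
// The model table as data, with a hypothetical feature lookup.
const VIDEO_MODELS = [
  { id: "bytedance/seedance-1-pro", provider: "replicate", features: ["text-to-video", "image-to-video"] },
  { id: "minimax/video-01", provider: "replicate", features: ["text-to-video"] },
  { id: "veed/fabric-1.0", provider: "fal", features: ["lip-sync", "image-to-video"] },
  { id: "veed/fabric-1.0/fast", provider: "fal", features: ["lip-sync"] },
];

function modelsWithFeature(feature: string): string[] {
  return VIDEO_MODELS.filter(m => m.features.includes(feature)).map(m => m.id);
}
```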

## Combining with Operations

Use generated videos with operations like `merge()`:

```typescript
import { compose, generateVideo, merge, videoModel } from "@synthome/sdk";

const execution = await compose(
  merge([
    "https://example.com/intro.mp4",
    generateVideo({
      model: videoModel("bytedance/seedance-1-pro", "replicate"),
      prompt: "Main content scene",
    }),
    generateVideo({
      model: videoModel("bytedance/seedance-1-pro", "replicate"),
      prompt: "Closing scene",
    }),
  ]),
).execute();
```

All video generations run in parallel; `merge()` then waits for them to complete before joining the clips.
