# Quickstart

Get up and running with the Synthome SDK in just a few minutes.
## Installation

Install the SDK using your preferred package manager:

```bash
npm install @synthome/sdk
# or
yarn add @synthome/sdk
# or
pnpm add @synthome/sdk
# or
bun add @synthome/sdk
```

## Setup API Keys
### 1. Synthome API Key

Get your Synthome API key from the dashboard:

```bash
export SYNTHOME_API_KEY="your-synthome-api-key"
```

### 2. Provider API Keys
Synthome orchestrates AI models across multiple providers. You need API keys for the providers you want to use.
#### Option A: Dashboard (Recommended)

Add your provider API keys in the Synthome dashboard. This keeps your keys secure and managed in one place.

#### Option B: Environment Variables

```bash
# For Replicate models
export REPLICATE_API_KEY="your-replicate-key"

# For Fal models
export FAL_KEY="your-fal-key"

# For ElevenLabs audio
export ELEVENLABS_API_KEY="your-elevenlabs-key"

# For Hume TTS
export HUME_API_KEY="your-hume-key"
```

#### Option C: Pass in Code

```typescript
const model = videoModel("bytedance/seedance-1-pro", {
  provider: "replicate",
  apiKey: "your-replicate-key",
});
```

**Priority order:** model-level API key → dashboard keys → environment variables.
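The priority order can be pictured as a simple fallback chain. The sketch below is purely illustrative — `resolveApiKey` and its parameters are not part of the SDK's public API; it only models the documented resolution order for a Replicate key.

```typescript
// Hypothetical resolver illustrating the documented order:
// model-level key -> dashboard key -> environment variable.
function resolveApiKey(opts: {
  modelKey?: string; // passed directly in code (Option C)
  dashboardKey?: string; // configured in the dashboard (Option A)
}): string | undefined {
  // Nullish coalescing walks the chain until a key is found.
  return opts.modelKey ?? opts.dashboardKey ?? process.env.REPLICATE_API_KEY;
}

process.env.REPLICATE_API_KEY = "env-key";
console.log(resolveApiKey({ modelKey: "code-key", dashboardKey: "dash-key" })); // "code-key"
console.log(resolveApiKey({ dashboardKey: "dash-key" })); // "dash-key"
console.log(resolveApiKey({})); // "env-key"
```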
## Your First Pipeline

Let's generate a simple video:

```typescript
import { compose, generateVideo, videoModel } from "@synthome/sdk";

const execution = await compose(
  generateVideo({
    model: videoModel("bytedance/seedance-1-pro", "replicate"),
    prompt: "A serene mountain landscape at sunrise, cinematic",
    duration: 5,
  }),
).execute();

console.log("Video URL:", execution.result?.url);
```

## Generate an Image
```typescript
import { compose, generateImage, imageModel } from "@synthome/sdk";

const execution = await compose(
  generateImage({
    model: imageModel("google/nano-banana", "fal"),
    prompt: "A futuristic city skyline at night",
  }),
).execute();

console.log("Image URL:", execution.result?.url);
```

## Generate Audio (Text-to-Speech)
```typescript
import { compose, generateAudio, audioModel } from "@synthome/sdk";

const execution = await compose(
  generateAudio({
    model: audioModel("elevenlabs/turbo-v2.5", "elevenlabs"),
    text: "Welcome to Synthome, the composable AI media toolkit.",
    voiceId: "EXAVITQu4vr4xnSDxMaL", // Sarah voice
  }),
).execute();

console.log("Audio URL:", execution.result?.url);
```

## Combine Multiple Operations
The real power of Synthome is composing multiple operations. You can mix direct URLs with generated media:

```typescript
import { compose, generateVideo, merge, videoModel } from "@synthome/sdk";

const execution = await compose(
  merge([
    // Use an existing video URL
    "https://example.com/intro.mp4",
    // Generate new videos
    generateVideo({
      model: videoModel("bytedance/seedance-1-pro", "replicate"),
      prompt: "Scene 1: A rocket launching into space",
    }),
    generateVideo({
      model: videoModel("bytedance/seedance-1-pro", "replicate"),
      prompt: "Scene 2: Earth from orbit",
    }),
  ]),
).execute();

console.log("Merged video:", execution.result?.url);
```

Generated videos run in parallel, and the merge waits for all inputs before combining them.
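Conceptually, this "run in parallel, then wait for everything" behavior is the same pattern as `Promise.all`. The sketch below does not use the SDK at all — `fakeGenerate` and `mergeAll` are stand-ins that only demonstrate the timing semantics.

```typescript
// Stand-in for a model call that takes `ms` milliseconds.
function fakeGenerate(label: string, ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(`${label}.mp4`), ms));
}

// Like merge(): start all inputs concurrently, resolve only when every one is ready.
function mergeAll(inputs: Promise<string>[]): Promise<string[]> {
  return Promise.all(inputs);
}

async function demo() {
  const start = Date.now();
  const clips = await mergeAll([
    fakeGenerate("scene-1", 100),
    fakeGenerate("scene-2", 100),
  ]);
  const elapsed = Date.now() - start;
  console.log(clips); // [ "scene-1.mp4", "scene-2.mp4" ]
  // Two 100 ms tasks complete in ~100 ms total, not ~200 ms.
  console.log(elapsed < 180 ? "ran in parallel" : "ran sequentially");
}

demo();
```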
## Use Webhooks (Async)

For long-running pipelines, use webhooks instead of polling:

```typescript
const execution = await compose(
  generateVideo({
    model: videoModel("bytedance/seedance-1-pro", "replicate"),
    prompt: "An epic cinematic scene",
  }),
).execute({
  webhook: "https://your-server.com/webhook",
  webhookSecret: "your-secret", // Optional: for signature verification
});

// Returns immediately with an execution ID
console.log("Execution ID:", execution.id);
```

Your webhook will receive the result when the pipeline completes.
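On the receiving side, you will typically want to verify that a delivery was actually signed with your `webhookSecret` before trusting it. The sketch below assumes an HMAC-SHA256-over-raw-body scheme and an example payload shape — both are assumptions for illustration, so check the Synthome webhook reference for the actual signing contract before relying on this.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify that a raw webhook body matches its HMAC-SHA256 signature.
// ASSUMPTION: hex-encoded HMAC of the raw body; the real header name
// and encoding may differ.
function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "utf8");
  const b = Buffer.from(signature, "utf8");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}

// Simulate an incoming delivery (payload shape is illustrative only).
const secret = "your-secret";
const rawBody = JSON.stringify({
  executionId: "exec_123",
  status: "completed",
  result: { url: "https://cdn.example.com/video.mp4" },
});
const signature = createHmac("sha256", secret).update(rawBody).digest("hex");

console.log(verifySignature(rawBody, signature, secret)); // true
console.log(verifySignature(rawBody + "tamper", signature, secret)); // false
```

Using a constant-time comparison (`timingSafeEqual`) instead of `===` avoids leaking signature bytes through timing differences.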
## Next Steps

Now that you have the basics, explore more:

- **Core Concepts**: Understand how pipelines, models, and providers work
- **Video Generation**: Deep dive into video generation options
- **Image Generation**: Deep dive into image generation options
- **Audio Generation**: Deep dive into audio generation options
- **Operations**: Learn about merge, layers, captions, and more
- **Models**: See all supported AI models