eachlabs-video-generation

Generate new videos from text prompts, images, or reference inputs using EachLabs AI models. Supports text-to-video, image-to-video, transitions, motion control, talking head, and avatar generation. Use when the user wants to create new video content. For editing existing videos, see eachlabs-video-edit.
Install via ClawdBot CLI:

clawdbot install eftalyurtseven/eachlabs-video-generation

Generate new videos from text prompts, images, or reference inputs using 165+ AI models via the EachLabs Predictions API. For editing existing videos (upscaling, lip sync, extension, subtitles), see the eachlabs-video-edit skill.
Header: X-API-Key: <your-api-key>
Set the EACHLABS_API_KEY environment variable or pass it directly. Get your key at eachlabs.ai.
Create a prediction:

curl -X POST https://api.eachlabs.ai/v1/prediction \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-d '{
"model": "pixverse-v5-6-text-to-video",
"version": "0.0.1",
"input": {
"prompt": "A golden retriever running through a meadow at sunset, cinematic slow motion",
"resolution": "720p",
"duration": "5",
"aspect_ratio": "16:9"
}
}'
Check prediction status:

curl https://api.eachlabs.ai/v1/prediction/{prediction_id} \
-H "X-API-Key: $EACHLABS_API_KEY"
Poll until status is "success" or "failed". The output video URL is in the response.
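The polling step above can be sketched in Python. This is a minimal sketch, assuming the status-check response is JSON with a `status` field that eventually becomes "success" or "failed", as described above; everything else (interval, timeout) is a client-side choice, not part of the API.

```python
import json
import time
import urllib.request

API_BASE = "https://api.eachlabs.ai/v1"
TERMINAL_STATUSES = {"success", "failed"}


def is_terminal(status: str) -> bool:
    """A prediction is finished once it reports success or failed."""
    return status in TERMINAL_STATUSES


def get_prediction(prediction_id: str, api_key: str) -> dict:
    """Fetch the current prediction state (network call)."""
    req = urllib.request.Request(
        f"{API_BASE}/prediction/{prediction_id}",
        headers={"X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def poll_prediction(prediction_id: str, api_key: str,
                    interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll until the prediction reaches a terminal status or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_prediction(prediction_id, api_key)
        if is_terminal(result.get("status", "")):
            return result
        time.sleep(interval)
    raise TimeoutError(f"prediction {prediction_id} did not finish in {timeout}s")
```

Video generation can take minutes for premium models, so a generous timeout and a polling interval of a few seconds are reasonable defaults.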
Text-to-video models:

| Model | Slug | Best For |
|-------|------|----------|
| Pixverse v5.6 | pixverse-v5-6-text-to-video | General purpose, audio generation |
| XAI Grok Imagine | xai-grok-imagine-text-to-video | Fast creative |
| Kandinsky 5 Pro | kandinsky5-pro-text-to-video | Artistic, high quality |
| Seedance v1.5 Pro | seedance-v1-5-pro-text-to-video | Cinematic quality |
| Wan v2.6 | wan-v2-6-text-to-video | Long/narrative content |
| Kling v2.6 Pro | kling-v2-6-pro-text-to-video | Motion control |
| Pika v2.2 | pika-v2-2-text-to-video | Stylized, effects |
| Minimax Hailuo V2.3 Pro | minimax-hailuo-v2-3-pro-text-to-video | High fidelity |
| Sora 2 Pro | sora-2-text-to-video-pro | Premium quality |
| Veo 3 | veo-3 | Google's best quality |
| Veo 3.1 | veo3-1-text-to-video | Latest Google model |
| LTX v2 Fast | ltx-v-2-text-to-video-fast | Fastest generation |
| Moonvalley Marey | moonvalley-marey-text-to-video | Cinematic style |
| Ovi | ovi-text-to-video | General purpose |
Image-to-video models:

| Model | Slug | Best For |
|-------|------|----------|
| Pixverse v5.6 | pixverse-v5-6-image-to-video | General purpose |
| XAI Grok Imagine | xai-grok-imagine-image-to-video | Creative edits |
| Wan v2.6 Flash | wan-v2-6-image-to-video-flash | Fastest |
| Wan v2.6 | wan-v2-6-image-to-video | High quality |
| Seedance v1.5 Pro | seedance-v1-5-pro-image-to-video | Cinematic |
| Kandinsky 5 Pro | kandinsky5-pro-image-to-video | Artistic |
| Kling v2.6 Pro I2V | kling-v2-6-pro-image-to-video | Best Kling quality |
| Kling O1 | kling-o1-image-to-video | Latest Kling model |
| Pika v2.2 I2V | pika-v2-2-image-to-video | Effects, PikaScenes |
| Minimax Hailuo V2.3 Pro | minimax-hailuo-v2-3-pro-image-to-video | High fidelity |
| Sora 2 I2V | sora-2-image-to-video | Premium quality |
| Veo 3.1 I2V | veo3-1-image-to-video | Google's latest |
| Runway Gen4 Turbo | gen4-turbo | Fast, film quality |
| Veed Fabric 1.0 | veed-fabric-1-0 | Social media |
Transition and effect models:

| Model | Slug | Best For |
|-------|------|----------|
| Pixverse v5.6 Transition | pixverse-v5-6-transition | Smooth transitions |
| Pika v2.2 PikaScenes | pika-v2-2-pikascenes | Scene effects |
| Pixverse v4.5 Effect | pixverse-v4-5-effect | Video effects |
| Veo 3.1 First Last Frame | veo3-1-first-last-frame-to-video | Interpolation |
Motion control and reference models:

| Model | Slug | Best For |
|-------|------|----------|
| Kling v2.6 Pro Motion | kling-v2-6-pro-motion-control | Pro motion control |
| Kling v2.6 Standard Motion | kling-v2-6-standard-motion-control | Standard motion |
| Motion Fast | motion-fast | Fast motion transfer |
| Motion Video 14B | motion-video-14b | High quality motion |
| Wan v2.6 R2V | wan-v2-6-reference-to-video | Reference-based |
| Kling O1 Reference I2V | kling-o1-reference-image-to-video | Reference-based |
Talking head and avatar models:

| Model | Slug | Best For |
|-------|------|----------|
| Bytedance Omnihuman v1.5 | bytedance-omnihuman-v1-5 | Full body animation |
| Creatify Aurora | creatify-aurora | Audio-driven avatar |
| Infinitalk I2V | infinitalk-image-to-video | Image talking head |
| Infinitalk V2V | infinitalk-video-to-video | Video talking head |
| Sync Lipsync v2 Pro | sync-lipsync-v2-pro | Lip sync |
| Kling Avatar v2 Pro | kling-avatar-v2-pro | Pro avatar |
| Kling Avatar v2 Standard | kling-avatar-v2-standard | Standard avatar |
| Echomimic V3 | echomimic-v3 | Face animation |
| Stable Avatar | stable-avatar | Stable talking head |
Recommended workflow:

1. GET https://api.eachlabs.ai/v1/model?slug=<model-slug> — validates the model exists and returns the request_schema with exact input parameters. Always do this before creating a prediction to ensure correct inputs.
2. POST https://api.eachlabs.ai/v1/prediction with the model slug, version "0.0.1", and input parameters matching the schema.
3. Poll GET https://api.eachlabs.ai/v1/prediction/{id} until status is "success" or "failed".

Image-to-video example:

curl -X POST https://api.eachlabs.ai/v1/prediction \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-d '{
"model": "wan-v2-6-image-to-video-flash",
"version": "0.0.1",
"input": {
"image_url": "https://example.com/photo.jpg",
"prompt": "The person turns to face the camera and smiles",
"duration": "5",
"resolution": "1080p"
}
}'
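Step 1 of the workflow fetches the model's request_schema so inputs can be checked before submitting. A minimal sketch of that check in Python, assuming the schema follows a JSON-Schema-like shape with `properties` and `required` keys — the exact shape returned by GET /v1/model?slug= may differ, so this is illustrative only:

```python
def check_input_against_schema(input_params: dict, request_schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the input looks valid.

    Assumes a JSON-Schema-like request_schema with 'properties' and
    'required' keys; the real schema format may differ.
    """
    problems = []
    properties = request_schema.get("properties", {})
    # Every required field must be present.
    for name in request_schema.get("required", []):
        if name not in input_params:
            problems.append(f"missing required field: {name}")
    # No field outside the declared properties should be sent.
    for name in input_params:
        if name not in properties:
            problems.append(f"unknown field: {name}")
    return problems
```

Running this before the POST catches typos like `image` instead of `image_url` locally, rather than waiting for the prediction to fail server-side.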
Transition example:

curl -X POST https://api.eachlabs.ai/v1/prediction \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-d '{
"model": "pixverse-v5-6-transition",
"version": "0.0.1",
"input": {
"prompt": "Smooth morphing transition between the two images",
"first_image_url": "https://example.com/start.jpg",
"end_image_url": "https://example.com/end.jpg",
"duration": "5",
"resolution": "720p"
}
}'
Motion control example:

curl -X POST https://api.eachlabs.ai/v1/prediction \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-d '{
"model": "kling-v2-6-pro-motion-control",
"version": "0.0.1",
"input": {
"image_url": "https://example.com/character.jpg",
"video_url": "https://example.com/dance-reference.mp4",
"character_orientation": "video"
}
}'
Talking avatar example:

curl -X POST https://api.eachlabs.ai/v1/prediction \
-H "Content-Type: application/json" \
-H "X-API-Key: $EACHLABS_API_KEY" \
-d '{
"model": "bytedance-omnihuman-v1-5",
"version": "0.0.1",
"input": {
"image_url": "https://example.com/portrait.jpg",
"audio_url": "https://example.com/speech.mp3",
"resolution": "1080p"
}
}'
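Every example above posts the same envelope: a model slug, version "0.0.1", and an input object whose fields depend on the model. A small Python sketch of building that payload — field names are taken directly from the curl examples; nothing beyond them is assumed:

```python
import json


def build_prediction_payload(model_slug: str, **inputs) -> str:
    """Serialize the shared request envelope used by all the examples above."""
    return json.dumps({
        "model": model_slug,
        "version": "0.0.1",
        "input": inputs,
    })


# Example: the talking-avatar request body, built programmatically.
body = build_prediction_payload(
    "bytedance-omnihuman-v1-5",
    image_url="https://example.com/portrait.jpg",
    audio_url="https://example.com/speech.mp3",
    resolution="1080p",
)
```

Keeping the envelope in one helper means only the model slug and input fields change per request, which matches how the curl examples differ from one another.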
See references/MODELS.md for complete parameter details for each model.
Generated Mar 1, 2026
Use cases:

Marketers and influencers can generate short, engaging video clips for platforms like TikTok, Instagram Reels, and YouTube Shorts. Using text-to-video models like Pixverse v5.6 or LTX v2 Fast, they can quickly produce branded or viral content without filming equipment, ideal for campaigns and trends.
Online retailers can create dynamic video demonstrations of products from static images using image-to-video models like Wan v2.6 Flash. This enhances product listings with animated visuals, showing items in use to boost engagement and sales on websites like Amazon or Shopify.
Educators and trainers can generate instructional videos from text prompts or images, such as explaining concepts with animated visuals. Models like Seedance v1.5 Pro offer cinematic quality for professional e-learning courses, making content more interactive and accessible for students.
Filmmakers and animators can prototype scenes or storyboards using text-to-video models like Sora 2 Pro for high-quality previews. This allows for rapid iteration of creative ideas, saving time and resources in pre-production phases for movies, games, or animations.
Businesses can create talking head videos or avatar-based presentations for training modules using models like Kling Avatar v2 Pro. This personalizes internal communications, enabling HR teams to produce consistent, engaging videos for onboarding or policy updates without live filming.
Monetization ideas:

Offer a platform where users pay a monthly fee to access the EachLabs API for video generation, with tiered plans based on usage limits and model access. This provides recurring revenue and scalability for businesses integrating AI video into their workflows.
License the EachLabs skill to other companies, such as marketing agencies or software providers, who rebrand it as their own video generation tool. This generates upfront licensing fees and ongoing support contracts, expanding market reach through partnerships.
Charge users based on the number of video generations or compute time, with pricing per prediction or minute of output. This appeals to developers and small businesses with variable needs, ensuring cost-effectiveness and flexibility for occasional users.
💬 Integration Tip
Always check the model schema via the GET endpoint before making predictions to ensure correct input parameters and avoid errors in video generation workflows.