# eachlabs-video-edit

Edit, transform, extend, upscale, and enhance videos using EachLabs AI models. Supports lip sync, video translation, subtitle generation, audio merging, style transfer, and video extension. Use when the user wants to edit or transform existing video content.
Install via the ClawdBot CLI:

```bash
clawdbot install eftalyurtseven/eachlabs-video-edit
```

Edit, transform, and enhance existing videos using 25+ AI models via the EachLabs Predictions API.
## Authentication

Every request is authenticated with the header `X-API-Key: <your-api-key>`. Set the EACHLABS_API_KEY environment variable; get your key at eachlabs.ai.
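A typical one-time shell setup might look like this (the key value below is a placeholder, not a real key format):

```shell
# Export the key so the curl examples in this document can read it.
# Get a real key at eachlabs.ai; add this line to ~/.bashrc to persist it.
export EACHLABS_API_KEY="your-api-key-here"
```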
## Supported Models

### Video Extension

| Model | Slug | Best For |
|-------|------|----------|
| Veo 3.1 Extend | veo3-1-extend-video | Best quality extension |
| Veo 3.1 Fast Extend | veo3-1-fast-extend-video | Fast extension |
| PixVerse v5 Extend | pixverse-v5-extend | PixVerse extension |
| PixVerse v4.5 Extend | pixverse-v4-5-extend | Older PixVerse extension |
### Lip Sync

| Model | Slug | Best For |
|-------|------|----------|
| Sync Lipsync v2 Pro | sync-lipsync-v2-pro | Best lip sync quality |
| PixVerse Lip Sync | pixverse-lip-sync | PixVerse lip sync |
| LatentSync | latentsync | Open-source lip sync |
| Video Retalking | video-retalking | Audio-based lip sync |
### Video Editing & Transformation

| Model | Slug | Best For |
|-------|------|----------|
| Runway Gen4 Aleph | runway-gen4-aleph | Video transformation |
| Kling O1 Video Edit | kling-o1-video-to-video-edit | AI video editing |
| Kling O1 V2V Reference | kling-o1-video-to-video-reference | Reference-based edit |
| ByteDance Video Stylize | bytedance-video-stylize | Style transfer |
| Wan v2.2 Animate Move | wan-v2-2-14b-animate-move | Motion animation |
| Wan v2.2 Animate Replace | wan-v2-2-14b-animate-replace | Object replacement |
### Upscaling & Reframing

| Model | Slug | Best For |
|-------|------|----------|
| Topaz Upscale Video | topaz-upscale-video | Best quality upscale |
| Luma Ray 2 Video Reframe | luma-dream-machine-ray-2-video-reframe | Video reframing |
| Luma Ray 2 Flash Reframe | luma-dream-machine-ray-2-flash-video-reframe | Fast reframing |
### Audio & Subtitles

| Model | Slug | Best For |
|-------|------|----------|
| FFmpeg Merge Audio Video | ffmpeg-api-merge-audio-video | Merge audio track |
| MMAudio V2 | mm-audio-v-2 | Add audio to video |
| MMAudio | mmaudio | Add audio to video |
| Auto Subtitle | auto-subtitle | Generate subtitles |
| Merge Videos | merge-videos | Concatenate videos |
### Translation

| Model | Slug | Best For |
|-------|------|----------|
| Heygen Video Translate | heygen-video-translate | Translate video speech |
### Motion & Talking Head

| Model | Slug | Best For |
|-------|------|----------|
| Motion Fast | motion-fast | Fast motion transfer |
| Infinitalk V2V | infinitalk-video-to-video | Talking head from video |
### Face Swap

| Model | Slug | Best For |
|-------|------|----------|
| Faceswap Video | faceswap-video | Swap face in video |
## Workflow

1. `GET https://api.eachlabs.ai/v1/model?slug=<slug>` validates that the model exists and returns the `request_schema` with the exact input parameters. Always do this before creating a prediction to ensure correct inputs.
2. `POST https://api.eachlabs.ai/v1/prediction` with the model slug, version "0.0.1", and an input matching the schema.
3. Poll `GET https://api.eachlabs.ai/v1/prediction/{id}` until status is "success" or "failed".

## Examples

Extend a video:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "veo3-1-extend-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "prompt": "Continue the scene with the camera slowly pulling back"
    }
  }'
```
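The create-then-poll loop can be sketched in Python with only the standard library. This is a minimal sketch: it assumes the create response includes an `id` field and that poll responses carry a top-level `status` field; check the actual API responses before relying on these names. The HTTP call is injectable so the loop can be exercised without a network.

```python
import json
import time
import urllib.request

API_BASE = "https://api.eachlabs.ai/v1"

def build_payload(model: str, inputs: dict) -> dict:
    """Assemble the prediction request body (version is always "0.0.1")."""
    return {"model": model, "version": "0.0.1", "input": inputs}

def create_and_poll(api_key, model, inputs, fetch=None, interval=5):
    """Create a prediction, then poll until status is "success" or "failed".

    `fetch(method, url, body)` abstracts the HTTP call so the loop can be
    tested offline; by default it sends real requests with urllib.
    """
    if fetch is None:
        def fetch(method, url, body=None):
            data = json.dumps(body).encode() if body is not None else None
            req = urllib.request.Request(url, data=data, method=method, headers={
                "Content-Type": "application/json",
                "X-API-Key": api_key,
            })
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

    created = fetch("POST", f"{API_BASE}/prediction", build_payload(model, inputs))
    prediction_id = created["id"]  # assumed field name; verify against the real response
    while True:
        result = fetch("GET", f"{API_BASE}/prediction/{prediction_id}")
        if result.get("status") in ("success", "failed"):
            return result
        time.sleep(interval)
```

The same loop works for any model in the tables above; only the slug and input fields change.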
Lip sync to new audio:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "sync-lipsync-v2-pro",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/talking-head.mp4",
      "audio_url": "https://example.com/new-audio.mp3"
    }
  }'
```
Generate subtitles:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "auto-subtitle",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4"
    }
  }'
```
Merge an audio track onto a video:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "ffmpeg-api-merge-audio-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "audio_url": "https://example.com/music.mp3",
      "start_offset": 0
    }
  }'
```
Upscale a low-resolution video:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "topaz-upscale-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/low-res-video.mp4"
    }
  }'
```
See references/MODELS.md for complete parameter details for each model.
Generated Mar 1, 2026
## Use Cases

Content creators and marketers can use this skill to quickly edit and enhance videos for platforms like Instagram, TikTok, and YouTube. For example, they can extend short clips, add subtitles for accessibility, or sync lip movements to new audio tracks, improving engagement and reach.
Educators and e-learning platforms can leverage this skill to translate instructional videos into multiple languages, generate accurate subtitles, and merge audio tracks for clarity. This makes content more accessible to diverse audiences and reduces production time.
Independent filmmakers and studios can utilize AI models for tasks like upscaling low-resolution footage, applying style transfers for artistic effects, or extending scenes seamlessly. This enhances production quality without extensive manual editing.
Businesses can create professional training videos by adding subtitles, merging audio with video, and using lip sync for localized versions. This ensures clear communication and consistency across global teams, improving training effectiveness.
## Monetization Ideas

Offer a subscription-based platform where users pay monthly or annually to access the video editing AI models. This provides recurring revenue and can include tiered plans based on usage limits, model access, or priority support.
Charge users per API call or processing minute for each video edit task, such as lip sync or upscaling. This model appeals to developers and businesses with variable needs, allowing them to scale costs with usage.
License the skill to large enterprises or media companies for integration into their internal tools or customer-facing platforms. This generates high-value contracts and can include customization and dedicated support services.
## 💬 Integration Tip
Always check the model schema via the API before making predictions to ensure correct input parameters, and set up environment variables for API keys to streamline authentication.
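The schema check from this tip can be sketched as a small validator. This assumes the `request_schema` returned by the model endpoint is JSON-Schema-like, with `properties` and `required` keys; adjust to the actual response shape.

```python
def check_inputs(request_schema: dict, inputs: dict) -> list:
    """Return a list of problems with `inputs` against a JSON-Schema-like
    request_schema (assumed shape: {"properties": {...}, "required": [...]})."""
    problems = []
    properties = request_schema.get("properties", {})
    # Every required parameter must be present.
    for name in request_schema.get("required", []):
        if name not in inputs:
            problems.append(f"missing required input: {name}")
    # No parameter outside the schema should be sent.
    for name in inputs:
        if name not in properties:
            problems.append(f"unknown input: {name}")
    return problems
```

Running this before `POST /v1/prediction` turns a failed prediction into an immediate, local error message.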
## Related Skills

- Extract frames or short clips from videos using ffmpeg.
- Download videos, audio, subtitles, and clean paragraph-style transcripts from YouTube and any other yt-dlp-supported site. Use when asked to "download this video", "save this clip", "rip audio", "get subtitles", "get transcript", or to troubleshoot yt-dlp/ffmpeg and formats/playlists.
- Generate SRT subtitles from video/audio with translation support. Transcribes Hebrew (ivrit.ai) and English (Whisper), translates between languages, and burns subtitles into video. Use for creating captions, transcripts, or hardcoded subtitles for WhatsApp/social media.
- Create AI videos with optimized prompts, motion control, and platform-ready output.
- Automatically log in to a Douyin account, upload and publish videos to the Douyin creator platform, with support for video tag management and login-status checks.
- AI video generation workflow on Volcengine. Use when users need text-to-video, image-to-video, generation-parameter tuning, or async task troubleshooting for video jobs.