alicloud-ai-video-wan-video

Generate videos with the Model Studio DashScope SDK using the wan2.6-i2v-flash model. Use when implementing or documenting video.generate requests/responses, mapping prompt/negative_prompt/duration/fps/size/seed/reference_image/motion_strength, or integrating video generation into the video-agent pipeline.
Install via ClawdBot CLI:

clawdbot install cinience/alicloud-ai-video-wan-video

Category: provider
Provide consistent video generation behavior for the video-agent pipeline by standardizing video.generate inputs/outputs and using the DashScope SDK (Python) with the exact model name.
Use ONLY this exact model string:

wan2.6-i2v-flash

Do not add date suffixes or aliases.
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials (env takes precedence).

Request fields:
- prompt (string, required)
- negative_prompt (string, optional)
- duration (number, required): seconds
- fps (number, required)
- size (string, required), e.g. 1280*720
- seed (int, optional)
- reference_image (string | bytes, required for wan2.6-i2v-flash)
- motion_strength (number, optional)

Response fields:
- video_url (string)
- duration (number)
- fps (number)
- seed (int)

Video generation is usually asynchronous. Expect a task ID and poll until completion.
Note: wan2.6-i2v-flash requires an input image; map reference_image to img_url.
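The request shape above can be checked before calling the SDK. A minimal validator sketch, assuming the field names and required flags listed above; validate_request and its error messages are illustrative, not part of the SDK:

```python
import re

# Required per the request-field list above (reference_image is
# required specifically for the wan2.6-i2v-flash i2v model).
REQUIRED = ("prompt", "duration", "fps", "size", "reference_image")

def validate_request(req: dict) -> list[str]:
    """Return a list of problems; an empty list means the request is valid."""
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in req]
    size = req.get("size")
    if size is not None and not re.fullmatch(r"\d+\*\d+", str(size)):
        errors.append("size must be width*height, e.g. 1280*720")
    return errors
```

Running the validator up front turns the server-side "Field required: input.img_url" failure into an immediate, local error.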
import os

from dashscope import VideoSynthesis

# Prefer env var for auth: export DASHSCOPE_API_KEY=...
# Or use ~/.alibabacloud/credentials with dashscope_api_key under [default].

def generate_video(req: dict) -> dict:
    payload = {
        "model": "wan2.6-i2v-flash",
        "prompt": req["prompt"],
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration", 4),
        "fps": req.get("fps", 24),
        "size": req.get("size", "1280*720"),
        "seed": req.get("seed"),
        "motion_strength": req.get("motion_strength"),
        "api_key": os.getenv("DASHSCOPE_API_KEY"),
    }
    # Drop optional fields the caller did not supply.
    payload = {k: v for k, v in payload.items() if v is not None}
    if req.get("reference_image"):
        # DashScope expects img_url for i2v models; local files are auto-uploaded.
        payload["img_url"] = req["reference_image"]
    response = VideoSynthesis.call(**payload)
    # Some SDK versions require polling for the final result.
    # If a task_id is returned, poll until status is SUCCEEDED.
    result = (response.output.get("results") or [None])[0]
    return {
        "video_url": result.get("url") if result else None,
        "duration": response.output.get("duration"),
        "fps": response.output.get("fps"),
        "seed": response.output.get("seed"),
    }
import os

from dashscope import VideoSynthesis

# req is the same request dict used in the synchronous example above.
task = VideoSynthesis.async_call(
    model="wan2.6-i2v-flash",
    prompt=req["prompt"],
    img_url=req["reference_image"],
    duration=req.get("duration", 4),
    fps=req.get("fps", 24),
    size=req.get("size", "1280*720"),
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)
final = VideoSynthesis.wait(task)
video_url = final.output.get("video_url")
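If you need finer control than VideoSynthesis.wait (custom intervals, timeouts, logging), the polling step can be sketched generically. This helper takes an injectable fetch callable, e.g. a thin wrapper around VideoSynthesis.fetch that returns the task's output dict; SUCCEEDED is the status named above, while FAILED and CANCELED are assumed additional terminal states:

```python
import time

TERMINAL = {"SUCCEEDED", "FAILED", "CANCELED"}  # assumed terminal statuses

def poll_until_done(fetch, task_id, interval=5.0, timeout=600.0):
    """Call fetch(task_id) until the status is terminal or the timeout expires.

    fetch must return a dict containing a "task_status" key.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = fetch(task_id)
        if resp.get("task_status") in TERMINAL:
            return resp
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Keeping the fetch callable injectable also makes the loop trivial to unit-test with a stub, without hitting the API.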
Cache/dedupe key fields: (prompt, negative_prompt, duration, fps, size, seed, reference_image hash, motion_strength).

reference_image can be a URL or local path; the SDK auto-uploads local files.

If the API returns "Field required: input.img_url", the reference image is missing or not mapped.

size uses width*height format (e.g. 1280*720).

Videos are written to output/ai-video-wan-video/videos/ unless overridden via OUTPUT_DIR.

Supports wan2.6-i2v-flash only.

See references/api_reference.md for DashScope SDK mapping and async handling notes, and references/sources.md for sources.

Generated Mar 1, 2026
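The cache key fields above can be folded into one stable hash. A sketch, assuming cache_key is a hypothetical helper (not part of the SDK) and that the reference image is hashed by content:

```python
import hashlib
import json

# Fields from the cache-key list above; the reference image is
# represented by a SHA-256 of its bytes rather than its path/URL.
CACHE_FIELDS = ("prompt", "negative_prompt", "duration", "fps",
                "size", "seed", "motion_strength")

def cache_key(req: dict, reference_image_bytes: bytes = b"") -> str:
    """Deterministic key for a video.generate request (hypothetical helper)."""
    payload = {f: req.get(f) for f in CACHE_FIELDS}
    payload["reference_image_sha256"] = hashlib.sha256(
        reference_image_bytes).hexdigest()
    # sort_keys makes the serialization order-independent and stable.
    blob = json.dumps(payload, sort_keys=True, default=str).encode()
    return hashlib.sha256(blob).hexdigest()
```

Identical requests then map to identical keys, so a repeated generation can be served from cache instead of a new API call.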
Generate short videos showcasing products in action from static images, such as a blender mixing ingredients or a drone flying. This enhances product listings with dynamic content, improving engagement and conversion rates.
Create promotional videos from brand images and text prompts, allowing for rapid prototyping of ads without extensive filming. Useful for social media ads or digital signage to convey brand messages visually.
Produce instructional videos from diagrams or photos, such as animating scientific processes or historical events. Helps educators and e-learning platforms develop engaging visual aids for students.
Generate short clips for platforms like TikTok or Instagram from user-uploaded images and creative prompts. Enables content creators to produce unique, AI-driven videos quickly for viral trends.
Animate static property photos into brief videos showing room transitions or exterior views. Provides potential buyers with a more immersive experience, complementing traditional photo galleries.
Offer a cloud-based platform where users pay monthly or annual fees to access video generation tools via API. Includes tiered pricing based on usage limits, such as number of videos or resolution options.
Charge customers per video generation request, with pricing based on factors like video duration or quality. Attracts occasional users or startups who prefer flexible, on-demand costs without long-term commitments.
License the video generation technology to other companies for embedding into their own products, such as editing software or marketing platforms. Includes customization and support services for seamless integration.
💬 Integration Tip
Ensure proper handling of asynchronous video generation with polling to avoid blocking applications, and cache results to optimize performance and reduce API costs.
Extract frames or short clips from videos using ffmpeg.
Download videos, audio, subtitles, and clean paragraph-style transcripts from YouTube and any other yt-dlp supported site. Use when asked to “download this video”, “save this clip”, “rip audio”, “get subtitles”, “get transcript”, or to troubleshoot yt-dlp/ffmpeg and formats/playlists.
Generate SRT subtitles from video/audio with translation support. Transcribes Hebrew (ivrit.ai) and English (whisper), translates between languages, burns subtitles into video. Use for creating captions, transcripts, or hardcoded subtitles for WhatsApp/social media.
Create AI videos with optimized prompts, motion control, and platform-ready output.
Automatically logs in to a Douyin account and uploads and publishes videos to the Douyin creator platform, with support for video tag management and login-status checks.
AI video generation workflow on Volcengine. Use when users need text-to-video, image-to-video, generation parameter tuning, or async task troubleshooting for video jobs.