# fal-text-to-image

Generate, remix, and edit images using fal.ai's AI models. Supports text-to-image generation, image-to-image remixing, and targeted inpainting/editing.
Install via ClawdBot CLI:

```bash
clawdbot install delorenj/fal-text-to-image
```

Professional AI-powered image workflows using fal.ai's state-of-the-art models, including FLUX, Recraft V3, Imagen4, and more.
- Generate images from scratch using text prompts
- Transform existing images while preserving composition
- Targeted inpainting and masked editing

Trigger when the user asks to generate, transform, or edit images.
```bash
# Basic generation
uv run python fal-text-to-image "A cyberpunk city at sunset with neon lights"

# With specific model
uv run python fal-text-to-image -m flux-pro/v1.1-ultra "Professional headshot"

# With style reference
uv run python fal-text-to-image -i reference.jpg "Mountain landscape" -m flux-2/lora/edit
```
```bash
# Transform style while preserving composition
uv run python fal-image-remix input.jpg "Transform into oil painting"

# With strength control (0.0=original, 1.0=full transformation)
uv run python fal-image-remix photo.jpg "Anime style character" --strength 0.6

# Premium quality remix
uv run python fal-image-remix -m flux-1.1-pro image.jpg "Professional portrait"
```
```bash
# Edit with mask image (white=edit area, black=preserve)
uv run python fal-image-edit input.jpg mask.png "Replace with flowers"

# Auto-generate mask from text
uv run python fal-image-edit input.jpg --mask-prompt "sky" "Make it sunset"

# Remove objects
uv run python fal-image-edit photo.jpg mask.png "Remove object" --strength 1.0

# General editing (no mask)
uv run python fal-image-edit photo.jpg "Enhance lighting and colors"
```
The script intelligently selects the best model based on task context:
- fal-ai/flux-pro/v1.1-ultra
- fal-ai/recraft/v3/text-to-image
- fal-ai/flux-2
- fal-ai/flux-2/lora (used with the -i flag)
- fal-ai/flux-2/lora/edit
- fal-ai/imagen4/preview
- fal-ai/stable-diffusion-v35-large
- fal-ai/ideogram/v2
- fal-ai/bria/text-to-image/3.2

```
uv run python fal-text-to-image [OPTIONS] PROMPT

Arguments:
  PROMPT              Text description of the image to generate

Options:
  -m, --model TEXT    Model to use (see model list above)
  -i, --image TEXT    Path or URL to reference image for style transfer
  -o, --output TEXT   Output filename (default: generated_image.png)
  -s, --size TEXT     Image size (e.g., "1024x1024", "landscape_16_9")
  --seed INTEGER      Random seed for reproducibility
  --steps INTEGER     Number of inference steps (model-dependent)
  --guidance FLOAT    Guidance scale (higher = more prompt adherence)
  --help              Show this message and exit
```
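The -s flag accepts either explicit "WIDTHxHEIGHT" dimensions or a named preset. A minimal sketch of parsing such a value — the function name and the preset subset are illustrative assumptions, not the script's actual implementation:

```python
import re

# Illustrative subset of named size presets accepted by fal.ai models.
PRESETS = {"square_hd", "square", "landscape_16_9", "portrait_16_9",
           "landscape_4_3", "portrait_4_3"}

def parse_size(value: str):
    """Return (width, height) for "WxH" strings, or pass through a known preset name."""
    m = re.fullmatch(r"(\d+)x(\d+)", value)
    if m:
        return (int(m.group(1)), int(m.group(2)))
    if value in PRESETS:
        return value
    raise ValueError(f"Unrecognized size: {value!r}")
```

For example, `parse_size("1024x1024")` returns `(1024, 1024)`, while `parse_size("landscape_16_9")` passes the preset name through unchanged.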
Before first use, set your fal.ai API key:

```bash
export FAL_KEY="your-api-key-here"
```

Or create a .env file in the skill directory:

```
FAL_KEY=your-api-key-here
```

Get your API key from: https://fal.ai/dashboard/keys
```bash
# High-resolution headshot with the premium model
uv run python fal-text-to-image \
  -m flux-pro/v1.1-ultra \
  "Professional headshot of a business executive in modern office" \
  -s 2048x2048

# Logo with embedded text via Recraft V3
uv run python fal-text-to-image \
  -m recraft/v3/text-to-image \
  "Modern tech startup logo with text 'AI Labs' in minimalist style"

# Style transfer from a reference image
uv run python fal-text-to-image \
  -m flux-2/lora/edit \
  -i artistic_style.jpg \
  "Portrait of a woman in a garden"

# Reproducible generation with a fixed seed
uv run python fal-text-to-image \
  -m flux-2 \
  --seed 42 \
  "Futuristic cityscape with flying cars"
```
The script automatically selects the best model when -m is not specified:

- -i provided: uses flux-2/lora/edit for style transfer
- Logo or typography prompts: recraft/v3/text-to-image
- High-resolution requests: flux-pro/v1.1-ultra
- Vector or illustration prompts: recraft/v3/text-to-image
- Otherwise: flux-2 for general purpose

Generated images are saved with metadata.
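A plausible sketch of such a selection heuristic — the function name, keyword lists, and exact rules are assumptions for illustration, not the script's real logic:

```python
def pick_model(prompt: str, has_reference_image: bool = False,
               high_res: bool = False) -> str:
    """Map task context to a default fal.ai model (illustrative heuristic)."""
    text = prompt.lower()
    if has_reference_image:  # -i provided: style transfer
        return "fal-ai/flux-2/lora/edit"
    if any(w in text for w in ("logo", "typography")):
        return "fal-ai/recraft/v3/text-to-image"
    if high_res:
        return "fal-ai/flux-pro/v1.1-ultra"
    if any(w in text for w in ("vector", "illustration")):
        return "fal-ai/recraft/v3/text-to-image"
    return "fal-ai/flux-2"  # general purpose default
```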
| Problem | Solution |
|---------|----------|
| FAL_KEY not set | Export FAL_KEY environment variable or create .env file |
| Model not found | Check model name against supported list |
| Image reference fails | Ensure image path/URL is accessible |
| Generation timeout | Some models take longer; wait or try faster model |
| Rate limit error | Check fal.ai dashboard for usage limits |
Model selection tips:

- flux-2 or stable-diffusion-v35-large for general use
- flux-pro/v1.1-ultra only when high resolution is required

Available models for image-to-image remixing:
- fal-ai/flux/dev/image-to-image
- fal-ai/flux-pro
- fal-ai/flux-pro/v1.1
- fal-ai/recraft/v3/text-to-image
- fal-ai/stable-diffusion-v35-large

```
uv run python fal-image-remix [OPTIONS] INPUT_IMAGE PROMPT

Arguments:
  INPUT_IMAGE           Path or URL to source image
  PROMPT                How to transform the image

Options:
  -m, --model TEXT      Model to use (auto-selected if not specified)
  -o, --output TEXT     Output filename (default: remixed_TIMESTAMP.png)
  -s, --strength FLOAT  Transformation strength 0.0-1.0 (default: 0.75)
                        0.0 = preserve original, 1.0 = full transformation
  --guidance FLOAT      Guidance scale (default: 7.5)
  --seed INTEGER        Random seed for reproducibility
  --steps INTEGER       Number of inference steps
  --help                Show help
```
The --strength parameter controls transformation intensity:
| Strength | Effect | Use Case |
|----------|--------|----------|
| 0.3-0.5 | Subtle changes | Minor color adjustments, lighting tweaks |
| 0.5-0.7 | Moderate changes | Style hints while preserving details |
| 0.7-0.85 | Strong changes | Clear style transfer, significant transformation |
| 0.85-1.0 | Maximum changes | Complete style overhaul, dramatic transformation |
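The bands above can be scripted when picking a strength programmatically. A small hypothetical helper mirroring the table (values below 0.5 are grouped into the "subtle" band, an assumption since the table starts at 0.3):

```python
def describe_strength(strength: float) -> str:
    """Classify a --strength value into the bands from the table above."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be between 0.0 and 1.0")
    if strength < 0.5:
        return "subtle"
    if strength < 0.7:
        return "moderate"
    if strength < 0.85:
        return "strong"
    return "maximum"
```

So the default remix strength of 0.75 lands in the "strong" band.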
```bash
# Subtle artistic style (low strength)
uv run python fal-image-remix photo.jpg "Oil painting style" --strength 0.4

# Balanced transformation (default)
uv run python fal-image-remix input.jpg "Cyberpunk neon aesthetic"

# Strong transformation (high strength)
uv run python fal-image-remix portrait.jpg "Anime character" --strength 0.9

# Vector conversion
uv run python fal-image-remix -m recraft/v3 logo.png "Clean vector illustration"

# Premium quality remix
uv run python fal-image-remix -m flux-1.1-pro photo.jpg "Professional studio portrait"
```
Available models for targeted editing and inpainting:
- fal-ai/flux-2/redux
- fal-ai/flux-2/fill
- fal-ai/flux-pro-v11/fill
- fal-ai/stable-diffusion-v35-large/inpainting
- fal-ai/ideogram/v2/edit
- fal-ai/recraft/v3/svg

```
uv run python fal-image-edit [OPTIONS] INPUT_IMAGE [MASK_IMAGE] PROMPT

Arguments:
  INPUT_IMAGE           Path or URL to source image
  MASK_IMAGE            Path or URL to mask (white=edit, black=preserve) [optional]
  PROMPT                How to edit the masked region

Options:
  -m, --model TEXT      Model to use (auto-selected if not specified)
  -o, --output TEXT     Output filename (default: edited_TIMESTAMP.png)
  --mask-prompt TEXT    Generate mask from text (no mask image needed)
  -s, --strength FLOAT  Edit strength 0.0-1.0 (default: 0.95)
  --guidance FLOAT      Guidance scale (default: 7.5)
  --seed INTEGER        Random seed for reproducibility
  --steps INTEGER       Number of inference steps
  --help                Show help
```
The --strength parameter controls edit intensity:
| Strength | Effect | Use Case |
|----------|--------|----------|
| 0.5-0.7 | Subtle edits | Minor touch-ups, color adjustments |
| 0.7-0.9 | Moderate edits | Clear modifications while blending naturally |
| 0.9-1.0 | Strong edits | Complete replacement, object removal |
Mask images define edit regions: white pixels mark areas to edit; black pixels mark areas to preserve.

Create masks in any image editor, or auto-generate one from text with the --mask-prompt flag.

```bash
# Edit with mask image
uv run python fal-image-edit photo.jpg mask.png "Replace with beautiful garden"

# Auto-generate mask from text
uv run python fal-image-edit landscape.jpg --mask-prompt "sky" "Make it sunset with clouds"

# Remove objects
uv run python fal-image-edit photo.jpg object_mask.png "Remove completely" --strength 1.0

# Seamless object insertion
uv run python fal-image-edit room.jpg region_mask.png "Add modern sofa" --strength 0.85

# General editing (no mask)
uv run python fal-image-edit -m flux-2/redux photo.jpg "Enhance lighting and saturation"

# Premium quality inpainting
uv run python fal-image-edit -m flux-pro-v11/fill image.jpg mask.png "Professional portrait background"

# Artistic modification
uv run python fal-image-edit -m stable-diffusion-v35/inpainting photo.jpg mask.png "Van Gogh style"
```
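The white=edit, black=preserve convention can be sanity-checked programmatically. A stdlib-only sketch that treats a grayscale mask as a 2D list of 0–255 values and finds the bounding box of the edit region (the helper name is illustrative, not part of the skill):

```python
def edit_bounding_box(mask, threshold: int = 128):
    """Return (left, top, right, bottom) of pixels >= threshold (white = edit),
    or None if the mask selects nothing. `mask` is a list of rows of 0-255 values."""
    xs = [x for row in mask for x, v in enumerate(row) if v >= threshold]
    ys = [y for y, row in enumerate(mask) if any(v >= threshold for v in row)]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# A 4x4 mask whose bottom-right 2x2 block is the edit region.
mask = [
    [0, 0,   0,   0],
    [0, 0,   0,   0],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
```

Here `edit_bounding_box(mask)` returns `(2, 2, 3, 3)`; an all-black mask returns `None`, a quick way to catch an accidentally inverted mask before spending API credits.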
```
fal-text-to-image/
├── SKILL.md              # This file
├── README.md             # Quick reference
├── pyproject.toml        # Dependencies (uv)
├── fal-text-to-image     # Text-to-image generation script
├── fal-image-remix       # Image-to-image remixing script
├── fal-image-edit        # Image editing/inpainting script
├── references/
│   └── model-comparison.md  # Detailed model benchmarks
└── outputs/              # Generated images (created on first run)
```
Managed via uv:

- fal-client: Official fal.ai Python SDK
- python-dotenv: Environment variable management
- pillow: Image handling and EXIF metadata
- click: CLI interface

Tip: use --seed for consistent results during iteration.

```bash
# 1. Generate base image
uv run python fal-text-to-image -m flux-2 "Modern office space, minimalist" -o base.png

# 2. Remix to different style
uv run python fal-image-remix base.png "Cyberpunk aesthetic with neon" -o styled.png

# 3. Edit specific region
uv run python fal-image-edit styled.png --mask-prompt "desk" "Add holographic display"
```
```bash
# Generate with seed for reproducibility
uv run python fal-text-to-image "Mountain landscape" --seed 42 -o v1.png

# Remix with same seed, different style
uv run python fal-image-remix v1.png "Oil painting style" --seed 42 -o v2.png

# Fine-tune with editing
uv run python fal-image-edit v2.png --mask-prompt "sky" "Golden hour lighting" --seed 42
```
```bash
# 1. Remove unwanted object
uv run python fal-image-edit photo.jpg object_mask.png "Remove" --strength 1.0 -o removed.png

# 2. Fill with new content
uv run python fal-image-edit removed.png region_mask.png "Beautiful flowers" --strength 0.9
```
| Problem | Solution | Tool |
|---------|----------|------|
| FAL_KEY not set | Export FAL_KEY or create .env file | All |
| Model not found | Check model name in documentation | All |
| Image upload fails | Check file exists and is readable | Remix, Edit |
| Mask not working | Verify mask is grayscale PNG (white=edit) | Edit |
| Transformation too strong | Reduce --strength value | Remix, Edit |
| Transformation too weak | Increase --strength value | Remix, Edit |
| Mask-prompt not precise | Create manual mask in image editor | Edit |
| Generation timeout | Try faster model or wait longer | All |
| Rate limit error | Check fal.ai dashboard usage limits | All |
Generated Mar 1, 2026
Marketing teams can generate high-quality, brand-aligned images for social media, ads, and websites using text prompts, reducing reliance on stock photos or designers. This enables rapid A/B testing of visual concepts and creation of custom graphics for campaigns, such as product mockups or lifestyle imagery.
E-commerce businesses can create realistic product images or variations from descriptions, aiding in prototyping or showcasing items before physical production. The image remix mode allows updating existing product photos with new styles or backgrounds, while editing can remove unwanted elements or enhance details.
Designers can quickly generate logos, posters, and vector-style artwork using models like Recraft V3 or Ideogram for precise typography. This skill supports iterative design by remixing drafts or editing specific areas, streamlining the creation of brand assets and visual identities.
Content creators and publishers can produce custom illustrations, book covers, or article images from text, enhancing storytelling without extensive resources. The high-resolution capabilities ensure professional quality for print or digital media, with style transfer options to match existing aesthetics.
Architects and interior designers can generate realistic renderings of spaces from descriptions or remix existing plans with different styles, such as modern or rustic. Targeted editing allows modifications like changing furniture or lighting, aiding in client presentations and concept development.
Offer a subscription-based platform where teams access the skill via an API or web interface, with tiered pricing based on usage (e.g., image generations per month). This model targets agencies and enterprises needing scalable, high-quality image generation for ongoing projects, with premium support and custom model fine-tuning.
Provide the skill as an API that developers integrate into their applications, charging per image generation or editing task. This appeals to startups and tech companies building AI-powered tools, with revenue generated from API calls and potential volume discounts for high-traffic users.
License the skill as a white-label product for marketing or design agencies to rebrand and offer to their clients. This includes customization options and dedicated support, generating revenue through one-time licensing fees or ongoing royalties based on client usage.
Integration Tip
Set up the FAL_KEY environment variable in your deployment environment to authenticate API calls seamlessly, ensuring the skill runs without manual key entry.