flaresolverr
Bypass Cloudflare protection by routing requests through FlareSolverr’s browser API, handling challenges and returning page content, cookies, and headers.
Install via ClawdBot CLI:
clawdbot install Dolverin/flaresolverr
Use FlareSolverr to bypass Cloudflare protection when direct curl requests fail with a 403 or a Cloudflare challenge page.
docker run -d --name flaresolverr -p 8191:8191 ghcr.io/flaresolverr/flaresolverr:latest
export FLARESOLVERR_URL="http://localhost:8191"
curl -s "$FLARESOLVERR_URL/health" | jq '.'
# Expected: {"status":"ok","version":"3.x.x"}
curl -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com/protected-page",
"maxTimeout": 60000
}' | jq '.'
{
"status": "ok",
"message": "Challenge solved!",
"solution": {
"url": "https://example.com/protected-page",
"status": 200,
"headers": {},
"response": "<html>...</html>",
"cookies": [
{
"name": "cf_clearance",
"value": "...",
"domain": ".example.com"
}
],
"userAgent": "Mozilla/5.0 ..."
},
"startTimestamp": 1234567890,
"endTimestamp": 1234567895,
"version": "3.3.2"
}
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com/protected-page"
}' | jq -r '.solution.response'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com"
}' | jq -r '.solution.cookies[] | "\(.name)=\(.value)"'
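The extracted cookies can be replayed with plain curl. A sketch under the assumption that the target accepts the solved cf_clearance cookie as long as the User-Agent matches the browser FlareSolverr used (cookie_header is a hypothetical helper name, not part of FlareSolverr):

```shell
# Sketch: reuse solved cookies with plain curl. cookie_header is a
# hypothetical helper, not part of FlareSolverr's API.
cookie_header() {
  # Collapse the solution's cookie array into one "name=value; ..." string.
  jq -r '[.solution.cookies[] | "\(.name)=\(.value)"] | join("; ")'
}

RESPONSE=$(curl -s -X POST "$FLARESOLVERR_URL/v1" \
  -H "Content-Type: application/json" \
  -d '{"cmd": "request.get", "url": "https://example.com"}')

COOKIES=$(echo "$RESPONSE" | cookie_header)
UA=$(echo "$RESPONSE" | jq -r '.solution.userAgent')

# Cloudflare ties cf_clearance to the browser fingerprint, so follow-up
# requests should send the same User-Agent the headless browser used.
curl -s "https://example.com/other-page" -H "Cookie: $COOKIES" -A "$UA"
```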
Sessions let you reuse a single browser context (cookies, user agent) across multiple requests, so the Cloudflare challenge does not have to be solved again for every page.
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{"cmd": "sessions.create"}' | jq -r '.session'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com/page1",
"session": "SESSION_ID"
}' | jq -r '.solution.response'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{"cmd": "sessions.list"}' | jq '.sessions'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "sessions.destroy",
"session": "SESSION_ID"
}'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.post",
"url": "https://example.com/api/endpoint",
"postData": "key1=value1&key2=value2",
"maxTimeout": 60000
}' | jq '.'
For JSON POST data:
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.post",
"url": "https://example.com/api/endpoint",
"postData": "{\"key\":\"value\"}",
"headers": {
"Content-Type": "application/json"
}
}' | jq '.'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com",
"userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
}' | jq '.'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com",
"headers": {
"Accept-Language": "en-US,en;q=0.9",
"Referer": "https://google.com"
}
}' | jq '.'
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com",
"proxy": {
"url": "http://proxy.example.com:8080"
}
}' | jq '.'
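FlareSolverr also accepts optional username and password fields inside the proxy object (check your version's README). Building the body with jq -n avoids hand-escaping JSON inside shell quotes; the credentials below are placeholders:

```shell
# Build the request body with jq -n instead of hand-escaped quotes.
# proxyuser/proxypass are placeholder credentials for your proxy.
PAYLOAD=$(jq -n \
  --arg url "https://example.com" \
  --arg proxy "http://proxy.example.com:8080" \
  '{cmd: "request.get", url: $url,
    proxy: {url: $proxy, username: "proxyuser", password: "proxypass"}}')

curl -s -X POST "$FLARESOLVERR_URL/v1" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" | jq '.'
```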
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com/file.pdf",
"download": true
}' | jq -r '.solution.response' | base64 -d > file.pdf
"status": "error": Request failed (check message field)"status": "timeout": maxTimeout exceeded (increase timeout)"status": "captcha": Manual captcha required (rare, usually auto-solved)curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{"cmd": "request.get", "url": "https://example.com"}' | \
jq -r '.status'
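The status check above can be wrapped in a small retry loop, on the assumption that transient "timeout" or "error" statuses often succeed on a second attempt. fs_fetch is a made-up helper name, not a FlareSolverr command:

```shell
# Hypothetical retry helper (fs_fetch is not part of FlareSolverr).
fs_fetch() {
  local url=$1 attempts=${2:-3} i resp status
  for ((i = 1; i <= attempts; i++)); do
    resp=$(curl -s -X POST "$FLARESOLVERR_URL/v1" \
      -H "Content-Type: application/json" \
      -d "{\"cmd\": \"request.get\", \"url\": \"$url\", \"maxTimeout\": 60000}")
    status=$(echo "$resp" | jq -r '.status // "error"' 2>/dev/null)
    if [ "$status" = "ok" ]; then
      echo "$resp"
      return 0
    fi
    sleep $((i * 5))  # back off a little more on each retry
  done
  echo "giving up after $attempts attempts" >&2
  return 1
}
```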
# Step 1: Fetch page through FlareSolverr
RESPONSE=$(curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{
"cmd": "request.get",
"url": "https://example.com/protected-page"
}')
# Step 2: Check if successful
STATUS=$(echo "$RESPONSE" | jq -r '.status')
if [ "$STATUS" != "ok" ]; then
echo "Failed: $(echo "$RESPONSE" | jq -r '.message')"
exit 1
fi
# Step 3: Extract and parse HTML
echo "$RESPONSE" | jq -r '.solution.response'
# Create session
SESSION=$(curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d '{"cmd": "sessions.create"}' | jq -r '.session')
# Page 1
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d "{\"cmd\": \"request.get\", \"url\": \"https://example.com/page1\", \"session\": \"$SESSION\"}" | \
jq -r '.solution.response'
# Page 2 (reuses cookies from page 1)
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d "{\"cmd\": \"request.get\", \"url\": \"https://example.com/page2\", \"session\": \"$SESSION\"}" | \
jq -r '.solution.response'
# Cleanup
curl -s -X POST "$FLARESOLVERR_URL/v1" \
-H "Content-Type: application/json" \
-d "{\"cmd\": \"sessions.destroy\", \"session\": \"$SESSION\"}"
curl -s "$FLARESOLVERR_URL/health" | jq '.'
status field)Generated Mar 1, 2026
E-commerce businesses use FlareSolverr to scrape competitor websites protected by Cloudflare, extracting real-time pricing and product availability data. This enables dynamic pricing strategies and inventory management without being blocked by anti-bot measures.
Fintech companies leverage FlareSolverr to bypass Cloudflare on financial news sites and market data platforms, aggregating stock prices, economic indicators, and investment insights for automated analysis and reporting.
Travel agencies and booking platforms use FlareSolverr to access airline and hotel websites with Cloudflare protection, scraping fare and availability data to offer competitive deals and optimize pricing algorithms.
Researchers and academic institutions employ FlareSolverr to scrape scholarly articles, datasets, or public records from websites with Cloudflare, facilitating large-scale data analysis for studies without manual intervention.
Real estate firms use FlareSolverr to extract property listings, pricing trends, and neighborhood data from websites with Cloudflare protection, enabling market analysis and investment decision-making.
Offer subscription-based access to scraped data from Cloudflare-protected sites, providing cleaned and structured datasets to clients in industries like e-commerce or finance. Revenue comes from monthly or annual licensing fees based on data volume and updates.
Provide tailored web scraping services using FlareSolverr to bypass Cloudflare for specific client needs, such as monitoring competitors or aggregating market data. Revenue is generated through project-based contracts or hourly consulting rates.
Develop and sell an API that wraps FlareSolverr functionality, allowing developers to easily integrate Cloudflare bypass into their applications. Revenue streams include pay-per-request pricing, tiered API plans, or enterprise licenses.
💬 Integration Tip
Ensure the FLARESOLVERR_URL environment variable is correctly set and test with a simple curl command to verify connectivity before integrating into automated workflows.
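That tip can be captured as a small guard, a sketch assuming the /health endpoint shown earlier (check_flaresolverr is an illustrative name):

```shell
# Illustrative pre-flight guard: confirm FlareSolverr answers before
# wiring it into automation.
check_flaresolverr() {
  curl -sf --max-time 5 "${FLARESOLVERR_URL:-http://localhost:8191}/health" \
    | jq -e '.status == "ok"' > /dev/null
}

if check_flaresolverr; then
  echo "FlareSolverr reachable"
else
  echo "FlareSolverr not reachable; is the container running?" >&2
fi
```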