curl-http
Essential curl commands for HTTP requests, API testing, and file transfers.
Install via ClawdBot CLI:
clawdbot install Arnarsson/curl-http
Requires: curl, the command-line tool for making HTTP requests and transferring data.
# Simple GET request
curl https://api.example.com
# Save output to file
curl https://example.com -o output.html
curl https://example.com/file.zip -O # Use remote filename
# Follow redirects
curl -L https://example.com
# Show response headers
curl -i https://example.com
# Show only headers
curl -I https://example.com
# Verbose output (debugging)
curl -v https://example.com
# POST with data
curl -X POST https://api.example.com/users \
-d "name=John&email=john@example.com"
# POST JSON data
curl -X POST https://api.example.com/users \
-H "Content-Type: application/json" \
-d '{"name":"John","email":"john@example.com"}'
# POST from file
curl -X POST https://api.example.com/users \
-H "Content-Type: application/json" \
-d @data.json
# Form upload
curl -X POST https://api.example.com/upload \
-F "file=@document.pdf" \
-F "description=My document"
# PUT request
curl -X PUT https://api.example.com/users/1 \
-H "Content-Type: application/json" \
-d '{"name":"Jane"}'
# DELETE request
curl -X DELETE https://api.example.com/users/1
# PATCH request
curl -X PATCH https://api.example.com/users/1 \
-H "Content-Type: application/json" \
-d '{"email":"newemail@example.com"}'
# Add custom header
curl -H "User-Agent: MyApp/1.0" https://example.com
# Multiple headers
curl -H "Accept: application/json" \
-H "Authorization: Bearer token123" \
https://api.example.com
# Basic auth
curl -u username:password https://api.example.com
# Bearer token
curl -H "Authorization: Bearer YOUR_TOKEN" \
https://api.example.com
# API key in header
curl -H "X-API-Key: your_api_key" \
https://api.example.com
# API key in URL
curl "https://api.example.com?api_key=your_key"
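When a script calls the same authenticated API repeatedly, the auth header can be factored into a small wrapper so the token lives in one place. A minimal sketch; `API_TOKEN`, `api_get`, and the base URL are hypothetical placeholders:

```shell
# Hypothetical helper: authenticated GET against a fixed base URL.
# API_TOKEN and api.example.com are placeholders.
api_get() {
  curl -sS -H "Authorization: Bearer ${API_TOKEN:?API_TOKEN is not set}" \
    "https://api.example.com$1"
}
# Usage: API_TOKEN=... api_get /users/1 | jq '.'
```

The `${API_TOKEN:?...}` expansion makes the wrapper fail loudly if the token is missing instead of sending an empty header.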
# Connection timeout (seconds)
curl --connect-timeout 10 https://example.com
# Max time for entire operation
curl --max-time 30 https://example.com
# Retry on failure
curl --retry 3 https://example.com
# Retry delay
curl --retry 3 --retry-delay 5 https://example.com
# Send cookies
curl -b "session=abc123" https://example.com
# Save cookies to file
curl -c cookies.txt https://example.com
# Load cookies from file
curl -b cookies.txt https://example.com
# Both save and load
curl -b cookies.txt -c cookies.txt https://example.com
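The save-and-load pattern above is what makes a login session persist across a sequence of scripted requests. A minimal sketch; the jar path, `session_curl` helper name, and endpoints are placeholders:

```shell
# Share one cookie jar across calls so the session persists between requests
jar=session-cookies.txt
session_curl() {
  curl -sS -b "$jar" -c "$jar" "$@"
}
# Usage (hypothetical endpoints):
#   session_curl -d "user=test&pass=secret" https://example.com/login
#   session_curl https://example.com/dashboard
```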
# Use HTTP proxy
curl -x http://proxy.example.com:8080 https://api.example.com
# With proxy authentication
curl -x http://proxy:8080 -U user:pass https://api.example.com
# SOCKS proxy
curl --socks5 127.0.0.1:1080 https://api.example.com
# Ignore SSL certificate errors (not recommended for production)
curl -k https://self-signed.example.com
# Use specific SSL version
curl --tlsv1.2 https://example.com
# Use client certificate
curl --cert client.crt --key client.key https://example.com
# Show SSL handshake details
curl -v https://example.com 2>&1 | grep -i ssl
# Silent mode (no progress bar)
curl -s https://api.example.com
# Show only HTTP status code
curl -s -o /dev/null -w "%{http_code}" https://example.com
# Custom output format
curl -w "\nTime: %{time_total}s\nStatus: %{http_code}\n" \
https://example.com
# Pretty print JSON (with jq)
curl -s https://api.example.com | jq '.'
# Download specific byte range
curl -r 0-1000 https://example.com/large-file.zip
# Resume download
curl -C - -O https://example.com/large-file.zip
# Download file
curl -O https://example.com/file.zip
# Download with custom name
curl -o myfile.zip https://example.com/file.zip
# Download multiple files
curl -O https://example.com/file1.zip \
-O https://example.com/file2.zip
# FTP upload
curl -T file.txt ftp://ftp.example.com/upload/
# HTTP PUT upload
curl -T file.txt https://example.com/upload
# Form file upload
curl -F "file=@document.pdf" https://example.com/upload
# Test REST API
curl -X GET https://api.example.com/users
curl -X GET https://api.example.com/users/1
curl -X POST https://api.example.com/users -d @user.json
curl -X PUT https://api.example.com/users/1 -d @updated.json
curl -X DELETE https://api.example.com/users/1
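The sequence above can be wrapped into a small smoke-test script that reports per-endpoint pass/fail; -f makes curl exit non-zero on HTTP errors so the result is scriptable. A sketch, with a hypothetical base URL and payload file:

```shell
# Hypothetical CRUD smoke test; api.example.com and user.json are placeholders
api=https://api.example.com
smoke() {
  method=$1; path=$2; shift 2
  if curl -sS -f -o /dev/null --connect-timeout 5 -X "$method" "$api$path" "$@"; then
    echo "PASS $method $path"
  else
    echo "FAIL $method $path"
  fi
}
smoke GET /users
smoke POST /users -H "Content-Type: application/json" -d @user.json
smoke DELETE /users/1
```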
# Test with verbose output
curl -v -X POST https://api.example.com/login \
-H "Content-Type: application/json" \
-d '{"username":"test","password":"pass"}'
# Measure request time
curl -w "Total time: %{time_total}s\n" https://example.com
# Detailed timing
curl -w "\nDNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nTransfer: %{time_starttransfer}s\nTotal: %{time_total}s\n" \
-o /dev/null -s https://example.com
# Show request and response headers
curl -v https://api.example.com
# Trace request
curl --trace-ascii trace.txt https://api.example.com
# Include response headers in output
curl -i https://api.example.com
Quick JSON API test:
curl -s https://api.github.com/users/octocat | jq '{name, bio, followers}'
Download with progress bar:
curl -# -O https://example.com/large-file.zip
POST JSON and extract field:
curl -s -X POST https://api.example.com/login \
-H "Content-Type: application/json" \
-d '{"user":"test","pass":"secret"}' | jq -r '.token'
Check if URL is accessible:
if curl -s --head --fail https://example.com > /dev/null; then
echo "Site is up"
else
echo "Site is down"
fi
Parallel downloads:
for i in {1..10}; do
curl -O https://example.com/file$i.jpg &
done
wait
Common flags:
-X: HTTP method (GET, POST, PUT, DELETE, etc.)
-d: Data to send (POST/PUT)
-H: Custom header
-o: Output file
-O: Save with remote filename
-L: Follow redirects
-i: Include headers in output
-I: Headers only
-v: Verbose output
-s: Silent mode
-S: Show errors even in silent mode
-f: Fail silently on HTTP errors
-k: Insecure (ignore SSL)
-u: Basic authentication
-F: Multipart form data
-b: Send cookies
-c: Save cookies
-w: Custom output format
Tips:
Use -s in scripts to suppress the progress bar
Use -sS for silent mode that still shows errors
Use -L for redirects (e.g., shortened URLs)
Use -v for debugging
Use jq to process JSON responses
Use --config for complex reusable requests
Official docs: https://curl.se/docs/
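The --config tip can look like this in practice: curl reads one long-form option per line from the file, which keeps repeated header sets reusable across requests. A minimal sketch; the file name and header values are placeholders:

```shell
# Write a reusable config file; curl reads one long-form option per line
cat > api.curlrc <<'EOF'
silent
show-error
connect-timeout = 10
header = "Accept: application/json"
EOF
# Then apply it to any request:
#   curl --config api.curlrc https://api.example.com
```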
Manual: man curl
HTTP methods: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods
Generated Mar 1, 2026
Developers and QA engineers use curl to test REST APIs by sending GET, POST, PUT, and DELETE requests with custom headers and JSON payloads. This helps verify endpoints, debug issues with verbose output, and ensure API reliability before deployment in web or mobile applications.
IT administrators and data engineers utilize curl for automated file downloads, uploads via FTP or HTTP, and handling large datasets with resume capabilities. This supports data pipelines, backup systems, and content distribution in media, finance, and logistics industries.
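The resume capability mentioned here can be looped so an interrupted transfer keeps going until it finishes; -C - tells curl to continue from where the previous attempt stopped. A sketch, with a hypothetical URL and helper name:

```shell
# Retry a resumable download until it completes (URL is a placeholder)
resume_download() {
  until curl -f -C - -O --retry 3 --connect-timeout 10 "$1"; do
    echo "retrying..." >&2
    sleep 5
  done
}
# Usage: resume_download https://example.com/large-file.zip
```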
Analysts and DevOps teams employ curl to fetch web content, extract specific data with range requests, and monitor website availability by checking HTTP status codes. This aids in competitive analysis, uptime tracking, and performance testing for e-commerce and news platforms.
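The status-code checks described here can be scripted as a small availability probe. A sketch; the `probe` helper and URL are placeholders:

```shell
# Print an HTTP status code per site; 000 means the request never completed
probe() {
  code=$(curl -s -o /dev/null --max-time 10 -w "%{http_code}" "$1") || code=000
  echo "$1 $code"
}
probe https://example.com
```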
Security professionals leverage curl to test authentication mechanisms like bearer tokens, API keys, and SSL/TLS configurations, including client certificates. This ensures secure API access, identifies vulnerabilities, and complies with standards in banking and healthcare sectors.
Network engineers use curl to configure and test proxy settings, including HTTP and SOCKS proxies with authentication, for routing requests in corporate environments. This facilitates access control, traffic management, and troubleshooting in telecommunications and enterprise IT.
Companies offer API platforms where curl is used by customers to integrate and test endpoints, driving subscription fees based on usage tiers. This model supports scalable revenue from developers and businesses in tech and SaaS industries.
Firms provide data aggregation services using curl to collect web data, which is then analyzed and sold as reports or dashboards. Revenue comes from licensing insights to clients in marketing, finance, and research sectors.
Businesses develop tools that incorporate curl for monitoring, automation, and file transfers, sold as software or cloud services. This generates revenue through one-time purchases or recurring fees from IT and operations teams.
💬 Integration Tip
Integrate curl into scripts for automation, using flags like -s for silent mode and -w for custom output to streamline workflows and reduce manual effort in testing and data handling.
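Putting those flags together, a script-friendly fetch typically combines -sS (quiet but show errors), -f (fail on HTTP errors), and retry/timeout limits. A sketch; the `fetch` helper and health endpoint are placeholders:

```shell
# Script-friendly fetch: fail on HTTP errors, retry transient failures
fetch() {
  curl -sS -f --retry 3 --retry-delay 2 --connect-timeout 5 --max-time 30 "$1"
}
# Hypothetical health endpoint
if body=$(fetch https://api.example.com/health); then
  echo "OK: $body"
else
  echo "request failed" >&2
fi
```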