ipfs-server
Full IPFS node operations — install, configure, pin content, publish IPNS, manage peers, and run gateway services
Install via ClawdBot CLI:
clawdbot install apexfork/ipfs-server
You are an IPFS server administrator. You help users run IPFS nodes, manage content, publish data, and operate gateway services. This skill handles full node operations, including content publishing and network configuration.
For read-only IPFS queries and content exploration, use the ipfs-client skill.
# Homebrew (recommended)
brew install ipfs
# Or download binary from dist.ipfs.tech
curl -O https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_darwin-amd64.tar.gz
tar -xzf kubo_v0.24.0_darwin-amd64.tar.gz
sudo ./kubo/install.sh
First-time setup:
# Initialize repository
ipfs init
# Show peer ID
ipfs id
# Configure for low-resource usage (optional)
ipfs config profile apply lowpower
Basic configuration:
# Allow gateway on all interfaces (for local network access)
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
# Configure API (keep localhost for security)
ipfs config Addresses.API /ip4/127.0.0.1/tcp/5001
# Set storage limit
ipfs config Datastore.StorageMax 10GB
Start IPFS daemon:
ipfs daemon > ipfs.log 2>&1 &
Check daemon status:
ipfs swarm peers | wc -l # Connected peer count
ipfs repo stat # Repository statistics
Stop daemon:
ipfs shutdown # Graceful shutdown via the daemon API
# If the API is unreachable:
pkill -f 'ipfs daemon'
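The start/stop steps above can be wrapped in a small script that waits for the daemon's API to come up before continuing. This is a minimal sketch; the `wait_for` helper and the 30-attempt limit are arbitrary choices, not part of the skill.

```shell
#!/bin/sh
# Sketch: start the daemon in the background and wait until the API responds.
# wait_for retries a command up to N times, pausing one second between tries.
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

if command -v ipfs >/dev/null 2>&1; then
  ipfs daemon > ipfs.log 2>&1 &
  if wait_for 30 ipfs id; then
    echo "daemon ready"
  else
    echo "daemon failed to start; see ipfs.log" >&2
  fi
fi
```

Polling `ipfs id` is one way to detect readiness; it only succeeds once the daemon's API socket is accepting requests.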
Add files and directories:
# Add single file
ipfs add myfile.txt
# Returns: added QmHash myfile.txt
# Add directory recursively
ipfs add -r ./my-directory/
# Add and only show final hash
ipfs add -Q myfile.txt
# Wrap in a directory so the original filename is preserved
ipfs add --wrap-with-directory myfile.txt
Add from stdin:
echo "Hello IPFS" | ipfs add
cat largefile.json | ipfs add --pin=false # Don't pin immediately
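For scripting, it helps to capture the CID from `ipfs add` output. A minimal sketch — `parse_cid` is a hypothetical helper name, and the canned `added ...` line stands in for real daemon output:

```shell
#!/bin/sh
# Sketch: capture a CID for use in later commands.
# Easiest path: -Q prints only the final CID.
#   CID=$(ipfs add -Q myfile.txt)
# Without -Q, each output line looks like "added <cid> <name>"; the CID is field 2.
parse_cid() { awk '$1 == "added" { print $2; exit }'; }

# Canned example line (a real run would pipe `ipfs add` output instead):
echo "added QmHash myfile.txt" | parse_cid
# → QmHash
```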
Pin content (prevent garbage collection):
ipfs pin add QmHash
ipfs pin add -r QmHash # Recursively pin directory
# List pinned content
ipfs pin ls --type=recursive
ipfs pin ls --type=direct
# Unpin content
ipfs pin rm QmHash
Remote pinning services:
# Configure remote pinning (Pinata, Web3.Storage, etc.)
ipfs pin remote service add pinata https://api.pinata.cloud/psa YOUR_JWT
# Pin to remote service
ipfs pin remote add --service=pinata --name="my-content" QmHash
# List remote pins
ipfs pin remote ls --service=pinata
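A sketch of an idempotent remote-pin step: check whether the CID is already pinned on the service before adding it again. `pinata` is the service name configured above; the CID and pin name are placeholders.

```shell
#!/bin/sh
# Sketch: only pin remotely if the CID is not already pinned.
CID="QmHash"
if command -v ipfs >/dev/null 2>&1; then
  if ipfs pin remote ls --service=pinata --cid="$CID" | grep -q "$CID"; then
    echo "already pinned remotely"
  else
    ipfs pin remote add --service=pinata --name="my-content" "$CID"
  fi
fi
```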
Clean up unpinned content:
# Run garbage collection (removes blocks that are not pinned)
ipfs repo gc
# Check repo size before/after
ipfs repo stat
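The before/after check can be automated by parsing the `RepoSize:` line from `ipfs repo stat` (a raw byte count in kubo's default output). A sketch; `repo_size` is a hypothetical helper:

```shell
#!/bin/sh
# Sketch: report how many bytes garbage collection reclaimed.
repo_size() { ipfs repo stat | awk '$1 == "RepoSize:" { print $2 }'; }

if command -v ipfs >/dev/null 2>&1; then
  before=$(repo_size)
  ipfs repo gc > /dev/null
  after=$(repo_size)
  echo "reclaimed $((before - after)) bytes"
fi
```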
Publish content to IPNS:
# Publish to default key
ipfs name publish QmHash
# Create and use custom key
ipfs key gen --type=ed25519 my-site
ipfs name publish --key=my-site QmHash
# List your IPNS keys
ipfs key list -l
IPNS with custom domains:
# Create DNS TXT record: _dnslink.example.com = "dnslink=/ipns/k51qzi5uqu5d..."
# Then resolve via:
ipfs name resolve /ipns/example.com
Update IPNS record:
# Publish new version
ipfs add -r ./updated-site/
ipfs name publish --key=my-site QmNewHash
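The two update steps can be chained so the new CID flows straight into the publish command. A sketch assuming a key named `my-site` and a `./site` directory — both placeholders:

```shell
#!/bin/sh
# Sketch: republish a site under the same IPNS name after updating files.
if command -v ipfs >/dev/null 2>&1; then
  NEW_CID=$(ipfs add -Q -r ./site)
  ipfs name publish --key=my-site "/ipfs/$NEW_CID"
  echo "published /ipfs/$NEW_CID"
fi
```

Because the IPNS name stays constant, consumers resolving `/ipns/<key-id>` automatically pick up the new content.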
Peer operations:
# List connected peers
ipfs swarm peers
# Connect to specific peer
ipfs swarm connect /ip4/104.131.131.82/tcp/4001/p2p/QmPeerID
# Disconnect peer
ipfs swarm disconnect /ip4/104.131.131.82/tcp/4001/p2p/QmPeerID
Address configuration:
# Show current addresses
ipfs config Addresses
# Add custom swarm address
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001"]'
Manage bootstrap peers:
# List bootstrap nodes
ipfs bootstrap list
# Add custom bootstrap node
ipfs bootstrap add /ip4/104.131.131.82/tcp/4001/p2p/QmBootstrapPeer
# Remove all bootstrap nodes (private network)
ipfs bootstrap rm --all
Configure gateway:
# Basic gateway configuration
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8080
# Public gateway (be careful!)
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
# Per-hostname gateway behavior (paths served, subdomain mode)
ipfs config --json Gateway.PublicGateways '{
"localhost": {
"Paths": ["/ipfs", "/ipns"],
"UseSubdomains": false
}
}'
Access patterns:
# Via path
http://localhost:8080/ipfs/QmHash
# Via subdomain (if configured; CIDv0 hashes are redirected to case-insensitive CIDv1)
http://QmHash.ipfs.localhost:8080
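A quick reachability check against the path gateway can be scripted with `curl`. A sketch; the gateway address and CID are placeholders for your own values:

```shell
#!/bin/sh
# Sketch: verify that a CID is reachable through the local gateway.
GATEWAY="http://127.0.0.1:8080"
CID="QmHash"
if command -v curl >/dev/null 2>&1; then
  # -w prints only the HTTP status; --max-time bounds the wait.
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$GATEWAY/ipfs/$CID" || true)
  echo "HTTP $status"
fi
```

A `200` means the gateway served the content; `000` or a timeout usually means the daemon is down or the CID has no reachable providers.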
Nginx configuration example:
server {
listen 80;
server_name gateway.example.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
High-performance settings:
# Apply server profile
ipfs config profile apply server
# Increase connection limits
ipfs config --json Swarm.ConnMgr.HighWater 2000
ipfs config --json Swarm.ConnMgr.LowWater 1000
# Adjust bitswap settings
ipfs config --json Internal.Bitswap.MaxOutstandingBytesPerPeer 1048576
Create private IPFS network:
# Generate swarm key
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' "$(LC_ALL=C tr -dc 'a-f0-9' < /dev/urandom | head -c 64)" > ~/.ipfs/swarm.key
# ⚠️ SECURITY: This swarm key is your network's access control credential.
# Anyone with this file can join your private network. Protect it accordingly.
# Remove all bootstrap nodes
ipfs bootstrap rm --all
# Start daemon (will only connect to nodes with same key)
ipfs daemon
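Before distributing the key to other nodes, it is worth sanity-checking its format: the body must be exactly 64 hex characters (32 bytes) under the two header lines. A sketch that writes to the current directory rather than `~/.ipfs`:

```shell
#!/bin/sh
# Sketch: generate a swarm key and sanity-check its format before use.
# LC_ALL=C keeps `tr` byte-safe when reading /dev/urandom.
KEY_HEX=$(LC_ALL=C tr -dc 'a-f0-9' < /dev/urandom | head -c 64)
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' "$KEY_HEX" > swarm.key

# The key body must be exactly 64 hex characters (32 bytes).
if [ "${#KEY_HEX}" -eq 64 ]; then
  echo "swarm.key OK"
else
  echo "bad key length" >&2
fi
```

Every node in the private network needs an identical copy of this file at `~/.ipfs/swarm.key`; nodes with mismatched keys refuse to connect.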
Configure datastore:
# Set storage limits
ipfs config Datastore.StorageMax 100GB
ipfs config Datastore.GCPeriod "1h"
# Flatfs block storage (kubo's default); note that changing Datastore.Spec on an
# existing repo must match the repo's datastore_spec file, or the daemon refuses to start
ipfs config --json Datastore.Spec '{
"mounts": [
{
"child": {"type": "flatfs", "path": "blocks", "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2"},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "mount"
}
],
"type": "mount"
}'
Basic health monitoring:
# Check daemon status
ipfs stats bw # Bandwidth usage
ipfs stats repo # Repository stats
ipfs diag sys # System information
ipfs log level all debug # Enable debug logging for all subsystems
Connection monitoring:
# Monitor peer connections
while true; do
echo "$(date): $(ipfs swarm peers | wc -l) peers"
sleep 60
done
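The loop above can be extended to warn when the peer count drops below a floor. A sketch; `MIN_PEERS` and the `below_threshold` helper are arbitrary choices for illustration:

```shell
#!/bin/sh
# Sketch: warn when the connected peer count falls below a threshold.
MIN_PEERS=10

below_threshold() { [ "$1" -lt "$2" ]; }

if command -v ipfs >/dev/null 2>&1; then
  count=$(ipfs swarm peers 2>/dev/null | wc -l)
  if below_threshold "$count" "$MIN_PEERS"; then
    echo "$(date): LOW PEER COUNT: $count" >&2
  fi
fi
```

Run it from cron or wrap it in the `while` loop above; writing warnings to stderr makes them easy to route to an alerting channel.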
Configure logging:
# Set log levels
ipfs log level bitswap info
ipfs log level dht warn
# Tail logs
ipfs log tail
Security considerations:
API access: keep the API bound to localhost (127.0.0.1:5001) unless on a trusted network
Gateway security: be careful exposing the gateway beyond localhost; put it behind a reverse proxy
Content policy: anything you add and pin is retrievable by other peers
Troubleshooting:
Connection issues: check ipfs swarm peers and your firewall/NAT settings
Performance problems: run ipfs repo gc and check ipfs stats bw
Content not accessible: verify pins with ipfs pin ls; find providers with ipfs dht findprovs QmHash
Related skills: /ipfs-client (read-only queries), /eth-readonly (blockchain integration)
Generated Mar 1, 2026
Media companies can use this skill to publish articles, videos, and podcasts directly to IPFS, ensuring censorship-resistant distribution. They can manage pinning to prevent content loss and use IPNS for updating content without changing URLs, enabling resilient digital archives.
Research institutions can deploy IPFS nodes to archive large datasets, scientific papers, and experimental results in a decentralized manner. The skill allows configuration of storage limits, garbage collection, and remote pinning services for long-term preservation and easy sharing among collaborators.
Enterprises can create private IPFS networks using this skill to securely share internal documents and data among teams. By configuring swarm keys and removing bootstrap nodes, they ensure data remains within the organization while leveraging IPFS for efficient peer-to-peer file transfer.
Web3 platforms can utilize this skill to host NFT metadata and assets on IPFS, providing immutable and decentralized storage. They can manage pinning to guarantee availability, publish updates via IPNS, and set up gateways for public access, enhancing trust in digital collectibles.
Community networks or libraries can run local IPFS gateways to provide offline or low-bandwidth access to educational content. The skill enables configuration of gateway addresses, reverse proxies, and performance tuning to serve content efficiently within local networks.
Offer managed IPFS node hosting for businesses that need reliable decentralized storage without technical overhead. Provide installation, configuration, and maintenance services, with revenue from subscription fees based on storage usage and uptime guarantees.
Develop a service that helps clients pin and archive critical data on IPFS using remote pinning integrations. Charge for data storage, retrieval speeds, and additional features like automated backups and IPNS management, targeting industries with high data integrity needs.
Provide consulting and training services to organizations looking to integrate IPFS into their infrastructure. Offer workshops on node setup, private networks, and gateway configuration, generating revenue through project-based contracts and ongoing support packages.
💬 Integration Tip
Integrate this skill with existing DevOps tools for automated deployment and monitoring, and consider using remote pinning services like Pinata for enhanced reliability in production environments.