# duckdb-cli-ai-skills

DuckDB CLI specialist for SQL analysis, data processing, and file conversion. Use for SQL queries, CSV/Parquet/JSON analysis, database queries, or data conversion. Triggers on "duckdb", "sql", "query", "data analysis", "parquet", "convert data".
Install via ClawdBot CLI:

```shell
clawdbot install CamelSprout/duckdb-cli-ai-skills
```

Helps with data analysis, SQL queries, and file conversion via the DuckDB CLI.
```shell
# CSV
duckdb -c "SELECT * FROM 'data.csv' LIMIT 10"

# Parquet
duckdb -c "SELECT * FROM 'data.parquet'"

# Multiple files with glob
duckdb -c "SELECT * FROM read_parquet('logs/*.parquet')"

# JSON
duckdb -c "SELECT * FROM read_json_auto('data.json')"

# Create/open database
duckdb my_database.duckdb

# Read-only mode
duckdb -readonly existing.duckdb
```
| Flag | Format |
|------|--------|
| -csv | Comma-separated |
| -json | JSON array |
| -table | ASCII table |
| -markdown | Markdown table |
| -html | HTML table |
| -line | One value per line |
| Argument | Description |
|----------|-------------|
| -c COMMAND | Run SQL and exit |
| -f FILENAME | Run script from file |
| -init FILE | Use alternative to ~/.duckdbrc |
| -readonly | Open in read-only mode |
| -echo | Show commands before execution |
| -bail | Stop on first error |
| -header / -noheader | Show/hide column headers |
| -nullvalue TEXT | Text for NULL values |
| -separator SEP | Column separator |
```shell
# CSV to Parquet
duckdb -c "COPY (SELECT * FROM 'input.csv') TO 'output.parquet' (FORMAT PARQUET)"

# Parquet to CSV
duckdb -c "COPY (SELECT * FROM 'input.parquet') TO 'output.csv' (HEADER, DELIMITER ',')"

# JSON to Parquet
duckdb -c "COPY (SELECT * FROM read_json_auto('input.json')) TO 'output.parquet' (FORMAT PARQUET)"

# Filter while converting
duckdb -c "COPY (SELECT * FROM 'data.csv' WHERE amount > 1000) TO 'filtered.parquet' (FORMAT PARQUET)"
```
| Command | Description |
|---------|-------------|
| .tables [pattern] | Show tables (with LIKE pattern) |
| .schema [table] | Show CREATE statements |
| .databases | Show attached databases |
| Command | Description |
|---------|-------------|
| .mode FORMAT | Change output format |
| .output file | Send output to file |
| .once file | Next output to file |
| .headers on/off | Show/hide column headers |
| .separator COL ROW | Set separators |
| Command | Description |
|---------|-------------|
| .timer on/off | Show execution time |
| .echo on/off | Show commands before execution |
| .bail on/off | Stop on error |
| .read file.sql | Run SQL from file |
| Command | Description |
|---------|-------------|
| .edit or \e | Open query in external editor |
| .help [pattern] | Show help |
| Shortcut | Action |
|----------|--------|
| Home / End | Start/end of line |
| Ctrl+Left/Right | Jump word |
| Ctrl+A / Ctrl+E | Start/end of buffer |
| Shortcut | Action |
|----------|--------|
| Ctrl+P / Ctrl+N | Previous/next command |
| Ctrl+R | Search history |
| Alt+< / Alt+> | First/last in history |
| Shortcut | Action |
|----------|--------|
| Ctrl+W | Delete word backward |
| Alt+D | Delete word forward |
| Alt+U / Alt+L | Uppercase/lowercase word |
| Ctrl+K | Delete to end of line |
| Shortcut | Action |
|----------|--------|
| Tab | Autocomplete / next suggestion |
| Shift+Tab | Previous suggestion |
| Esc+Esc | Undo autocomplete |
Context-aware autocomplete is activated with Tab:

```sql
CREATE TABLE sales AS SELECT * FROM 'sales_2024.csv';
INSERT INTO sales SELECT * FROM 'sales_2025.csv';
COPY sales TO 'backup.parquet' (FORMAT PARQUET);
```
```sql
-- Quick statistics
SELECT
  COUNT(*) AS count,
  AVG(amount) AS average,
  SUM(amount) AS total
FROM 'transactions.csv';

-- Group and sort
SELECT
  category,
  COUNT(*) AS count,
  SUM(amount) AS total
FROM 'data.csv'
GROUP BY category
ORDER BY total DESC;

-- Join across file formats
SELECT a.*, b.name
FROM 'orders.csv' a
JOIN 'customers.parquet' b ON a.customer_id = b.id;

-- Inspect the inferred schema
DESCRIBE SELECT * FROM 'data.csv';
```
```shell
# Read from stdin
cat data.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"

# Pipe to another command
duckdb -csv -c "SELECT * FROM 'data.parquet'" | head -20

# Write to stdout
duckdb -c "COPY (SELECT * FROM 'data.csv') TO '/dev/stdout' (FORMAT CSV)"
```
Save common settings in ~/.duckdbrc:

```
.timer on
.mode duckbox
.maxrows 50
.highlight on
.keyword green
.constant yellow
.comment brightblack
.error red
```
Open complex queries in your editor:

```
.edit
```

The editor is chosen from: DUCKDB_EDITOR → EDITOR → VISUAL → vi
Safe mode restricts file access. When enabled, dot commands that touch the filesystem are disabled: .read, .output, .import, .sh, etc.

Other notes:

- Use LIMIT on large files for a quick preview.
- read_csv_auto and read_json_auto guess column types.
- memory_limit values on some Ubuntu versions.

Generated Mar 1, 2026
A data analyst needs to quickly explore and summarize data from various file formats like CSV, Parquet, or JSON without setting up a full database. They use DuckDB CLI to run SQL queries directly on files, generate statistics, and export results in formats like Markdown for reports. This is ideal for ad-hoc analysis or prototyping in industries like marketing or finance.
A data engineer converts data between formats, such as CSV to Parquet for efficient storage, or JSON to CSV for compatibility with other tools. They use DuckDB CLI to filter and transform data during conversion, streamlining ETL processes in data pipelines. This supports industries like e-commerce or logistics where data integration is critical.
A researcher processes large datasets from experiments or surveys, using DuckDB CLI to query and aggregate data without complex database setup. They export results in LaTeX or HTML formats for papers or presentations, enabling efficient data analysis in fields like social sciences or biology.
A developer tests data-related code by running SQL queries on sample files to validate outputs or debug issues. They use DuckDB CLI in scripts or command-line workflows to quickly check data integrity and format conversions, supporting agile development in tech or SaaS industries.
A business user generates periodic reports by querying data files, grouping results, and exporting to CSV or JSON for further analysis in tools like spreadsheets. DuckDB CLI allows them to automate these tasks with simple commands, aiding decision-making in retail or consulting sectors.
Offer services to small businesses for data analysis and conversion using DuckDB CLI, charging per project or hourly. This model leverages quick setup and file-based queries to provide cost-effective solutions without infrastructure overhead. Revenue comes from client projects in sectors like local retail or non-profits.
Integrate DuckDB CLI into a SaaS platform for data processing features, such as file conversion or SQL query execution, as part of a subscription service. This adds value to existing products in data analytics or cloud storage, generating revenue through tiered subscriptions or usage-based pricing.
Provide training courses or workshops on using DuckDB CLI for data analysis, targeting professionals in data science or IT. Revenue is generated from course fees, certifications, or corporate training packages, helping users improve efficiency in data handling tasks.
💬 Integration Tip
Integrate DuckDB CLI into shell scripts or automation pipelines for batch processing, using flags like -csv for output compatibility with other tools.