
# Skippr

Like Codex, but for data.

Go from raw source data to production-ready dbt models in a single command. Skippr handles extraction, loading, schema mapping, and dbt code generation -- so you can skip the weeks of pipeline plumbing and start querying clean data in minutes.

## Install

```bash
curl -fsSL https://install.skippr.io | sh
```

## Run your first pipeline

```bash
skippr user login              # log in or create a new account
skippr init my-project
skippr connect warehouse snowflake
skippr connect source mssql
skippr run
```

Five commands. That's extract, load, and a full bronze/silver/gold dbt project -- compiled, validated, and materialised in your warehouse.

## How it works

  1. Extract -- reads tables and files from your source systems.
  2. Load -- writes raw data into a bronze schema in your warehouse.
  3. Model -- generates, compiles, and materialises silver and gold dbt models using AI-assisted schema mapping.

See How It Works for the full pipeline breakdown.
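The bronze/silver/gold layering follows the standard medallion pattern. As a rough illustration in plain Python (not Skippr's actual generated dbt SQL; the table and column names here are hypothetical):

```python
from collections import defaultdict

# Bronze: raw rows landed as-is from the source (duplicates, strings everywhere).
bronze_orders = [
    {"order_id": "1", "amount": "19.99", "status": "paid"},
    {"order_id": "1", "amount": "19.99", "status": "paid"},   # duplicate load
    {"order_id": "2", "amount": "5.00",  "status": "refunded"},
]

# Silver: deduplicated and typed -- the kind of cleaning a generated
# staging model performs.
silver_orders = list({
    row["order_id"]: {
        "order_id": int(row["order_id"]),
        "amount": float(row["amount"]),
        "status": row["status"],
    }
    for row in bronze_orders
}.values())

# Gold: a business-level aggregate, analogous to a generated mart model.
revenue_by_status = defaultdict(float)
for row in silver_orders:
    revenue_by_status[row["status"]] += row["amount"]

print(dict(revenue_by_status))  # {'paid': 19.99, 'refunded': 5.0}
```

In Skippr's case the silver and gold steps are emitted as dbt models and materialised in your warehouse rather than computed in Python.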

## Connectors

### Sources

| Category | Connectors |
| --- | --- |
| Databases | MSSQL, MySQL, PostgreSQL, Redshift, MongoDB, DynamoDB, ClickHouse, MotherDuck |
| Object Stores | S3, SFTP, Delta Lake |
| Streaming | Kafka, SQS, Kinesis, AMQP (RabbitMQ), SNS, EventBridge, MQTT, WebSocket |
| HTTP | HTTP Client, HTTP Server |
| Other | Socket (TCP/UDP/Unix), StatsD, Local File, Stdin |

### Destinations

| Category | Connectors |
| --- | --- |
| Warehouses | Snowflake, BigQuery, PostgreSQL, Athena (S3 + Glue), Databricks, Synapse, Redshift, ClickHouse, MotherDuck |
| Cloud Storage | GCS, Azure Blob, SFTP |
| Messaging | AMQP (RabbitMQ) |
| Other | Local File, Stdout |

See the Source Connectors and Destination Connectors pages for per-provider setup instructions.

## Requirements

| Dependency | Why |
| --- | --- |
| Python 3.10+ | Required by dbt |
| dbt-core + warehouse adapter | Model compilation and materialisation |
| A Skippr account | Provides LLM keys, cloud storage, and usage metering |