# Skippr

Like Codex, but for data.

Go from raw source data to production-ready dbt models in a single command. Skippr handles extraction, loading, schema mapping, and dbt code generation -- so you can skip the weeks of pipeline plumbing and start querying clean data in minutes.

## Install

```shell
curl -fsSL https://install.skippr.io | sh
```

## Run your first pipeline

```shell
skippr user login                    # log in or create a new account
skippr init my-project               # scaffold a new project
skippr connect warehouse snowflake   # set Snowflake as the target warehouse
skippr connect source mssql          # register an MSSQL source
skippr run                           # extract, load, and build the dbt models
```

Five commands. That's extract, load, and a full bronze/silver/gold dbt project -- compiled, validated, and materialised in your warehouse.

Get started{ .md-button .md-button--primary }

## Why Skippr

- **Minutes, not months** -- a working data pipeline from first install to materialised dbt models, without writing a single line of SQL or YAML by hand.
- **AI-assisted schema mapping** -- source schemas are automatically discovered and mapped to well-typed, cleanly named dbt models. You review the output, not the boilerplate.
- **Your data stays on your machine** -- source data flows directly from the machine running skippr to your warehouse. Row-level data is never sent to Skippr. By default, only schema metadata and small data samples are shared with the AI (the only third party) for mapping. Sampling is optional and can be disabled.
- **Incremental from day one** -- re-runs sync only new and changed rows. No full reloads, no manual duplicate handling. It just works.
- **You own the output** -- the generated dbt project is standard dbt. Add tests, snapshots, custom gold models, or plug it into your existing CI/CD.
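Because the output is a standard dbt project, the usual extension points apply. For example, column tests can be layered on with an ordinary `schema.yml` entry; the `customers` model and column names below are hypothetical, for illustration only:

```yaml
# models/silver/schema.yml -- model and column names are illustrative
version: 2
models:
  - name: customers
    columns:
      - name: customer_id
        tests:
          - unique
          - not_null
```

These are stock dbt tests, so `dbt test` runs them with no Skippr involvement.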

## How it works

  1. Extract -- reads tables and files from your source systems.
  2. Load -- writes raw data into a bronze schema in your warehouse.
  3. Model -- generates, compiles, and materialises silver and gold dbt models using AI-assisted schema mapping.
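To make the modelling step concrete: a silver model typically recasts and renames raw bronze columns into clean, well-typed ones. A hypothetical sketch of what a generated model might look like (table and column names are invented for illustration, not actual Skippr output):

```sql
-- models/silver/customers.sql -- illustrative only
select
    cast(CustID as integer)   as customer_id,
    trim(CustName)            as customer_name,
    cast(CreatedDt as date)   as created_date
from {{ source('bronze', 'raw_customers') }}
```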

See How It Works for the full pipeline breakdown.

## Connectors

| Direction  | Supported                     |
| ---------- | ----------------------------- |
| Sources    | MSSQL, S3                     |
| Warehouses | Snowflake, BigQuery, Postgres |

See the Connect Guides for setup instructions per provider.

## Requirements

| Dependency                   | Why                                                  |
| ---------------------------- | ---------------------------------------------------- |
| Python 3.10+                 | Required by dbt                                      |
| dbt-core + warehouse adapter | Model compilation and materialisation                |
| A Skippr account             | Provides LLM keys, cloud storage, and usage metering |