Use dbt to publish transformation outputs from your private Dune namespace into a configured Datashare target such as Snowflake or BigQuery. This is useful when you want to keep the transformation logic on Dune but consume the resulting tables inside your own warehouse.
Datashare syncs are billed based on bytes transferred and byte-months of storage for the synced table.
How It Works
- dbt builds a `table` or `incremental` model in your Dune namespace.
- A dbt post-hook runs `ALTER TABLE ... EXECUTE datashare(...)` on that table.
- Dune registers or updates the datashare sync and sends the data to your configured target.
- You monitor the sync in `dune.datashare.table_syncs` and `dune.datashare.table_sync_runs`.
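The statement issued by the post-hook has the following shape. The schema and model names here are hypothetical, and the arguments to `datashare(...)` are generated by the macro from your model config, so they are left elided:

```sql
-- Illustrative only: my_schema.my_model is a placeholder, and the
-- datashare(...) arguments are filled in by the dbt macro.
ALTER TABLE my_schema.my_model EXECUTE datashare(...);
```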
Prerequisites
- Enterprise account with Data Transformations enabled
- Datashare configured for your team by Dune
- Dune API key with write access
- A dbt project using the Dune Trino connector
Recommended Starting Point
The fastest way to get started is the public template repo: dune-dbt-template. It includes the datashare macro, a prod-only post-hook, and an opt-in example model.
Add The dbt Post-Hook
Add the datashare post-hook to your `dbt_project.yml`:
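A sketch of what this can look like, assuming the `datashare` macro from the template repo; the project name is hypothetical and the exact macro signature may differ, so treat this as a shape rather than a drop-in config:

```yaml
# dbt_project.yml (fragment; project name is illustrative)
models:
  my_project:
    # Guard on target.name so only the prod target registers syncs
    +post-hook:
      - "{% if target.name == 'prod' %}{{ datashare(this) }}{% endif %}"
```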
Scope the hook so it only runs for the prod target. Temporary dev or CI schemas should not create datashare syncs by default.
Check Table Size Before Syncing
Datashare syncs are billed by reported bytes written during the sync and by byte-months of storage at your target warehouse, so it is worth checking how large a source table is before you enable a sync. You can read the Dune source table’s on-disk size from the `$size` metadata table, but you first need to know the physical table name that Dune is scanning, which may differ from the logical name you query.
Use the following two-step workflow from any Trino-compatible client (the Dune SQL Editor, a notebook, or dbt run-operation).
These queries are read-only and do not require the `transformations=true` session property. The `$size` suffix must be attached to the table name and wrapped in double quotes (for example, `schema."table$size"`).
Step 1: Find The Physical Table Name With EXPLAIN
Run EXPLAIN against the table you want to size up:
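For example, using the dbt model discussed later on this page:

```sql
-- Ask Trino for the query plan; the plan reveals the physical table name
EXPLAIN SELECT * FROM dune.dune.lineage_query_2;
```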
In the plan output, look for the `TableScan` or `ScanProject` node. The `table = ...` value is the physical table name you need for step 2. The catalog prefix before the colon (for example `dune:` or `delta_prod:`) is part of the connector reference and should be dropped; only the `schema.table` portion is used in the size query.
Step 2: Query The $size Metadata Table
Plug the physical `schema.table` name from step 1 into the size query, wrapping `table$size` in double quotes:
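Using the example model from this page, and assuming its physical name matches the logical `dune.lineage_query_2` (for dbt-created tables it often does, but verify with step 1), the query looks like:

```sql
-- schema."table$size": note the double quotes around table$size
SELECT * FROM dune."lineage_query_2$size";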
The size is reported in bytes (1 GB = 1,000,000,000 bytes). Treat it as a planning estimate only: it is not the destination warehouse table size, and it is not the exact billing meter. Destination storage can differ because each warehouse uses different formats, compression, layout, and metadata. Datashare sync billing is based on bytes written during synchronization, as reported by Dune’s sync system. This can differ from both the Dune source table at-rest size and the destination warehouse at-rest size.
Example 1: A dbt-Created Table
For the dbt model `dune.dune.lineage_query_2`, run `EXPLAIN` and read the physical name from the `TableScan` line in the plan.
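The relevant plan line might resemble the following. This is illustrative only: exact plan formatting varies by Trino version, and the physical name is assumed here to match the logical one.

```
TableScan[table = dune:dune.lineage_query_2]
```

Drop the `dune:` catalog prefix, leaving `dune.lineage_query_2` for the size query.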
Example 2: A Dune-Managed Table With A Suffix
For a Dune-managed table such as `base.transactions`, the plan instead shows a `ScanProject` node referencing a different physical name. Note the `_0002` suffix on the physical table: that suffixed name is the one you must use in the size query, with the `delta_prod:` catalog prefix dropped.
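Assuming the physical name resolved to `base.transactions_0002` (the exact suffix in your plan output may differ), the size query becomes:

```sql
-- Physical name from the plan, catalog prefix dropped
SELECT * FROM base."transactions_0002$size";
```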
Configure A Model
Enable datashare in `meta.datashare` on a table or incremental model:
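A sketch of an incremental model using the configuration fields documented below. The model name, source table, and column names are hypothetical; the field names come from this page:

```sql
-- models/my_datashare_model.sql (model and source names are illustrative)
{{
  config(
    materialized = 'incremental',
    unique_key = 'tx_hash',
    meta = {
      'datashare': {
        'enabled': true,
        'time_column': 'block_date',
        'time_start': "date '2024-01-01'",
        -- time_end omitted: defaults to now()
        'unique_key_columns': ['tx_hash'],
        'target_type': 'snowflake',
        'target_region': 'us'
      }
    }
  )
}}
select block_date, tx_hash, amount
from my_schema.my_source_table  -- hypothetical source
```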
Configuration Fields
| Field | Required | Description |
|---|---|---|
| `enabled` | Yes | Must be `true` to trigger a sync. |
| `time_column` | Yes | Column used by the sync window. |
| `time_start` | Yes | SQL expression for the start of the window. |
| `time_end` | No | SQL expression for the end of the window. Defaults to `now()`. |
| `unique_key_columns` | No | Unique row identity columns. Falls back to the model `unique_key`. |
| `target_type` | Yes | Datashare target platform where the synced table is delivered. One of `snowflake` or `bigquery`. Must match a target configured for your team. |
| `target_region` | Yes | Region of the target warehouse to sync into (for example, `us` or `eu`). Must match a region configured for your team’s datashare target. |
Keep `time_column`, `time_start`, and `time_end` at the same granularity. For example, if `time_column` is a date, use date-based expressions rather than hour-based timestamp windows.
Full Refresh Behavior
The sync uses `full_refresh = true` when:
- the model is materialized as a `table`
- the incremental model is running for the first time
- the model is run with `--full-refresh`
Otherwise, incremental runs sync with `full_refresh = false`.
Manual Syncs
You can trigger a sync manually with `dbt run-operation`:
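A sketch of a manual invocation, assuming the template repo exposes a `datashare` macro callable via `run-operation`; the macro name and argument shape are illustrative, so check the dune-dbt-template repo for the actual entry point:

```bash
# Hypothetical macro name and arguments
dbt run-operation datashare --args '{"table_name": "my_datashare_model"}' --target prod
```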
This registers the same datashare sync that the post-hook would trigger during a normal dbt run.
Monitor Syncs
Query `dune.datashare.table_syncs` for the current registration state and `dune.datashare.table_sync_runs` for execution history.
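For example, since only the table names are documented here, a simple way to inspect both is to select everything:

```sql
-- Current registration state for synced tables
SELECT * FROM dune.datashare.table_syncs;

-- Execution history of individual sync runs
SELECT * FROM dune.datashare.table_sync_runs;
```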