
dbt to Datashare

Use dbt to publish transformation outputs from your private Dune namespace into a configured Datashare target such as Snowflake or BigQuery. This is useful when you want to keep the transformation logic on Dune, but consume the resulting tables inside your own warehouse.
dbt to Datashare is an enterprise workflow that requires both Data Transformations and Datashare to be enabled for your team.
Datashare syncs are billed based on bytes transferred and byte-months of storage for the synced table.
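A byte-month is one byte stored for one month. As rough, illustrative arithmetic only (not Dune’s billing meter or pricing), a single sync that writes a 200 GB table which then sits at the target for half a month counts as roughly 200 GB transferred plus 100 GB-months of storage:
-- Illustrative arithmetic only; not Dune's billing meter or pricing.
SELECT
    200 * 1e9       AS bytes_transferred,      -- one sync writing 200 GB
    200 * 1e9 * 0.5 AS byte_months_of_storage  -- 200 GB stored for half a month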

How It Works

  1. dbt builds a table or incremental model in your Dune namespace.
  2. A dbt post-hook runs ALTER TABLE ... EXECUTE datashare(...) on that table.
  3. Dune registers or updates the datashare sync and sends the data to your configured target.
  4. You monitor the sync in dune.datashare.table_syncs and dune.datashare.table_sync_runs.
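For illustration, the statement issued in step 2 has the general shape below. The named arguments are hypothetical placeholders: the template’s macro assembles the real call from the model’s meta.datashare config (see Configure A Model), so you never write it by hand.
-- Shape sketch only: the argument names here are hypothetical; the macro
-- builds the actual datashare(...) call from meta.datashare.
ALTER TABLE dune.your_team.my_model
EXECUTE datashare(time_column => 'block_date', target_type => 'snowflake')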

Prerequisites

  • Enterprise account with Data Transformations enabled
  • Datashare configured for your team by Dune
  • Dune API key with write access
  • A dbt project using the Dune Trino connector
The fastest way to get started is the public template repo:

dune-dbt-template

Includes the datashare macro, a prod-only post-hook, and an opt-in example model.

Add The dbt Post-Hook

Add the datashare post-hook to your dbt_project.yml:
models:
  your_project:
    +post-hook:
      - sql: "{{ datashare_trigger_sync() }}"
        transaction: true
For safety, the template runs this hook only on the prod target. Temporary dev or CI schemas should not create datashare syncs by default.

Check Table Size Before Syncing

Datashare syncs are billed by reported bytes written during the sync and by byte-months of storage at your target warehouse, so it is worth checking how large a source table is before you enable a sync. You can read the Dune source table’s on-disk size from the $size metadata table, but you first need to know the physical table name that Dune is scanning, which may differ from the logical name you query. Use the following two-step workflow from any Trino-compatible client (the Dune SQL Editor, a notebook, or dbt run-operation).
These queries are read-only and do not require the transformations=true session property. The $size suffix must be attached to the table name and wrapped in double quotes (for example, schema."table$size").

Step 1: Find The Physical Table Name With EXPLAIN

Run EXPLAIN against the table you want to size up:
EXPLAIN SELECT * FROM team.schema.table;
In the query plan, look for the TableScan or ScanProject node. The table = ... value is the physical table name you need for step 2. The catalog prefix before the colon (for example dune: or delta_prod:) is part of the connector and should be dropped — only the schema.table portion is used in the size query.

Step 2: Query The $size Metadata Table

Plug the physical schema.table name from step 1 into the size query, wrapping table$size in double quotes:
SELECT
    size_bytes,
    size_bytes / 1000.0 / 1000 / 1000 AS source_size_gb,
    size_bytes / 1000.0 / 1000 / 1000 / 1000 AS source_size_tb
FROM dune.<schema>."<table>$size";
This returns the Dune source table’s at-rest size using SI units (1 GB = 1,000,000,000 bytes). Treat it as a planning estimate only: it is not the destination warehouse table size, and it is not the exact billing meter. Destination storage can differ because each warehouse uses different formats, compression, layout, and metadata. Datashare sync billing is based on bytes written during synchronization, as reported by Dune’s sync system. This can differ from both the Dune source table at-rest size and the destination warehouse at-rest size.

Example 1: A dbt-Created Table

For the dbt model dune.dune.lineage_query_2:
EXPLAIN SELECT * FROM dune.dune.lineage_query_2;
The TableScan line in the plan reads:
TableScan[table = dune:dune.lineage_query_2]
Here the physical name matches the logical name, so the size query is:
SELECT
    size_bytes,
    size_bytes / 1000.0 / 1000 / 1000 AS source_size_gb,
    size_bytes / 1000.0 / 1000 / 1000 / 1000 AS source_size_tb
FROM dune.dune."lineage_query_2$size";

Example 2: A Dune-Managed Table With A Suffix

For a Dune-managed table such as base.transactions:
EXPLAIN SELECT * FROM base.transactions;
The plan shows a ScanProject node referencing a different physical name:
ScanProject[table = delta_prod:base.transactions_0002]
Notice the _0002 suffix on the physical table — that is the name you must use in the size query (drop the delta_prod: catalog prefix):
SELECT
    size_bytes,
    size_bytes / 1000.0 / 1000 / 1000 AS source_size_gb,
    size_bytes / 1000.0 / 1000 / 1000 / 1000 AS source_size_tb
FROM base."transactions_0002$size";
After confirming the source-size estimate is acceptable, run a one-off sync with dry_run: true (see Manual Syncs) to preview the generated SQL before enabling the post-hook in production.

Configure A Model

Enable datashare in meta.datashare on a table or incremental model:
{% set time_start = "current_date - interval '1' day" if is_incremental() else "current_date - interval '2' day" %}
{% set time_end = "current_date + interval '1' day" %}

{{ config(
    materialized = 'incremental',
    incremental_strategy = 'merge',
    unique_key = ['block_number', 'block_date'],
    meta = {
        "datashare": {
            "enabled": true,
            "time_column": "block_date",
            "time_start": time_start,
            "time_end": time_end,
            "target_type": "snowflake",
            "target_region": "us"
        }
    }
) }}

select ...
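The select itself is an ordinary dbt model body. A minimal sketch to pair with the config above, where base.transactions and the column names are placeholders and the incremental filter is only an assumed way to keep the model aligned with the sync window:
-- Placeholder model body: base.transactions, block_number, and block_date are
-- illustrative names; align the incremental filter with your own time window.
select
    block_number,
    block_date,
    count(*) as tx_count
from base.transactions
{% if is_incremental() %}
where block_date >= current_date - interval '1' day
{% endif %}
group by 1, 2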

Configuration Fields

  • enabled (required): Must be true to trigger a sync.
  • time_column (required): Column used by the sync window.
  • time_start (required): SQL expression for the start of the window.
  • time_end (optional): SQL expression for the end of the window. Defaults to now().
  • unique_key_columns (optional): Unique row identity columns. Falls back to the model unique_key.
  • target_type (required): Datashare target platform where the synced table is delivered. One of snowflake or bigquery. Must match a target configured for your team.
  • target_region (required): Region of the target warehouse to sync into (for example, us or eu). Must match a region configured for your team’s datashare target.
Keep time_column, time_start, and time_end at the same granularity. For example, if time_column is a date, use date-based expressions rather than hour-based timestamp windows.
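Conceptually, time_column, time_start, and time_end define the slice of the model that is re-synced on each run. For the config above, that slice looks roughly like the query below (a sketch only, not the SQL Dune actually generates):
-- Sketch of the sync window implied by the config above; not the generated SQL.
select *
from dune.your_team.my_model
where block_date >= current_date - interval '1' day   -- time_start (incremental run)
  and block_date <  current_date + interval '1' day   -- time_end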

Full Refresh Behavior

The sync uses full_refresh = true when:
  • the model is materialized as a table
  • the incremental model is running for the first time
  • the model is run with --full-refresh
Normal incremental runs use full_refresh = false.

Manual Syncs

You can trigger a sync manually with dbt run-operation:
uv run dbt run-operation datashare_trigger_sync_operation --target prod --args '
model_selector: my_model
dry_run: true
'
This is useful for previewing the generated SQL or running one-off syncs outside a normal dbt run.

Monitor Syncs

SELECT *
FROM dune.datashare.table_syncs
WHERE source_schema = 'your_team';

SELECT *
FROM dune.datashare.table_sync_runs
WHERE source_schema = 'your_team'
ORDER BY created_at DESC;
Use table_syncs for the current registration state and table_sync_runs for execution history.
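To narrow table_sync_runs to the most recent run of a single model, a query like the one below works; note that source_table and status are assumed column names here and may differ from the actual schema:
-- source_table and status are assumed column names; check the actual columns
-- of dune.datashare.table_sync_runs before relying on them.
SELECT created_at, source_table, status
FROM dune.datashare.table_sync_runs
WHERE source_schema = 'your_team'
  AND source_table = 'my_model'
ORDER BY created_at DESC
LIMIT 1;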

Remove A Table From Datashare

To stop syncing a table, run delete_datashare on it:
ALTER TABLE dune.your_team.my_table EXECUTE delete_datashare
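Afterwards you can re-run the table_syncs query from Monitor Syncs to confirm the registration is gone (how a removed sync is represented there may vary):
-- After delete_datashare, the sync for my_table should no longer be listed
-- (or should appear in a removed state).
SELECT *
FROM dune.datashare.table_syncs
WHERE source_schema = 'your_team';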

Reference