Adding Lightdash's static IP addresses to your allow-list
Take the `id` of your BigQuery project and pop it into the `project` field in the Warehouse Connection form in Lightdash.
The service account you connect with needs the following roles:
- `roles/bigquery.dataViewer` (to see data in your project)
- `roles/bigquery.jobUser` (to run queries in your project)
- `roles/bigquery.dataViewer` on each additional BigQuery project you want to query.
Once you have a service account all ready to go, you’ll need to add its JSON key file to Lightdash in the key file
section of the Warehouse Connection page.
The dataset `location` may be either a multi-regional location (e.g. `EU`, `US`) or a regional location (e.g. `us-west2`). Check out the BigQuery documentation for more information on dataset locations.
You can find the location of the dataset you’re using for your dbt project in your dbt `profiles.yml` file, or in your BigQuery console.
The `Timeout in seconds` configuration sets how long a BigQuery query may run before it’s cancelled.
You can set the priority of the queries Lightdash runs in BigQuery using the `priority` configuration in your Warehouse Connection settings. The `priority` field can be set to one of `batch` or `interactive`.
For more information on query priority, check out the BigQuery documentation.
The `retries` configuration specifies the number of times Lightdash should retry queries that result in unhandled server errors.
For example, setting `retries` to 5 means that Lightdash will retry BigQuery queries 5 times with a delay. If the query does not succeed after the fifth attempt, then Lightdash will raise an error.
By default, the number of retries is set to 3.
If `Maximum bytes billed` is set, then queries executed by Lightdash will fail if they exceed the configured maximum bytes threshold. This configuration should be supplied as an integer number of bytes.
For example, setting this to `1000000000` means that if a query would bill more than a gigabyte of data (e.g. 2GB), then BigQuery will reject the query and you’d get an error like this:
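(This is a sketch of BigQuery’s error format, not verbatim Lightdash output — the exact byte counts will depend on your configured limit and your query.)

```
Query exceeded limit for bytes billed: 1000000000. 2000000000 or higher required.
```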
`Start of week`: `Auto` sets it to whatever the default is for your data warehouse. Or, you can customize it and select the day of the week from the drop-down menu. This will be taken into account when using the ‘WEEK’ time interval in Lightdash.
You can find the data set name in your dbt `profiles.yml` file.
If you’re a dbt Cloud user you can find this under your profile in the dbt Cloud IDE.
If you’re developing locally, open your `profiles.yml` file at `~/.dbt/profiles.yml` and look for a field named `schema`:
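For reference, a minimal sketch of what that section of `profiles.yml` can look like — the profile, project and data set names below are made up, and your file will likely contain more fields:

```yaml
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: bigquery
      method: service-account
      keyfile: /path/to/service-account.json
      project: my-gcp-project   # goes in Lightdash’s project field
      dataset: my_dataset       # the data set (schema) Lightdash asks for
      location: EU              # multi-regional (EU, US) or regional (us-west2)
      threads: 1
```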
`verify-full`.
`verify-full`.
`verify-ca` or `verify-full`.
`Start of week`: `Auto` sets it to whatever the default is for your data warehouse. Or, you can customize it and select the day of the week from the drop-down menu. This will be taken into account when using the ‘WEEK’ time interval in Lightdash.
You can find the schema in your dbt `profiles.yml` file.
If you’re a dbt Cloud user you can find this under your profile in the dbt Cloud IDE.
If you’re developing locally, open your `profiles.yml` file at `~/.dbt/profiles.yml` and look for a field named `schema`:
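For reference, a minimal sketch of a Postgres profile (all values made up; Redshift profiles look very similar). Note the `keepalives_idle` field, which is discussed below:

```yaml
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: postgres
      host: mydb.example.com
      port: 5432
      user: lightdash_user
      password: my_password
      dbname: analytics
      schema: analytics        # the schema Lightdash asks for
      keepalives_idle: 240     # seconds between keepalive pings; see below
      threads: 1
```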
If the database closes its connection while Lightdash is waiting for data, you may see the error `SSL SYSCALL error: EOF detected`. Lowering the `keepalives_idle` value may prevent this, because the server will send a ping to keep the connection active more frequently.
By default, this value is set to 240 seconds, but can be configured lower (perhaps 120 or 60), at the cost of a chattier network connection.
`Start of week`: `Auto` sets it to whatever the default is for your data warehouse. Or, you can customize it and select the day of the week from the drop-down menu. This will be taken into account when using the ‘WEEK’ time interval in Lightdash.
Your account identifier is the first part of the URL you use to log in to Snowflake, before `snowflakecomputing.com`.
For example, if you log in via https://aaa99827.snowflakecomputing.com/console/login#/ then the account identifier is `aaa99827`.
Another supported format is `<organization_name>-<account_name>`, where `organization_name` and `account_name` can be found by following any of the methods listed in Managing accounts in your organization.
Adding security integration on Snowflake warehouse
On your Snowflake warehouse, create a `security integration` for Lightdash, along the lines of the sketch below.
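A minimal sketch, assuming a custom OAuth integration — the integration name, redirect URI and token validity below are placeholders, not values confirmed by Lightdash, so check with the Lightdash team for the exact settings your instance needs:

```sql
-- Placeholder name and redirect URI: substitute your own values.
CREATE SECURITY INTEGRATION lightdash_oauth
  TYPE = OAUTH
  ENABLED = TRUE
  OAUTH_CLIENT = CUSTOM
  OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
  OAUTH_REDIRECT_URI = 'https://<your-lightdash-domain>/api/v1/oauth/redirect/snowflake'
  OAUTH_ISSUE_REFRESH_TOKENS = TRUE
  OAUTH_REFRESH_TOKEN_VALIDITY = 86400;
```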
Once it’s created, run `DESC SECURITY INTEGRATION` on it and take note of the `OAUTH_AUTHORIZATION_ENDPOINT` and `OAUTH_TOKEN_ENDPOINT` values.
Then run `SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('<integration name>')` and take note of `OAUTH_CLIENT_SECRET`, `OAUTH_CLIENT_SECRET_2` and `OAUTH_CLIENT_ID`.
Finally, contact the Lightdash team at support@lightdash.com to set up Sign in with Snowflake for your instance.
You can find the schema in your dbt `profiles.yml` file.
If you’re a dbt Cloud user you can find this under your profile in the dbt Cloud IDE.
If you’re developing locally, open your `profiles.yml` file at `~/.dbt/profiles.yml` and look for a field named `schema`:
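For reference, a minimal sketch of a Snowflake profile (all values made up):

```yaml
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: snowflake
      account: aaa99827        # the account identifier described above
      user: my_user
      password: my_password
      role: transformer
      warehouse: transforming
      database: analytics
      schema: analytics        # the schema Lightdash asks for
      threads: 1
```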
`Start of week`: `Auto` sets it to whatever the default is for your data warehouse. Or, you can customize it and select the day of the week from the drop-down menu. This will be taken into account when using the ‘WEEK’ time interval in Lightdash.
You can find your server hostname and HTTP path from the `Compute` tab in the sidebar: open your cluster, go to the `Advanced options` tab, then the `JDBC/ODBC` tab.
To generate a personal access token, go to `Settings` by clicking the cog ⚙️ in the sidebar and select `User settings`, then click `Generate token`. You’ll be asked to enter a name and expiry.
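If you also manage these values in dbt, here’s a hedged sketch of where each one lives in `profiles.yml` (all values below are made up):

```yaml
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: databricks
      schema: analytics
      host: dbc-a1b2345c-d6e7.cloud.databricks.com  # server hostname from the JDBC/ODBC tab
      http_path: /sql/1.0/warehouses/abc123def456   # HTTP path from the JDBC/ODBC tab
      token: dapi0123456789abcdef                   # the personal access token you generated
      threads: 1
```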
`Start of week`: `Auto` sets it to whatever the default is for your data warehouse. Or, you can customize it and select the day of the week from the drop-down menu. This will be taken into account when using the ‘WEEK’ time interval in Lightdash.
Host: e.g. `mycluster.mydomain.com`. Don’t include the `http://` or `https://` prefix.
User: e.g. `user.name` or `user.name@mydomain.com`. Format for Starburst Galaxy: `user.name@mydomain.com/role`.
Catalog: e.g. `my_postgres_catalog`.
Port: e.g. `443`.
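To see how these fields map onto a dbt profile, here’s a hedged sketch assuming the dbt-trino adapter (values made up; field names may vary between adapter versions, so check the adapter docs):

```yaml
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: trino
      method: ldap
      host: mycluster.mydomain.com   # no http:// or https:// prefix
      port: 443
      user: user.name@mydomain.com
      password: my_password
      database: my_postgres_catalog  # the catalog
      schema: my_schema
      http_scheme: https
      threads: 1
```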
`Start of week`: `Auto` sets it to whatever the default is for your data warehouse. Or, you can customize it and select the day of the week from the drop-down menu. This will be taken into account when using the ‘WEEK’ time interval in Lightdash.
Make sure to select the `repo` scope when you’re creating the token.
If you’re using fine-grained tokens, give the token access to the repository where your dbt project is located. Under `Repository` permissions, select `Contents` —> `Read and Write` and `Pull Requests` —> `Read and Write`. Click `Generate token` and copy the token.
The repository should be in the format `my-org/my-repo`, e.g. `lightdash/lightdash-analytics`.
This is the branch that you want to connect to your Lightdash project, e.g. `main`, `master` or `dev`.
By default, we’ve set this to `main` but you can change it to whatever you’d like.
This is the folder where your `dbt_project.yml` file is found in the GitHub repository you entered above.
Put `/` if your `dbt_project.yml` file is in the main folder of your repo (e.g. `lightdash/lightdash-analytics/dbt_project.yml`).
If your dbt project is in a sub-folder of your repo (e.g. `lightdash/lightdash-analytics/dbt/dbt_project.yml`), then you’d put `/dbt` in this field.
The host domain defaults to `github.com`.
Make sure to select the `read_repository` scope when you’re creating the token. The token (if using a project access token) or the user (if using a personal access token) needs to have permission to download the code. Normally this would be the `Reporter` role.
The repository should be in the format `my-org/my-repo`. e.g. if my browser had https://gitlab.com/lightdash/lightdash-analytics.gitlab.io, I’d put in `lightdash/lightdash-analytics` as my repository in Lightdash.
This is the branch that you want to connect to your Lightdash project, e.g. `main`, `master` or `dev`.
By default, we’ve set this to `main` but you can change it to whatever you’d like.
This is the folder where your `dbt_project.yml` file is found in the GitLab repository you entered above.
If your `dbt_project.yml` file is in the main folder of your repo (e.g. `lightdash/lightdash-analytics/dbt_project.yml`), then you don’t need to change anything in here: you can just leave the default value we’ve put in.
If your dbt project is in a sub-folder in your repo (e.g. `lightdash/lightdash-analytics/dbt/dbt_project.yml`), then you’ll need to include the path to the sub-folder where your dbt project is (e.g. `/dbt`).
The host domain defaults to `gitlab.io`.
This is the branch that you want to connect to your Lightdash project, e.g. `main`, `master` or `dev`.
By default, we’ve set this to `main` but you can change it to whatever you’d like.
This is the folder where your `dbt_project.yml` file is found in the repository you entered above.
If your `dbt_project.yml` file is in the main folder of your repo (e.g. `lightdash/lightdash-analytics/dbt_project.yml`), then you don’t need to change anything in here: you can just leave the default value we’ve put in.
If your dbt project is in a sub-folder in your repo (e.g. `lightdash/lightdash-analytics/dbt/dbt_project.yml`), then you’ll need to include the path to the sub-folder where your dbt project is (e.g. `/dbt`).
Make sure to select the `Project read` and `Repository read` scopes when you’re creating the token.
The repository should be in the format `my-org/my-repo`, e.g. `lightdash/lightdash-analytics`.
This is the branch that you want to connect to your Lightdash project, e.g. `main`, `master` or `dev`.
By default, we’ve set this to `main` but you can change it to whatever you’d like.
This is the folder where your `dbt_project.yml` file is found in the Bitbucket repository you entered above.
Put `/` if your `dbt_project.yml` file is in the main folder of your repo (e.g. `lightdash/lightdash-analytics/dbt_project.yml`).
If your dbt project is in a sub-folder of your repo (e.g. `lightdash/lightdash-analytics/dbt/dbt_project.yml`), then you’d put `/dbt` in this field.
The `CLI` connection type is the default type for projects that were created using the CLI via the `lightdash deploy --create` command.
Usually, we recommend swapping to a direct connection to your git repo after initial project creation, but if you want to continue managing deployments in the CLI, read this guide on how to use `lightdash deploy` and set up continuous deployment.