Lightdash generates some files that need to be stored in an S3-compatible cloud storage. Some options are GCP buckets, AWS S3, and MinIO.
For GCP buckets:

1. Go into GCP: Cloud Storage.
2. Create a new bucket with the following details:
   - Bucket name: for example, lightdash-cloud-file-storage-eu
   - Public access: enforce public access prevention
   - Access control: fine-grained
   - Protection: none
3. Go to Settings > Interoperability and create an access key for a service account. Use the access key as S3_ACCESS_KEY and the secret as S3_SECRET_KEY.
4. The S3_ENDPOINT for Google is https://storage.googleapis.com
5. The S3_REGION for Google is auto
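Putting the steps above together, here is a minimal sketch of the resulting environment variables for Google Cloud Storage. The bucket name and credentials are placeholders, and S3_BUCKET is assumed to be the variable holding your bucket name (check the Environment Variables documentation for the full list):

```
S3_ENDPOINT=https://storage.googleapis.com   # GCS S3-compatible endpoint
S3_REGION=auto                               # GCS accepts "auto" as the region
S3_BUCKET=lightdash-cloud-file-storage-eu    # placeholder bucket name
S3_ACCESS_KEY=...                            # HMAC access key from Settings > Interoperability
S3_SECRET_KEY=...                            # HMAC secret from Settings > Interoperability
```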
For AWS S3 buckets, to export your S3 credentials you need to follow these steps:

Check this guide to see which S3_ENDPOINT is right for your bucket.
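As an illustrative sketch, the endpoint, region, bucket name, and keys below are placeholder assumptions for a bucket in eu-west-1; substitute the values for your own bucket:

```
S3_ENDPOINT=https://s3.eu-west-1.amazonaws.com   # regional endpoint; check the guide above for yours
S3_REGION=eu-west-1                              # the bucket's region
S3_BUCKET=my-lightdash-bucket                    # placeholder bucket name
S3_ACCESS_KEY=AKIA...                            # placeholder IAM access key
S3_SECRET_KEY=...                                # placeholder IAM secret key
```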
For MinIO, see the MinIO documentation on:

- Creating a bucket in MinIO
- Creating access credentials in MinIO

MinIO needs path-style bucket URLs; to enable these, set S3_FORCE_PATH_STYLE: true in your environment variables.
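A minimal sketch of the MinIO-related variables, assuming MinIO is reachable at http://minio:9000 from the Lightdash container (the hostname, port, region, bucket name, and keys are placeholders):

```
S3_ENDPOINT=http://minio:9000      # URL where Lightdash can reach MinIO
S3_REGION=us-east-1                # MinIO's default region unless you configured another
S3_BUCKET=lightdash                # placeholder bucket name
S3_ACCESS_KEY=...                  # MinIO access key
S3_SECRET_KEY=...                  # MinIO secret key
S3_FORCE_PATH_STYLE=true           # address the bucket by path instead of subdomain
```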
Azure Blob Storage is not natively compatible with the S3 API. While Lightdash supports external object storage by allowing integration with S3-compatible APIs, Azure’s storage service does not provide this compatibility out of the box. This means that you cannot use Azure Blob Storage as a drop-in replacement for S3 in Lightdash deployments.
Instead, you can use one of the following S3-compatible solutions within your Azure setup:
To enable Lightdash to use your S3 bucket for cloud storage, you’ll need to set the following environment variables:
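At a minimum these cover the endpoint, region, bucket, and credentials. The sketch below uses placeholder values; check the exact variable names against the documentation linked below:

```
S3_ENDPOINT=...        # your provider's S3-compatible endpoint
S3_REGION=...          # region of the bucket ("auto" for GCS)
S3_BUCKET=...          # name of the bucket Lightdash should write to
S3_ACCESS_KEY=...      # access key (can be omitted when using IAM roles, see below)
S3_SECRET_KEY=...      # secret key (can be omitted when using IAM roles, see below)
```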
For a comprehensive list of all possible S3-related environment variables and other configurations, please visit the Environment Variables documentation.
Lightdash also supports authentication via IAM roles. If you omit the S3_ACCESS_KEY and S3_SECRET_KEY variables, the S3 library will automatically attempt to use IAM roles. For more details on how this works, refer to the AWS SDK for JavaScript documentation on setting credentials in Node.js.
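As a sketch, when relying on an IAM role (for example, an EC2 instance or ECS task with a role attached), the credential variables are simply left unset and the rest stays the same; the endpoint, region, and bucket below are placeholders:

```
S3_ENDPOINT=https://s3.eu-west-1.amazonaws.com   # placeholder regional endpoint
S3_REGION=eu-west-1                              # placeholder region
S3_BUCKET=my-lightdash-bucket                    # placeholder bucket name
# S3_ACCESS_KEY and S3_SECRET_KEY are intentionally omitted:
# the AWS SDK falls back to its default credential chain (IAM role, instance metadata, etc.)
```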
If you are using an IAM role to generate signed URLs, be aware that these URLs have a maximum validity of 7 days due to AWS limitations, regardless of the S3_EXPIRATION_TIME configuration.