Running the dbt Core collector
Note
The latest version of the Collector is 2.159. To view the release notes for this version and all previous versions, please go here.
Generating the command or YAML file
This section walks you through the process of generating the command or YAML file for running the collector on Windows, Linux, or macOS.
To generate the command or YAML file:
On the Organization profile page, go to the Settings tab > Metadata collectors section.
Click the Help me set up a collector button.
On the On-prem collector setup prerequisites screen, read the prerequisites and click Next.
On the On which platform will this collector execute? screen, select whether you will be running the collector on Windows, macOS, or Linux. This determines the format of the YAML file or CLI command generated at the end. Click Next.
On the Choose metadata collector type you would like to setup screen, select dbt Core. Click Next.
On the Configure a new on premises dbt Core Collector screen, set the following properties and click Next.
On the next screen, set the following properties and click Next.
Table 2. dbt Core collector parameters

| Field name | Corresponding parameter name | Description | Required? |
| --- | --- | --- | --- |
| dbt artifactory directory | -d=<artifactDirectory>, --artifact-directory=<artifactDirectory> | The directory containing the dbt catalog.json, manifest.json, profiles.yml, dbt_project.yml, and run_results.json files. Without run_results.json, the emitted catalog will not contain activity information, but will otherwise be complete. | Yes |
| dbt profile file path | -P=<profileFile>, --profile-file=<profileFile> | The file containing profile definitions (defaults to the dbt default of .dbt/profiles.yml in the user's home directory). | No |
| dbt profile | -p=<profile>, --profile=<profile> | The dbt profile to use to obtain database location information (defaults to the first profile found in the profile definitions file). | No |
| dbt target | -g=<target>, --target=<target> | The dbt profile target to use to obtain database location information (defaults to the profile's 'target' value). | No |
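For reference, the generated YAML configuration file expresses these options as key/value pairs. The fragment below is an illustrative sketch only: the key names are assumed to mirror the long CLI flags, and the paths and profile names are placeholders. Always use the file generated by the wizard rather than writing one by hand.

```yaml
# Illustrative sketch of the dbt Core options in a generated config file.
# Key names are assumed to mirror the long CLI flags; values are placeholders.
artifact-directory: /home/dwcc/artifactDirectory
profile-file: /home/dwcc/.dbt/profiles.yml
profile: my_dbt_profile
target: prod
```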
On the next screen, first select the Target database. The available options are PostgreSQL, Snowflake, and BigQuery. Use these options to override the connection information from the dbt profile file, or if the dbt profile file is not provided.
For the PostgreSQL database, set the following properties and click Next.
Table 3. PostgreSQL connection parameters

| Field name | Corresponding parameter name | Description | Required? |
| --- | --- | --- | --- |
| Database server | --database-server=<databaseServer> | The server/host for the target database. | No |
| Server port | --database-port=<databasePort> | The port for the target database. | No |
| Database | --database=<database> | The name of the target database. | No |
| Username | --database-user=<databaseUser> | The user credential to use in connecting to the target database. | No |
| Password | --database-password=<databasePassword> | The password credential to use in connecting to the target database. Default value is the environment variable ${DW_DBT_PASSWORD}. | No |
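Rather than placing the database password in the command itself, the collector reads it from the environment variable shown in the table. A minimal sketch of setting it up (the value is a placeholder for your actual password):

```shell
# The value below is a placeholder for your actual PostgreSQL password.
export DW_DBT_PASSWORD='replace-with-your-password'

# Confirm the variable is visible to the shell that will launch Docker.
[ -n "$DW_DBT_PASSWORD" ] && echo "DW_DBT_PASSWORD is set"
```

When running the container, pass the variable through with `-e DW_DBT_PASSWORD=${DW_DBT_PASSWORD}`, as the sample commands in this document do for DW_AUTH_TOKEN.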
If you selected Snowflake as the Target database, set the following properties and click Next.
Table 4. Snowflake connection parameters

| Field name | Corresponding parameter name | Description | Required? |
| --- | --- | --- | --- |
| Snowflake Account | --snowflake-account=<snowflakeAccount> | The Snowflake account/tenant. You can use --database-server as an alternative. | No |
| Database server | --database-server=<databaseServer> | The hostname of the database server to connect to. | No |
| Database port | -p=<port>, --port=<port> | The port of the database server (if not the default). | No |
| Authentication | | Select one of the following options: authenticate with a username & password, or authenticate using a private key file. | |
| Username (username & password authentication) | -u=<user>, --user=<user> | The username used to connect to the database. | No |
| Password (username & password authentication) | -P=<password>, --password=<password> | The environment variable of the password used to connect to the database. We recommend that you use the environment variable ${DW_SNOWFLAKE_PASSWORD} for this parameter. | No |
| Snowflake Private Key File (private key authentication) | --snowflake-private-key-file=<snowflakePrivateKey> | The private key file to use for authentication with Snowflake (for example, rsa_key.p8). | No |
| Snowflake Private Key Password (private key authentication) | --snowflake-private-key-file-password=<snowflakePrivateKeyFilePassword> | The password for the private key file, if the key is encrypted and a password was set. Set it as the environment variable ${DW_SNOWFLAKE_PK_PASSWORD}. | No |
| Snowflake Application | --snowflake-application=<snowflakeApplication> | The application connection parameter to use in connecting to the target Snowflake database. Use datadotworld unless otherwise directed. | No |
| Snowflake Role | --snowflake-role=<snowflakeDatabaseRole> | The role to use in connecting to the target Snowflake database. This is case-insensitive. | No |
| Snowflake Warehouse | --snowflake-warehouse=<snowflakeDatabaseWarehouse> | The warehouse to use in connecting to the target Snowflake database. This is case-insensitive. | No |
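As with the other targets, these options can appear in the generated YAML configuration file. The fragment below is a sketch of the private-key authentication case only: the key names are assumed to mirror the long CLI flags, and all values are placeholders.

```yaml
# Sketch: Snowflake target with private key authentication (values are placeholders).
snowflake-account: myaccount
snowflake-private-key-file: /home/dwcc/rsa_key.p8
snowflake-private-key-file-password: ${DW_SNOWFLAKE_PK_PASSWORD}
snowflake-application: datadotworld
snowflake-role: my_role
snowflake-warehouse: my_warehouse
```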
If you selected BigQuery as the Target database, set the following property and click Next.
Table 5. BigQuery connection parameter

| Field name | Corresponding parameter name | Description | Required? |
| --- | --- | --- | --- |
| BigQuery account credentials file path | --bigquery-credentials-file=<bigqueryCredentialsFile> | The file containing BigQuery service account credentials. This applies only to models with BigQuery references. If provided, the BigQuery project ID is read from this file; otherwise, the BigQuery project in the profile file is used. | No |
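The credentials file referenced above is a standard Google Cloud service account key in JSON format. A heavily abbreviated sketch of its shape follows; all values are placeholders, and you should download the real file from the Google Cloud console rather than constructing it by hand.

```json
{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "collector@my-gcp-project.iam.gserviceaccount.com"
}
```

The project_id field is what the collector reads when this file is provided.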
On the next screen, provide the Collector configuration name. This is the name used to save the configuration details. The configuration is saved and made available on the Metadata collectors summary page from where you can edit or delete the configuration at a later point. Click Save and Continue.
On the Finalize your dbt Core Collector configuration screen, you are notified about the environment variables and directories you need to set up for running the collector. Select whether you want to generate a Configuration file (YAML) or Command line arguments (CLI). Click Next.
Important
You must ensure that you have set up these environment variables and directories before you run the collector.
The next screen gives you an option to download the YAML configuration file or copy the CLI command. If you generated the CLI command, click Done. If you are generating a YAML file, click Next.
The dbt Core command screen gives you the command to use for running the collector using the YAML file.
You will notice that the YAML/CLI has the following additional parameters that are automatically set for you.
Important
Except for the collector version, you should not change the values of any of the parameters listed here.
Table 6. Automatically set parameters

| Parameter name | Details | Required? |
| --- | --- | --- |
| -a=<agent>, --agent=<agent>, --account=<agent> | The ID for the data.world account into which you will load this catalog. This is used to generate the namespace for any URIs generated. | Yes |
| --site=<site> | This parameter should be set only for private instances. Do not set it for public instances and single-tenant installations. | Yes (required for private instance installations) |
| -U, --upload | Whether to upload the generated catalog to the organization account's catalogs dataset. | Yes |
| -L, --no-log-upload | Do not upload the log of the collector run to the organization account's catalogs dataset. | Yes |
| dwcc:<CollectorVersion> | The version of the collector you want to use (for example, datadotworld/dwcc:2.113). | Yes |
Verifying environment variables and directories
Verify that you have set up all the required environment variables identified by the Collector Wizard before running the collector. Alternatively, you can store these credentials in a credential vault and use a script to retrieve them.
Verify that you have set up all the required directories that were identified by the Collector Wizard.
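A quick pre-flight check can be scripted. The sketch below defines a small POSIX-shell helper that reports which of a given list of environment variables are unset; the variable names shown are examples, so substitute the ones your wizard output listed.

```shell
# check_vars: print the name of each listed environment variable that is
# unset or empty, one per line.
check_vars() {
  for var in "$@"; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || echo "$var"
  done
}

# Example usage -- the variable names are placeholders; use the ones the
# Collector Wizard identified for your configuration.
check_vars DW_AUTH_TOKEN DW_DBT_PASSWORD
```

Any name printed is still missing; empty output means every variable in the list is set.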
Running the collector
Important
Before you begin running the collector, make sure you have downloaded the correct version of the collector and that it is available.
Running collector using YAML file
Go to the server where you have set up Docker to run the collector.
Make sure you have downloaded the correct version of the collector. This version should match the version of the collector specified in the command you are using to run the collector.
Place the YAML file generated by the Collector Wizard in the correct directory.
From the command line, run the command generated from the application for executing the YAML file.
Caution
Note that this is just a sample command showing the syntax. You must generate the command specific to your setup from the application UI.
docker run -it --rm --mount type=bind,source=${HOME}/dwcc,target=/dwcc-output \
  --mount type=bind,source=${HOME}/dwcc,target=${HOME}/dwcc \
  --mount type=bind,source=${HOME}/artifactDirectory,target=${HOME}/artifactDirectory \
  --mount type=bind,source=creds.json,target=creds.json -e DW_AUTH_TOKEN=${DW_AUTH_TOKEN} \
  datadotworld/dwcc:2.124 --config-file=/dwcc-output/config-dbt_core.yml
The collector automatically uploads the file to the specified dataset and you can also find the output at the location you specified while running the collector.
At a later point, if you download a newer version of the collector from Docker, you can edit the collector version in the generated command to run the collector with the newer version.
Running collector without the YAML file
Go to the server where you have set up Docker to run the collector.
Make sure you have downloaded the correct version of the collector from here. This version should match the version of the collector specified in the command you are using to run the collector.
From the command line, run the command generated by the application. The following sample command was generated using BigQuery as the target database. Your command will vary based on the target database you selected while generating the command.
Caution
Note that this is just a sample command showing the syntax. You must generate the command specific to your setup from the application UI.
docker run -it --rm --mount type=bind,source=${HOME}/dwcc,target=/dwcc-output \
  --mount type=bind,source=${HOME}/dwcc,target=${HOME}/dwcc \
  --mount type=bind,source=${HOME}/artifactDirectory,target=${HOME}/artifactDirectory \
  --mount type=bind,source=creds.json,target=creds.json datadotworld/dwcc:2.124 \
  catalog-dbt --agent=8bank-catalog-sources --site=solutions --no-log-upload=false \
  --upload=true --api-token=${DW_AUTH_TOKEN} --output=/dwcc-output \
  --name=8bank-catalog-sources-collection --upload-location=ddw-catalogs \
  --artifact-directory=${HOME}/artifactDirectory --bigquery-credentials-file=creds.json
Warning
If the command includes the --database-server parameter, make sure it is not followed by a trailing backslash (\).
The collector automatically uploads the file to the specified dataset and you can also find the output at the location you specified while running the collector.
At a later point, if you download a newer version of the collector from Docker, you can edit the collector version in the generated command to run the collector with the newer version.
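Because the version edit is a plain text substitution on the saved command, it can also be scripted. A sketch using sed (the command string and version tags below are examples only):

```shell
# A saved run command (abbreviated example).
saved='docker run -it --rm datadotworld/dwcc:2.124 --config-file=/dwcc-output/config-dbt_core.yml'

# Swap the image tag to the newer collector version.
updated=$(printf '%s' "$saved" | sed 's/dwcc:2\.124/dwcc:2.159/')
printf '%s\n' "$updated"
```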
Collector runtime and troubleshooting
The catalog collector may run in several seconds to many minutes depending on the size and complexity of the system being crawled.
If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output.
If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Either of these can be sent to support for investigation if the errors are not clear.
A list of common issues and problems encountered when running the collectors is available here.
Automating updates to your metadata catalog
Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:
Frequency of changes to the schema
Business criticality of up-to-date data
For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
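For example, a weekly run can be scheduled with cron. The crontab entry below is a sketch: the wrapper script path is hypothetical and would contain the docker run command generated by the wizard.

```
# Run the collector every Sunday at 02:00; /home/dwcc/run-collector.sh is a
# hypothetical wrapper around the generated docker run command.
0 2 * * 0 /home/dwcc/run-collector.sh >> /home/dwcc/collector-cron.log 2>&1
```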
Managing collector runs and configuration details
From the Metadata collectors summary page, view the collector runs to ensure they are running successfully.
From the same Metadata collectors summary page you can view, edit, or delete the configuration details for the collectors.