
dbt legacy metadata collector

With the release of data.world Collector 2.85, the standard data.world Collector now supports dbt. This documentation is maintained for customers who prefer to remain on the legacy collector; for all new users, we recommend the data.world Collector instead.

Prerequisites

  • The computer running the catalog collector should have connectivity to the internet or access to the source instance, a minimum of 2 GB of memory, and a 2 GHz processor.

  • Docker must be installed. For more information, see https://docs.docker.com/get-docker/. If you cannot use Docker, a Java version of the collector is also available -- contact us for details.

  • The files catalog.json, manifest.json, and profiles.yml must be in the same directory on the host machine, e.g., /tmp.
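A missing or misplaced file is a common cause of failed runs, so it can be worth verifying the layout before invoking the collector. The sketch below does that check; the `check_dbt_inputs` helper name is our own, not part of the collector.

```shell
# check_dbt_inputs: verify that catalog.json, manifest.json, and profiles.yml
# all sit in the same directory, as the collector requires.
# Hypothetical helper -- not part of the collector itself.
check_dbt_inputs() {
  dir="$1"
  missing=0
  for f in catalog.json manifest.json profiles.yml; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $dir/$f" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: check_dbt_inputs /tmp && echo "inputs ready"
```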

Installing the collector

  1. Request access to a download link for the catalog collector from your data.world representative. Once you receive the link, download the catalog collector Docker image (or download it programmatically with curl).

  2. Load the Docker image into the local computer’s Docker environment:

    docker load -i dwdbt-X.Y.tar.gz

    where X.Y is the version number of the dbt collector image.

  3. The previous command returns an <image id>, which needs to be tagged as 'dwdbt'. Copy the <image id> and use it in the docker tag command:

    docker tag <image id> dwdbt
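Steps 2 and 3 can also be scripted so the image ID never has to be copied by hand. The parsing below assumes `docker load` prints a "Loaded image ID: sha256:..." line, which is the format recent Docker versions use when the archive carries no repository tag; verify the output on your version before relying on it.

```shell
# extract_image_id: pull the image ID out of `docker load` output.
# Assumes the "Loaded image ID: sha256:..." line format (an assumption
# about your Docker version's output, not guaranteed by the collector docs).
extract_image_id() {
  awk '/Loaded image/ {print $NF}'
}

# Usage (requires Docker and the downloaded archive):
#   docker tag "$(docker load -i dwdbt-X.Y.tar.gz | extract_image_id)" dwdbt
```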

Basic parameters

Each collector has parameters that are required, parameters that are recommended, and parameters that are completely optional. Required parameters must be present for the command to run. Recommended parameters are either:

  • parameters that exist in pairs, and one or the other must be present for the command to run (e.g., --agent and --base)

  • parameters that we recommend to improve your experience running the command in some way

Together, the required and recommended parameters make up the Basic parameters for each collector. The Basic parameters for this collector are:

-a, --agent, --account=<agent> - The ID for the data.world account into which you will load this catalog - this is used to generate the namespace for any URIs generated.

-P, --profile-file <profileFile> - The file containing profile definitions (defaults to dbt default of .dbt/profiles.yml in the user's home directory)

-g, --target <target> - The dbt profile target to use to obtain database location information (defaults to the profile's 'target' value)

-p, --profile=<profile> - the dbt profile to use to obtain database location information (defaults to first profile found in profile definitions file)

Example of a dbt command

The example below is an almost copy-and-paste command for any Unix environment that uses a Bash shell (e.g., macOS and Linux). It uses the minimal set of parameters required to run the collector; your instance may require more. Information about the referenced parameters follows, and a complete list of parameters is at the end of this guide. Edit the command by adding any other parameters you wish to use and by replacing the parameter values with your own information. When you are finished, run your command.

docker run -it --rm --mount type=bind,source=/tmp,target=/dbt-input \
--mount type=bind,source=/tmp,target=/dbt-output dwdbt -a <account> \
-P <profileFile> -g <target> -p <profile> /dbt-input /dbt-output

Collector runtime and troubleshooting

The catalog collector may run in several seconds to many minutes depending on the size and complexity of the system being crawled.

  • If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output.

  • If there was an issue connecting to or running the catalog collector, there will be either a stack trace or a *.log file. Either can be sent to support for investigation if the errors are not clear.

A list of common issues and problems encountered when running the collectors is available here.
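The post-run check described above can be scripted so success and failure are easy to tell apart, which is handy once runs are automated. A minimal sketch (the helper name is ours; pass the host directory you mounted as the collector's output, e.g., /tmp):

```shell
# check_collector_output: list the generated catalog file(s) if present,
# otherwise point at the usual failure artifacts.
# Hypothetical helper -- not part of the collector itself.
check_collector_output() {
  dir="$1"
  ls "$dir"/*.dwec.ttl 2>/dev/null || {
    echo "no catalog in $dir; look for a *.log file or a stack trace" >&2
    return 1
  }
}

# Example: check_collector_output /tmp
```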

Automating updates to your metadata catalog

Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:

  • Frequency of changes to the schema

  • Business criticality of up-to-date data

For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
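As a sketch, a weekly cron schedule might look like the following. The wrapper-script path, mount points, and all <...> parameter values are placeholders to replace with your own; cron itself does not support multi-line commands, hence the wrapper script.

```shell
#!/bin/sh
# run-dbt-collector.sh -- wrapper script invoked by cron.
# Replace the <...> placeholders and /tmp paths with your own values.
docker run --rm \
  --mount type=bind,source=/tmp,target=/dbt-input \
  --mount type=bind,source=/tmp,target=/dbt-output \
  dwdbt -a <account> -P <profileFile> -g <target> -p <profile> \
  /dbt-input /dbt-output

# Crontab entry: run every Sunday at 02:00
# 0 2 * * 0 /usr/local/bin/run-dbt-collector.sh
```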

dbt parameters

--help - Show the help text and exit.

--upload-location=<uploadLocation> - The dataset to which the catalog is to be uploaded, specified as a simple dataset name to upload to that dataset within the organization's account, or [account/dataset] to upload to a dataset in some other account (ignored if --upload not specified)

-a, --agent, --account=<agent> - The ID for the data.world account into which you will load this catalog - this is used to generate the namespace for any URIs generated

-b, --base=<base> - The base URI to use as the namespace for any URIs generated (Must use this OR --agent)

-P, --profile-file <profileFile> - The file containing profile definitions (defaults to dbt default of .dbt/profiles.yml in the user's home directory)

-g, --target <target> - The dbt profile target to use to obtain database location information (defaults to the profile's 'target' value)

-p, --profile=<profile> - the dbt profile to use to obtain database location information (defaults to first profile found in profile definitions file)

-t, --api-token=<apiToken> - The data.world API token to use for authentication; default is to use an environment variable named DW_AUTH_TOKEN

-U, --upload - Whether to upload the generated catalog to the organization account's catalogs dataset or to another location specified with --upload-location (requires --api-token)
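Combining the upload-related parameters above, a run that pushes the generated catalog to data.world might look like the following sketch. The <...> values are placeholders, and `-e DW_AUTH_TOKEN` forwards the host's token into the container so --api-token can fall back to its documented default.

```shell
# Template only: replace <account> and <account/dataset> with your values.
# DW_AUTH_TOKEN must hold a valid data.world API token; -U uploads the
# catalog, and --upload-location names the destination dataset.
export DW_AUTH_TOKEN="<your data.world API token>"
docker run -it --rm -e DW_AUTH_TOKEN \
  --mount type=bind,source=/tmp,target=/dbt-input \
  --mount type=bind,source=/tmp,target=/dbt-output \
  dwdbt -a <account> -U --upload-location=<account/dataset> \
  /dbt-input /dbt-output
```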