BigQuery and the data.world Collector
Introduction

The steps to catalog metadata using the data.world Collector are as follows:

  1. Read over the data.world Collector FAQ to familiarize yourself with the Collector.

  2. Verify that your installation meets the prerequisites below.

  3. Validate that you have a ddw-catalogs (or other) dataset set up to hold your catalog files when you are done running the collector.

  4. Write the collector command and run it.

  5. Upload the resulting file to your ddw-catalogs dataset (or other dataset as configured for your organization).

Prerequisites
  • The computer running the catalog collector should have connectivity to the internet or access to the source instance, a minimum of 2 GB of memory, and a 2 GHz processor.

  • Docker must be installed. For more information see https://docs.docker.com/get-docker/.

  • The user defined to run the data.world Collector must have read access to all resources being cataloged.

  • The computer running the data.world Collector needs a Java Runtime Environment (JRE), version 11 or higher. (OpenJDK available here)

Permissions

The account used to authenticate needs to read the information schema of the databases about which it is collecting metadata. It does not need read access to the data within the tables.

More on BigQuery
BigQuery Data Viewer

roles/bigquery.dataViewer

When applied to a dataset, dataViewer provides permissions to:

  • Read the dataset's metadata and list tables in the dataset.

  • Read data and metadata from the dataset's tables.

When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.

Permissions

bigquery.datasets.get
bigquery.datasets.getIamPolicy
bigquery.models.getData
bigquery.models.getMetadata
bigquery.models.list
bigquery.routines.get
bigquery.routines.list
bigquery.tables.export
bigquery.tables.get
bigquery.tables.getData
bigquery.tables.list
resourcemanager.projects.get
resourcemanager.projects.list
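
For reference, granting this role to the account that runs the Collector might look like the following sketch, which uses the standard gcloud CLI; the project name and service account email are hypothetical placeholders, not values from this article:

  # Grant BigQuery Data Viewer to the Collector's service account (placeholder names).
  gcloud projects add-iam-policy-binding my-gcp-project \
    --member="serviceAccount:dwcc-collector@my-gcp-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"
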
BigQuery User

roles/bigquery.user

Provides permissions to run jobs, including queries, within the project. A user with this role can enumerate and cancel their own jobs, and enumerate datasets within a project. It also allows the creation of new datasets within the project; the creator is granted the bigquery.dataOwner role for these new datasets.

Permissions

bigquery.bireservations.get
bigquery.capacityCommitments.get
bigquery.capacityCommitments.list
bigquery.config.get
bigquery.datasets.create
bigquery.datasets.get
bigquery.datasets.getIamPolicy
bigquery.jobs.create
bigquery.jobs.list
bigquery.models.list
bigquery.readsessions.*
bigquery.reservationAssignments.list
bigquery.reservationAssignments.search
bigquery.reservations.get
bigquery.reservations.list
bigquery.routines.list
bigquery.savedqueries.get
bigquery.savedqueries.list
bigquery.tables.list
bigquery.transfers.get
resourcemanager.projects.get
resourcemanager.projects.list
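
The BigQuery User role can be granted the same way, swapping in roles/bigquery.user; the names below are again placeholders:

  gcloud projects add-iam-policy-binding my-gcp-project \
    --member="serviceAccount:dwcc-collector@my-gcp-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.user"
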
What is cataloged

The information cataloged by the collector includes:

Ways to run the data.world Collector

There are a few different ways to run the data.world Collector--any of which can be combined with an automation strategy to keep your catalog up to date:

  • Create a configuration file (config.yml) - This option stores all the information needed to catalog your data sources. It is an especially valuable option if you have multiple data sources to catalog, as you don't need to run multiple scripts or CLI commands separately. (A sample sketch appears below.)

  • Create a configuration script - This option is very similar to the config.yml option. We have a quick start with instructions to create and run a script here. The instructions are for Snowflake, but you can find the parameters you will need for your particular data source in the Parameters section below.

  • Run the collector through a CLI - This option makes regular, repeating runs of the collector laborious and time-consuming, as the commands must be re-entered for each run.

For this example we will be running the command from a CLI.
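
As an illustration only, a config.yml for a BigQuery source might be shaped like the sketch below. The key names mirror the CLI parameters covered in the Parameters section; every name and value here is a placeholder to confirm against that section for your Collector version.

  # Hypothetical sketch of a collector configuration file (config.yml).
  # All keys and values are illustrative; confirm them against the Parameters section.
  command: catalog-bigquery
  options:
    agent: my-organization          # your data.world organization (agent)
    api-token: ${DW_AUTH_TOKEN}     # data.world API token, read from an environment variable
    name: my-bigquery-catalog       # name for the generated catalog
    output: /dwcc-output            # output directory inside the container
    project-id: my-gcp-project      # BigQuery project to catalog
    upload-location: ddw-catalogs   # dataset that receives the catalog file
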

Writing the data.world Collector command

The easiest way to create your Collector command is to:

  1. Copy the example command below

  2. Edit it for your organization and data source

  3. Open a terminal window in any Unix environment that uses a Bash shell and paste your command into it.

The example command includes the minimal parameters required to run the collector (described below); your instance may require more. A description of all the available parameters is at the end of this article. Edit the command by adding any other parameters you wish to use, and by replacing the values for all parameters with your own information as appropriate. Parameters required by the Collector are shown in bold.

Important

Do not forget to replace x.y in datadotworld/dwcc:x.y with the version of the Collector you want to use (e.g., datadotworld/dwcc:2.80).
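
As a starting point, the sketch below shows the general shape such a command takes for BigQuery. All names and values are illustrative placeholders; confirm each parameter against the Parameters section for your Collector version, and note that BigQuery authentication (for example, a service account credentials file) is also configured through parameters described there.

  # Illustrative sketch only; replace every value with your own information.
  docker run -it --rm \
    --mount type=bind,source=${HOME}/dwcc,target=/dwcc-output \
    --mount type=bind,source=${HOME}/dwcc,target=/app/log \
    datadotworld/dwcc:x.y catalog-bigquery \
    --agent=my-organization \
    --api-token=${DW_AUTH_TOKEN} \
    --name=my-bigquery-catalog \
    --output=/dwcc-output \
    --project-id=my-gcp-project \
    --upload=true \
    --upload-location=ddw-catalogs
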

Base parameters
Docker and the data.world Collector

Detailed information about the Docker portion of the command can be found here. When you run the command, Docker will attempt to find the image locally; if it doesn't find it, Docker will download the image from Docker Hub automatically.
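
If you prefer to fetch the image ahead of time, you can pull it explicitly first; the run will then use the local copy:

  # Pre-pull the Collector image so the run itself does not wait on a download.
  docker pull datadotworld/dwcc:2.80
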


Collector runtime and troubleshooting

The catalog collector may run in several seconds to many minutes depending on the size and complexity of the system being crawled. If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output. If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Both of those can be sent to support to investigate if the errors are not clear. A list of common issues and problems encountered when running the collectors is available here.
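
As a quick check, assuming you mounted ${HOME}/dwcc as the output directory (a placeholder path from the example command), you can look for the catalog file and any logs like this:

  ls ${HOME}/dwcc/*.dwec.ttl   # the generated catalog file
  ls ${HOME}/dwcc/*.log        # present only if something went wrong
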

Upload the .ttl file generated from running the Collector

When the data.world Collector runs successfully, it creates a .ttl file in the directory you specified as the dwcc-output directory. The automatically generated file name is databaseName.catalogName.dwec.ttl. You can rename the file or leave the default, and then upload it to your ddw-catalogs dataset (or wherever you store your catalogs).
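
Besides uploading through the data.world UI, the file can be pushed with the data.world API. This sketch assumes a hypothetical organization my-org, a dataset named ddw-catalogs, and a valid API token in DW_AUTH_TOKEN:

  # Upload the generated .ttl file to the ddw-catalogs dataset via the data.world API.
  curl -X POST \
    -H "Authorization: Bearer ${DW_AUTH_TOKEN}" \
    -F "file=@databaseName.catalogName.dwec.ttl" \
    "https://api.data.world/v0/uploads/my-org/ddw-catalogs/files"
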

Caution

If there is already a .ttl catalog file with the same name in your ddw-catalogs dataset, adding the new one will overwrite the existing one.

Automatic updates to your metadata catalog

Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:

  • Frequency of changes to the schema

  • Business criticality of up-to-date data

For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
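
For example, a weekly run under cron might look like the entry below, assuming the Collector command has been saved as a script at a hypothetical path /usr/local/bin/run-dwcc.sh:

  # Run the data.world Collector every Sunday at 02:00, appending output to a log.
  0 2 * * 0 /usr/local/bin/run-dwcc.sh >> /var/log/dwcc.log 2>&1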