Running the Amazon QuickSight collector on-premise

Important

This collector is available in Private Preview. If you would like access to this collector, please contact your Customer Success Director.

Note

The latest version of the Collector is 2.200. To view the release notes for this version and all previous versions, please go here.

Ways to run the data.world Collector

There are a few different ways to run the data.world Collector, any of which can be combined with an automation strategy to keep your catalog up to date:

  • Create a configuration file (config.yml) - This option stores all the information needed to catalog your data sources. It is an especially valuable option if you have multiple data sources to catalog, as you don't need to run multiple scripts or CLI commands separately. (A hypothetical sketch follows this list.)

  • Run the collector through the CLI - Repeat runs of the collector require you to re-enter the command for each run.
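For the configuration-file option, a rough sketch is shown below. It assumes the config.yml keys mirror the long-form CLI options used on this page, which is an assumption made here for illustration; consult the configuration file documentation for the exact schema and for how to point the collector at the file.

# Hypothetical config.yml sketch -- keys are assumed to mirror the CLI long options.
# Consult the configuration file documentation for the exact schema.
agent: collector-warehouse
name: quicksight-catalog
output: /dwcc-output
aws-region: us-east-1
aws-account-id: "XXXXXXXX"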

Note

This section walks you through the process of running the collector using the CLI.

Preparing and running the command

The easiest way to create your Collector command is to:

  1. Copy the following example command into a text editor.

  2. Set the required parameters in the command. The example command includes the minimal parameters required to run the collector.

  3. Open a terminal window in any Unix environment that uses a Bash shell, paste the command into it, and run it.

docker run -it --rm --mount type=bind,source=${HOME}/dwcc,target=/dwcc-output \
  --mount type=bind,source=${HOME}/dwcc,target=/app/log \
  --mount type=bind,source=${HOME}/.aws/credentials,target=/root/.aws/credentials \
  -e AWS_PROFILE=services-admin \
  datadotworld/dwcc:2.187 catalog-amazon-quicksight --aws-region=us-east-1 --aws-account-id XXXXXXXX \
  --agent=collector-warehouse --name=quicksight-catalog \
  --output=/dwcc-output
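The example above only writes the catalog to the mounted output directory. If you add the --upload option (described in the table below), the collector also needs a data.world API token, which it reads from the DW_AUTH_TOKEN environment variable by default. A minimal sketch of the same command with the token forwarded into the container:

export DW_AUTH_TOKEN="<your data.world API token>"
docker run -it --rm --mount type=bind,source=${HOME}/dwcc,target=/dwcc-output \
  --mount type=bind,source=${HOME}/dwcc,target=/app/log \
  --mount type=bind,source=${HOME}/.aws/credentials,target=/root/.aws/credentials \
  -e AWS_PROFILE=services-admin \
  -e DW_AUTH_TOKEN \
  datadotworld/dwcc:2.187 catalog-amazon-quicksight --aws-region=us-east-1 \
  --aws-account-id XXXXXXXX --agent=collector-warehouse --name=quicksight-catalog \
  --output=/dwcc-output --upload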

The following table describes the parameters for the command.

Table 1. Parameters for the collector command

Parameter: dwcc:<CollectorVersion>
Details: Replace <CollectorVersion> with the version of the collector you want to use (for example, datadotworld/dwcc:2.113).
Required? Yes

Parameter: -a=<agent>, --agent=<agent>, --account=<agent>
Details: The ID for the data.world account/organization into which you will load this catalog. For example, use --account="orgName" for the URL https://siteName.app.data.world/orgName. This is used to generate the namespace for any URIs generated.
Required? Yes

Parameter: --site=<site>
Details: The name of the data.world site into which you will load this catalog. For example, use --site="siteName" for the URL https://siteName.app.data.world/orgName. This is used to generate the namespace for any URIs generated. This parameter should not be used for multi-tenant or VPC instances.
Required? Yes (required for private instance installations)

Parameter: -U, --upload
Details: Whether to upload the generated catalog to the organization account's catalogs dataset or to another location specified with --upload-location. (This requires that --api-token is specified.)
Required? Yes

Parameter: -L, --no-log-upload
Details: Do not upload the log of the dwcc run to the organization account's catalogs dataset or to another location specified with --upload-location. (Ignored if --upload is not specified.)
Required? No

Parameter: -t, --api-token=<apiToken>
Details: The data.world API token to use for authentication. The default is to use an environment variable named DW_AUTH_TOKEN.
Required? No

Parameter: -o, --output=<outputDir>
Details: The output directory into which any catalog files should be written.
Required? Yes

Parameter: -n, --name=<catalogName>
Details: The name of the catalog. This is used to generate the ID for the catalog as well as the filename into which the catalog file will be written.
Required? Yes

Parameter: --upload-location=<uploadLocation>
Details: The dataset to which the catalog is to be uploaded, specified as a simple dataset name to upload to that dataset within the organization's account, or as [account/dataset] to upload to a dataset in some other account. (Ignored if --upload is not specified.)
Required? No

Parameter: -H, --api-host=<apiHost>
Details: The host for the data.world API. NOTE: This parameter is required for single-tenant installations. For example, api.site.data.world, where site is the name of the single-tenant install.
Required? Yes (required for single-tenant installations)

Parameter: source=${HOME}/.aws/credentials
Details: When you set up the CLI command, mount the local directory containing the AWS credentials profiles file to the /root/.aws/credentials directory on the Docker container by specifying the source path to your existing AWS credentials.
Required? Yes

Parameter: --aws-region=<awsRegion>
Details: The AWS region used to initialize the QuickSight client.
Required? Yes

Parameter: --aws-account-id=<awsAccountId>
Details: The AWS account ID subscribed for QuickSight.
Required? Yes

Parameter: --dry-run
Details: Run the collector in dry run mode to test the connection details provided. No metadata is harvested in dry run mode.
Required? No
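For reference, AWS_PROFILE=services-admin in the example command selects a named profile from the mounted credentials file. A minimal sketch of what ${HOME}/.aws/credentials might contain (the profile name matches the example; the key values are placeholders):

[services-admin]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx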



Common troubleshooting tasks

Collector runtime and troubleshooting

The catalog collector may take anywhere from several seconds to many minutes to run, depending on the size and complexity of the system being crawled.

  • If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output.

  • If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Either can be sent to support for investigation if the errors are not clear. (A quick check follows this list.)
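A quick way to check the outcome of a run, assuming the ${HOME}/dwcc output mount from the example command:

ls -l ${HOME}/dwcc/*.dwec.ttl   # present after a successful run
ls -l ${HOME}/dwcc/*.log        # logs to send to support if the errors are not clear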

A list of common issues and problems encountered when running the collectors is available here.

Automating updates to your metadata catalog

Maintaining an up-to-date metadata catalog is crucial and can be achieved by employing Azure Pipelines, CircleCI, or any automation tool of your preference to execute the catalog collector regularly.

There are two primary strategies for setting up the collector run times:

  • Scheduled: You can configure the collector according to the anticipated frequency of metadata changes in your data source and the business need to access updated metadata, for instance daily or weekly. It's necessary to account for the completion time of the collector run (which depends on the size of the source) and the time required to load the collector's output into your catalog. We recommend scheduling the collector run during off-peak times for optimal performance. (A cron sketch follows this list.)

  • Event-triggered: If you have set up automations that refresh the data in a source technology, you can set up the collector to execute whenever the upstream jobs complete successfully. For example, if you're using Airflow, GitHub Actions, dbt, etc., you can configure the collector to run automatically and keep your catalog updated following modifications to your data sources.
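As a minimal sketch of the scheduled approach, the crontab entry below runs the collector nightly at 02:00. It assumes the example docker command from this page has been saved to a hypothetical script, /opt/dwcc/run-collector.sh, with the -it flags removed (cron provides no interactive terminal) and DW_AUTH_TOKEN set inside the script:

# Hypothetical crontab entry: run the collector daily at 02:00,
# appending output to a log file for later review.
0 2 * * * /opt/dwcc/run-collector.sh >> /var/log/dwcc-cron.log 2>&1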