Grafana and the data.world Collector

Important

The Grafana collector is available as a private beta release for select customers. Please contact data.world if you are interested in using this collector.

Introduction

Note

The latest version of the Collector is 2.119. To view the release notes for this version and all previous versions, please go here.

The data.world Collector harvests metadata from your source system. Please read over the data.world Collector FAQ to familiarize yourself with the Collector.

Prerequisites

  • You must have a ddw-catalogs (or other) dataset set up to hold your catalog files when you are done running the collector.

  • The machine running the catalog collector should have connectivity to the internet or access to the source instance. For Linux- or Unix-based machines, it is recommended to have a minimum of 2 GB of memory and a 2 GHz processor. For Windows-based machines, it is recommended to have a minimum of 4 GB of memory and a 2 GHz processor.

  • Docker must be installed. For more information see https://docs.docker.com/get-docker/.

  • The user defined to run the data.world Collector must have read access to all resources being cataloged.

  • The computer running the data.world Collector needs a Java Runtime Environment. OpenJDK 17 is supported and available here.
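
You can quickly confirm that the Docker and Java prerequisites are in place before moving on. For example, from a terminal on the machine that will run the collector:

docker --version
java -version

Both commands should print a version string; if either command is not found, install the missing prerequisite before continuing.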

Grafana versions supported

  • The collector supports Grafana version 9.0.0.

Authentication supported for cataloging Grafana

The collector authenticates to Grafana using an API key. For the collector to run successfully, the API key must have the Viewer role.

Complete the following steps to generate the API key that you will use for running the collector. See the Grafana documentation for full details about this task.

  1. Navigate to https://<your_organization_name>.grafana.net/org/apikeys.

  2. Click Add API Key to add a new API key.

    1. Provide the key name.

    2. Set the Role to Viewer.

    3. In the Time to live field, set the expiry time for the API key.

Note down the API key that is generated. You will use it when setting up the collector command for Grafana.
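
Optionally, you can confirm that the new API key authenticates successfully before running the collector. The following curl check is a minimal sketch: it calls the standard Grafana search endpoint, which is readable with the Viewer role, and assumes you substitute your own organization name and key.

curl -s -H "Authorization: Bearer <your_api_key>" \
  "https://<your_organization_name>.grafana.net/api/search?limit=1"

A JSON response (even an empty list) indicates the key is valid; an authentication error means the key or its role needs to be checked.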

What is cataloged

The collector catalogs the following information from Grafana.

Table 1. Information cataloged from Grafana

  • Dashboards: Owner, created by date, style, type, version, URL, slug (the human-friendly portion of the dashboard URL).

  • Dashboard Panels: Title, description, type, associated dashboard, source if it exists.

  • Dashboard Annotations: Title, query text, source if it exists.

  • Dashboard Variables: Title, description, query text, source if it exists.

  • Data Source: Type, title, source JSON.

  • Playlists: Title, dashboards that are part of the playlist.

  • Folders: Title, dashboards that are part of the folder.



Lineage for Grafana

The following lineage information is collected by the Grafana collector.

Table 2. Lineage collected for Grafana objects

  • Dashboards: The collector identifies the associated annotations, variables, and any upstream data sources.

  • Data source: The collector identifies downstream annotations, dashboards, and panels.

  • Annotation: The collector identifies the associated upstream data source and the dashboards containing the annotation.

  • Variable: The collector identifies the dashboard containing the variable.

  • Panel: The collector identifies the associated dashboard annotations and variables, and upstream data sources.



Ways to run the data.world Collector

There are a few different ways to run the data.world Collector, any of which can be combined with an automation strategy to keep your catalog up to date:

  • Create a configuration file (config.yml) - This option stores all the information needed to catalog your data sources. It is an especially valuable option if you have multiple data sources to catalog as you don't need to run multiple scripts or CLI commands separately.

  • Run the collector through a CLI - Clear-cut and efficient, but regular, repeating runs become laborious and time-consuming because the full command must be re-entered for each run.

Details about the command

The easiest way to create your Collector command is to:

  1. Copy the following example command.

  2. Edit it for your organization and data source.

  3. Open a terminal window in any Unix environment that uses a Bash shell and paste your command into it.

The example command includes the minimal parameters required to run the collector; your instance may require more. Edit the command by adding any other parameters you wish to use and by replacing the placeholder values with your own information as appropriate.

docker run -it --rm --mount type=bind,source=/tmp,target=/dwcc-output \
--mount type=bind,source=/tmp,target=/app/log datadotworld/dwcc:<CollectorVersion> \
catalog-grafana -a <account> \
--grafana-api-base-url=<baseUrl> --grafana-api-token=<token> \
-n <catalogName> -o "/dwcc-output"

The following table describes the parameters for the command. Detailed information about the Docker portion of the command can be found here.

Table 3. Parameters for the collector command

  • dwcc:<CollectorVersion> (Required)

    Replace <CollectorVersion> with the version of the collector you want to use (for example, datadotworld/dwcc:2.113).

  • -A, --all-schemas (Required)

    Catalog all schemas to which the user has access (exclusive of --schema).

  • -a=<agent>, --agent=<agent>, --account=<agent> (Required if the --base parameter is not provided)

    The ID for the data.world account into which you will load this catalog. The ID is the organization name as it appears in your data.world organization. It is used to generate the namespace for any URIs generated. You must specify either this parameter or the --base parameter.

  • -b=<base>, --base=<base> (Required if the --agent parameter is not provided)

    The base URI to use as the namespace for any URIs generated. You must specify either this parameter or the --agent parameter.

  • --grafana-api-base-url=<baseUrl> (Required)

    The base URL of the Grafana API, in the format https://organizationName.grafana.net/api.

  • --grafana-api-token=<token> (Required)

    The token used to authenticate to the Grafana API. This is the API key you generated earlier.

  • -n=<catalogName>, --name=<catalogName> (Required)

    The name of the catalog. It is used to generate the ID for the catalog as well as the filename into which the catalog file is written.

  • -o=<outputDir>, --output=<outputDir> (Required)

    The output directory into which any catalog files should be written. In our example we use /dwcc-output because the collector runs in a Docker container and that is the target we specified for the Docker mount point. You can change this value to anything you would like as long as it matches what you use in the mount point:

    --mount type=bind,source=/tmp,target=/dwcc-output ... -o /dwcc-output

    In this example, the output is written to the /tmp directory on the local machine, as indicated by the mount point directive. The log file, in addition to any catalog files, is written to the directory specified in the mount point directive.

  • --dry-run (Optional)

    Run the collector in dry run mode to test the connection details provided. No metadata is harvested in dry run mode.

  • -L, --no-log-upload (Optional)

    Do not upload the log of the dwcc run to the organization account's catalogs dataset or to another location specified with --upload-location (ignored if --upload is not specified).

  • --site=<site> (Optional)

    The slug for the data.world site into which you will load this catalog. It is used to generate the namespace for any URIs generated.

  • -H=<host>, --api-host=<host> (Optional)

    The host for the data.world API.

  • -t=<apiToken>, --api-token=<apiToken> (Optional)

    The data.world API token to use for authentication. The default is to use an environment variable named DW_AUTH_TOKEN. To automatically upload the catalog to data.world, you need a read/write API token for data.world.

  • -U, --upload (Optional)

    Upload the generated catalog to the organization account's catalogs dataset or to another location specified with --upload-location. This requires that --api-token is specified.

  • --upload-location=<uploadLocation> (Optional)

    The dataset to which the catalog is uploaded, specified either as a simple dataset name (to upload to that dataset within the organization's account) or as account/dataset (to upload to a dataset in some other account). This parameter is ignored if --upload is not specified.

  • -z=<postProcessSparql>, --post-process-sparql=<postProcessSparql> (Optional)

    A file containing a SPARQL query to execute to transform the catalog graph emitted by the collector.
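
For example, a run that first validates the connection details and then harvests and uploads the catalog might look like the following. This is a sketch only: it assumes /tmp as the local mount source, that your read/write data.world API token is exported in the DW_AUTH_TOKEN environment variable (which the collector reads by default, as noted for --api-token), and that you are uploading to your ddw-catalogs dataset.

# Dry run: test the Grafana connection details without harvesting metadata
docker run -it --rm --mount type=bind,source=/tmp,target=/dwcc-output \
--mount type=bind,source=/tmp,target=/app/log datadotworld/dwcc:<CollectorVersion> \
catalog-grafana -a <account> \
--grafana-api-base-url=<baseUrl> --grafana-api-token=<token> \
-n <catalogName> -o "/dwcc-output" --dry-run

# Full run: harvest the catalog and upload it to the ddw-catalogs dataset
docker run -it --rm -e DW_AUTH_TOKEN=$DW_AUTH_TOKEN \
--mount type=bind,source=/tmp,target=/dwcc-output \
--mount type=bind,source=/tmp,target=/app/log datadotworld/dwcc:<CollectorVersion> \
catalog-grafana -a <account> \
--grafana-api-base-url=<baseUrl> --grafana-api-token=<token> \
-n <catalogName> -o "/dwcc-output" --upload --upload-location=ddw-catalogs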



Collector runtime and troubleshooting

The catalog collector may run for several seconds to many minutes depending on the size and complexity of the system being crawled. If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output. If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Both of those can be sent to support for investigation if the errors are not clear. A list of common issues and problems encountered when running the collectors is available here.
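
For example, if you used /tmp as the mount source as in the example command above, you can confirm that the run produced a catalog file and locate the most recent log with:

ls -l /tmp/*.dwec.ttl
ls -lt /tmp/*.log | head -1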

Upload the .ttl file generated from running the Collector

When the data.world Collector runs successfully, it creates a .ttl file in the directory you specified as the dwcc-output directory. The automatically-generated file name is databaseName.catalogName.dwec.ttl. You can rename the file or leave the default, and then upload it to your ddw-catalogs dataset (or wherever you store your catalogs).
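
If you prefer to upload the file from a script rather than through the data.world UI, one option is the data.world file upload API. The following curl command is a sketch, not the only way to upload: it assumes your organization ID is <account>, the target dataset is ddw-catalogs, and a read/write API token is available in the DW_AUTH_TOKEN environment variable; consult the data.world API documentation for details.

# Upload the generated catalog file to the ddw-catalogs dataset (illustrative)
curl -X PUT \
  -H "Authorization: Bearer $DW_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @databaseName.catalogName.dwec.ttl \
  "https://api.data.world/v0/uploads/<account>/ddw-catalogs/files/databaseName.catalogName.dwec.ttl"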

Caution

If a .ttl catalog file with the same name already exists in your ddw-catalogs dataset, adding the new file will overwrite the existing one.

Automatic updates to your metadata catalog

Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:

  • Frequency of changes to the schema

  • Business criticality of up-to-date data

For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
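
For example, a nightly schedule with cron might look like the following. The wrapper script name and schedule are illustrative assumptions; the script would contain the docker run command shown earlier, typically with --upload so the refreshed catalog is loaded into data.world automatically.

# Run the Grafana collector every day at 02:00 and append output to a log file
0 2 * * * /usr/local/bin/run-grafana-collector.sh >> /var/log/dwcc-grafana.log 2>&1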