
Azure Data Lake Storage Gen2 and the Collector


The latest version of the Collector is 2.137. To view the release notes for this version and all previous versions, please go here.

About the collector

Use this collector to harvest metadata on storage accounts, containers, and files directly from your Azure Data Lake Storage Gen2 instance.

Authentication supported

Authenticate to Azure Data Lake Storage Gen 2 using a service principal.

What is cataloged

The collector catalogs the following information from Azure Data Lake Storage Gen 2.

Table 1. Information collected

Storage Account

Name, Description, Last Modified, Resource Group name, Region Name, Creation Time, Subscription ID, Account Status, Account Kind, Access Control, Access Tier, Provisioning State, Tags

Container

Name, Description, Server, Last Modified, Metadata, Subscription ID, Entity Tag, Public Access, Access Control

Blob

Name, Description, File URL, File Path, Blob Type, Content Length, Creation Time, Last Modified, Metadata, Subscription ID, Entity Tag, Access Control

Relationships between objects

By default, the catalog includes catalog pages for the resource types below, and each catalog page has relationships to other related resource types. Note that the catalog presentation and relationships are fully configurable; the lists below describe the default configuration.

Table 2. Relationships by resource page

Storage Account

  • Relationship to Containers contained within Storage Account

Container

  • Relationship to Blobs contained within Container

  • Relationship to Storage Account containing Container

Blob

  • Relationship to Container containing Blob

Important things to note about maximum resource limits

  • By default, the collector harvests metadata from Azure Data Lake Storage Gen 2 for up to 10,000 objects in each Storage Account. If your Azure Data Lake Storage Gen 2 has more than 10,000 objects in a given Storage Account, set the --max-resource-limit parameter to the value you need; the maximum supported value is 10 million. If the contents of a Storage Account exceed the configured limit, the Storage Account is skipped and a warning message is logged for that Storage Account.
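For illustration, raising the limit is just a matter of passing the parameter when invoking the collector. The sketch below is not a complete command (the full example appears later in this article), and every <...> token is a placeholder to replace with your own values:

```shell
# Sketch: raise the per-Storage-Account object limit to 1 million.
# All <...> placeholders are assumptions; substitute your own values.
docker run -it --rm datadotworld/dwcc:<CollectorVersion> \
  catalog-adls-gen2 -a <account> \
  --storage-account-name=<storageAccountName> \
  --max-resource-limit=1000000 \
  -n <catalogName> -o "/dwcc-output"
```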

Setting up access for cataloging Azure Data Lake Storage Gen 2 resources

Authentication types supported

The Azure Data Lake Storage Gen 2 collector authenticates using an Azure Service Principal.

STEP 1: Registering your application

To register a new application:

  1. Go to the Azure Portal.

  2. Select Azure Active Directory.

  3. Click the App Registrations option in the left sidebar.

  4. Click New Registration and enter the following information:

    1. Application Name: DataDotWorldADLSGen2Application.

    2. Supported account types: Accounts in this organizational directory only.

  5. Click Register to complete the registration.

STEP 2: Creating Client secret and getting the Client ID

To create a Client Secret:

  1. On the new application page you created, select Certificates and Secrets.

  2. Under the Client secrets tab, click the New client secret button.

  3. Add a Description.

  4. Set the expiration for the client secret.

  5. Click Add, and copy the secret value.

To get the Client ID from the Azure portal:

  1. Click on the Overview tab in the left sidebar of the application home page.

  2. Copy the Application (Client) ID from the Essentials section.
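If you prefer scripting, Steps 1 and 2 can also be combined with the Azure CLI. This is a sketch rather than part of the official setup flow; it uses the application name suggested above, and the exact output fields may vary by CLI version:

```shell
# Create an app registration, service principal, and client secret in one step.
# In the JSON output, appId is the Client ID and password is the client secret.
az ad sp create-for-rbac --name DataDotWorldADLSGen2Application
```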

STEP 3: Obtaining Subscription ID and Tenant ID

  1. From the page of the new application you created in Step 1, copy and save the Directory (tenant) ID. You will use this for the --tenant-id parameter.

  2. Navigate to a storage account that you would like to harvest from. From the Overview page, copy the Subscription ID. You will use this for the --subscription-id parameter.
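Alternatively, assuming the Azure CLI is logged in to the correct subscription, both IDs can be read in one command. This is a sketch; the --query projection is just one way to pick out the two fields:

```shell
# Print the subscription ID and tenant ID of the current Azure CLI context.
az account show --query "{subscriptionId:id, tenantId:tenantId}" --output json
```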


STEP 4: Grant Service Principal access to each Storage Account


Perform these tasks for each Storage Account you plan to harvest.

  1. Go to Storage Account. Click on Access Control (IAM).

  2. Click Add > Add role assignment.

  3. In the Role tab, select Job function role as Storage Blob Data Reader.

  4. Click Members tab. Click Select Members.

  5. Find and click the Service Principal you created earlier. Click Select.

  6. Click Review + assign.
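The same role assignment can be scripted with the Azure CLI, which may be more convenient when you have many Storage Accounts. This sketch assumes placeholder values for the client ID, subscription, resource group, and storage account name:

```shell
# Grant the service principal read access to one storage account.
# Replace all <...> placeholders with your own values.
az role assignment create \
  --assignee <clientId> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>"
```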

Setting up prerequisites for running the collector

Make sure that the machine from which you are running the collector meets the following hardware and software requirements.

Table 1. Hardware and software requirements

Memory

8 GB

CPU

2 GHz processor

Docker

Click here to get Docker.

Java Runtime Environment

OpenJDK 17 is supported and available here.


You must have a ddw-catalogs (or other) dataset set up to hold your catalog files when you are done running the collector.

Ways to run the Collector

There are a few different ways to run the Collector, any of which can be combined with an automation strategy to keep your catalog up to date:

  • Create a configuration file (config.yml) - This option stores all the information needed to catalog your data sources. It is an especially valuable option if you have multiple data sources to catalog, as you don't need to run multiple scripts or CLI commands separately.

  • Run the collector through a CLI - Repeat runs of the collector require you to re-enter the command for each run.


This section walks you through the process of running the collector using CLI.

Preparing and running the command

The easiest way to create your Collector command is to:

  1. Copy the following example command into a text editor.

  2. Set the required parameters in the command. The example command includes the minimal parameters required to run the collector; your instance may require more.

  3. Open a terminal window in any Unix environment that uses a Bash shell, paste the command into it, and run it.

    Edit the command by adding any other parameters you wish to use, and by replacing the values for all your parameters with your information as appropriate.

    docker run -it --rm --mount type=bind,source=/tmp,target=/dwcc-output \
    --mount type=bind,source=/tmp,target=/app/log \
    datadotworld/dwcc:<CollectorVersion> \
    catalog-adls-gen2 -a <account> \
    --client-id=<clientId> --client-secret=<clientSecret> \
    --subscription-id=<subscriptionId> --tenant-id=<tenantId> \
    --storage-account-name=<storageAccountName> \
    --max-resource-limit=<maxResourceLimit> \
    -n <catalogName> -o "/dwcc-output"

The following table describes the parameters for the command. Detailed information about the Docker portion of the command can be found here.

Table 1.




dwcc:<CollectorVersion>

Replace <CollectorVersion> with the version of the collector you want to use (for example, datadotworld/dwcc:2.113).


-A, --all-schemas

Catalog all schemas to which the user has access (exclusive of --schema).


-a= <agent>

--agent= <agent>

--account= <agent>

The ID for the account into which you will load this catalog. The ID is the organization name as it appears in your organization. This is used to generate the namespace for any URIs generated.


--client-id= <clientId>

The Active Directory client ID used to initialize the azure client.


--client-secret= <clientSecret>

The Active Directory client secret used to initialize the azure client.


--subscription-id= <subscriptionId>

The subscription ID for the Azure account. It is required for fetching the list of storage accounts.


--tenant-id= <tenantId>

The Active Directory tenant ID used to initialize the Azure client. To find the tenant ID, navigate to the Azure Active Directory resource; the Tenant ID is listed on the Overview page.


--storage-account-name= <storageAccountName>

The Azure storage account name used to initialize the Azure client. It can be declared multiple times to harvest multiple storage accounts.


--max-resource-limit= <maxResourceLimit>

The maximum number of resources the collector will harvest per storage account. The maximum limit is 10 million. If not specified, the collector harvests a maximum of 10,000 resources by default.


-n= <catalogName>

--name= <catalogName>

The name of the catalog - this will be used to generate the ID for the catalog as well as the filename into which the catalog file will be written.


-o= <outputDir>

--output= <outputDir>

The output directory into which any catalog files should be written.

In our example we use /dwcc-output because the collector is running in a Docker container, and that is what we specified in the script for the Docker mount point.

You can change this value to anything you would like as long as it matches what you use in the mount point:

--mount type=bind,source=/tmp,target=/dwcc-output ... -o /dwcc-output

In this example, the output will be written to the /tmp directory on the local machine, as indicated by the mount point directive. The log file, in addition to any catalog files, will be written to the directory specified in the mount point directive.




Do not upload the log of the dwcc run to the organization account's catalogs dataset or to another location specified with --upload-location. (Ignored if --upload is not specified.)


--site= <site>

The slug for the site into which you will load this catalog. This is used to generate the namespace for any URIs generated.


-H= Host

--api-host= Host

The host for the API.


-t= <apiToken>

--api-token= <apiToken>

The API token to use for authentication. The default is to use an environment variable named DW_AUTH_TOKEN.




--upload

Whether to upload the generated catalog to the organization account's catalogs dataset or to another location specified with --upload-location. (This requires that --api-token is specified.)


--upload-location= <uploadLocation>

The dataset to which the catalog is to be uploaded, specified as a simple dataset name to upload to that dataset within the organization's account, or [account/dataset] to upload to a dataset in some other account. This parameter is ignored if --upload is not specified.


Common troubleshooting tasks

Collector runtime and troubleshooting

The catalog collector may run for several seconds to many minutes depending on the size and complexity of the system being crawled. If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output. If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Both of those can be sent to support to investigate if the errors are not clear. A list of common issues and problems encountered when running the collectors is available here.
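A quick post-run check can confirm whether a run produced a catalog file. This is a sketch assuming the output directory from the example command (/tmp on the host); adjust the directory to match your own mount point:

```shell
# Report whether a collector run left a *.dwec.ttl catalog file in a directory.
check_catalog_output() {
  out_dir="$1"
  if ls "$out_dir"/*.dwec.ttl >/dev/null 2>&1; then
    echo "catalog file found in $out_dir"
  else
    echo "no catalog file in $out_dir - check for a *.log file or a stack trace"
  fi
}

# Check the directory the example command mounts as the output target.
check_catalog_output /tmp
```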

Issue 1: Resources from certain storage accounts are not getting cataloged

  • Cause: This generally happens when the storage account has more than 10,000 objects, or more objects than the value set in the --max-resource-limit parameter.

  • Solution: Check if the --max-resource-limit parameter is set and if so, what value is configured for the parameter.

Issue 2: Collector did not harvest metadata from a specific storage account

  • Cause: The collector did not have permissions to read from a storage account.

  • Solution: Ensure that the Service Principal has Storage Blob Data Reader role for each of the Storage Accounts you want to harvest.

Upload the .ttl file generated from running the Collector

When the Collector runs successfully, it creates a .ttl file in the directory you specified as the dwcc-output directory. The automatically-generated file name is databaseName.catalogName.dwec.ttl. You can rename the file or leave the default, and then upload it to your ddw-catalogs dataset (or wherever you store your catalogs).


If there is already a .ttl catalog file with the same name in your ddw-catalogs dataset, when you add the new one it will overwrite the existing one.

Automating updates to your metadata catalog

Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:

  • Frequency of changes to the schema

  • Business criticality of up-to-date data

For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your representative for more tailored recommendations on how best to optimize your catalog collector processes.
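As one illustration, a cron entry could run the collector nightly. This is a sketch, assuming a hypothetical wrapper script at /opt/dwcc/run-collector.sh that contains your docker run command; the schedule and log path are assumptions to adapt:

```shell
# crontab entry: run the collector every night at 2:00 AM,
# appending stdout and stderr to a log file.
0 2 * * * /opt/dwcc/run-collector.sh >> /var/log/dwcc-collector.log 2>&1
```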