Azure Data Lake Storage Gen2 and the data.world Collector
Note
The latest version of the Collector is 2.137. To view the release notes for this version and all previous versions, please go here.
About the collector
Use this collector to harvest metadata directly from your Azure Data Lake Storage Gen2 instance, including storage accounts, containers, and files.
Authentication supported
Authenticate to Azure Data Lake Storage Gen 2 using a service principal.
What is cataloged
The collector catalogs the following information from Azure Data Lake Storage Gen 2.
Object | Information collected |
---|---|
Storage Account | Name, Description, Last Modified, Resource Group name, Region Name, Creation Time, Subscription ID, Account Status, Account Kind, Access Control, Access Tier, Provisioning State, Tags |
Container | Name, Description, Server, Last Modified, Metadata, Subscription ID, Entity Tag, Public Access, Access Control |
Blob | Name, Description, File URL, File Path, Blob Type, Content Length, Creation Time, Last Modified, Metadata, Subscription ID, Entity Tag, Access Control |
Relationships between objects
By default, the data.world catalog includes catalog pages for the resource types below. Each catalog page has a relationship to other, related resource types. Note that the catalog presentation and relationships are fully configurable; the following table lists the default configuration.
Resource page | Relationship |
---|---|
Storage Account | |
Container | |
Blob | |
Important things to note about maximum resource limits
By default, the collector harvests metadata for up to 10,000 objects in each Storage Account. If a Storage Account in your Azure Data Lake Storage Gen 2 instance contains more than 10,000 objects, set the --max-resource-limit parameter to the value you need; the maximum allowed value is 10 million. If the contents of a Storage Account exceed the configured limit, that Storage Account is skipped and a warning message is logged.
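For example, to allow up to 50,000 objects in a Storage Account, add the parameter to your collector command (the value 50000 here is purely illustrative; the full command is shown later in this article):

--max-resource-limit=50000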
Setting up access for cataloging Azure Data Lake Storage Gen 2 resources
Authentication types supported
The Azure Data Lake Storage Gen 2 collector authenticates using an Azure service principal.
STEP 1: Registering your application
To register a new application:
Go to the Azure Portal.
Select Azure Active Directory.
Click the App Registrations option in the left sidebar.
Click New Registration and enter the following information:
Application Name: DataDotWorldADLSGen2Application.
Supported account types: Accounts in this organizational directory only.
Click Register to complete the registration.
STEP 2: Creating Client secret and getting the Client ID
To create a Client Secret:
On the new application page you created, select Certificates and Secrets.
Under the Client secrets tab, click the New client secret button.
Add a Description.
Set the expiration for the client secret.
Click Add, and copy the secret value.
To get the Client ID from the Azure portal:
Click on the Overview tab in the left sidebar of the application home page.
Copy the Application (Client) ID from the Essentials section.
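If you prefer to script these steps, the Azure CLI can perform an equivalent registration and secret creation. The commands below are a sketch only; <appId> is a placeholder for the Application (client) ID returned by the first command, and output field names can vary between CLI versions.

# Register the application; the appId in the output is the client ID
az ad app create --display-name DataDotWorldADLSGen2Application

# Create a service principal for the registered application
az ad sp create --id <appId>

# Add a client secret; the password in the output is the secret value to save
az ad app credential reset --id <appId> --years 1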
STEP 3: Obtaining Subscription ID and Tenant ID
From the page of the application you created in Step 1, copy and save the Directory (tenant) ID. You will use this for the --tenant-id parameter.
Navigate to a storage account that you would like to harvest from. From the Overview page, copy the Subscription ID. You will use this for the --subscription-id parameter.
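Alternatively, both IDs can be read with the Azure CLI for the currently selected subscription (a sketch; it assumes you are signed in with az login and that the subscription owning the storage accounts is active):

# Show the subscription ID and tenant ID for the active subscription
az account show --query "{subscriptionId:id, tenantId:tenantId}" --output table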
STEP 4: Grant Service Principal access to each Storage Account
Important
Perform these tasks for each Storage Account you plan to harvest.
Go to the Storage Account and click Access Control (IAM).
Click Add > Add role assignment.
In the Role tab, select the Storage Blob Data Reader job function role.
Click the Members tab, then click Select Members.
Find and click the Service Principal you created earlier. Click Select.
Click Review + assign.
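The same role assignment can also be scripted with the Azure CLI. The command below is a sketch; <clientId>, <subscriptionId>, <resourceGroup>, and <storageAccountName> are placeholders for your own values.

# Grant the service principal read access to the storage account's blob data
az role assignment create \
  --assignee <clientId> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>"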
Setting up prerequisites for running the collector
Make sure that the machine from which you are running the collector meets the following hardware and software requirements.
Item | Requirement |
---|---|
Hardware | |
RAM | 8 GB |
CPU | 2 GHz processor |
Software | |
Docker | Click here to get Docker. |
Java Runtime Environment | OpenJDK 17 is supported and available here. |
data.world specific objects | |
Dataset | You must have a ddw-catalogs (or other) dataset set up to hold your catalog files when you are done running the collector. |
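As a quick sanity check before running the collector, you can confirm that Docker and the Java Runtime Environment are installed and available on your PATH:

docker --version
java -version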
Ways to run the data.world Collector
There are a few different ways to run the data.world Collector, any of which can be combined with an automation strategy to keep your catalog up to date:
Create a configuration file (config.yml) - This option stores all the information needed to catalog your data sources. It is an especially valuable option if you have multiple data sources to catalog as you don't need to run multiple scripts or CLI commands separately.
Run the collector through a CLI - Repeat runs of the collector require you to re-enter the command for each run.
Note
This section walks you through the process of running the collector using the CLI.
Preparing and running the command
The easiest way to create your Collector command is to:
Copy the following example command in a text editor.
Set the required parameters in the command.
Open a terminal window in any Unix environment that uses a Bash shell, paste the command into it, and run it.
The example command includes the minimal parameters required to run the collector; your instance may require more. Edit the command by adding any other parameters you wish to use and by replacing the parameter values with your own information as appropriate.
docker run -it --rm --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/tmp,target=/app/log \
  datadotworld/dwcc:<CollectorVersion> \
  catalog-adls-gen2 -a <account> \
  --client-id=<clientId> --client-secret=<clientSecret> \
  --subscription-id=<subscriptionId> --tenant-id=<tenantId> \
  --storage-account-name=<storageAccountName> \
  --max-resource-limit=<maxResourceLimit> \
  -n <catalogName> -o "/dwcc-output"
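For reference, here is what the command might look like with hypothetical values filled in (the account name, credentials, IDs, and storage account below are placeholders, not real values):

docker run -it --rm --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/tmp,target=/app/log \
  datadotworld/dwcc:2.137 \
  catalog-adls-gen2 -a my-org \
  --client-id=11111111-2222-3333-4444-555555555555 \
  --client-secret=mySecretValue \
  --subscription-id=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  --tenant-id=ffffffff-0000-1111-2222-333333333333 \
  --storage-account-name=mystorageaccount \
  --max-resource-limit=50000 \
  -n adls-gen2-catalog -o "/dwcc-output"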
The following table describes the parameters for the command. Detailed information about the Docker portion of the command can be found here.
Parameter | Details | Required? |
---|---|---|
dwcc: <CollectorVersion> | Replace <CollectorVersion> with the version of the collector you want to use (for example, datadotworld/dwcc:2.137). | Yes |
-a = <agent> --agent= <agent> --account= <agent> | The ID for the data.world account into which you will load this catalog. The ID is the organization name as it appears in your organization. This is used to generate the namespace for any URIs generated. | Yes |
--client-id= <clientId> | The Active Directory client ID used to initialize the azure client. | Yes |
--client-secret= <clientSecret> | The Active Directory client secret used to initialize the azure client. | Yes |
--subscription-id= <subscriptionId> | The subscription ID for the Azure account. It is required for fetching the list of storage accounts. | Yes |
--tenant-id= <tenantId> | The Active Directory tenant ID used to initialize the Azure client. To find the tenant ID, navigate to the Azure Active Directory resource; the Tenant ID is listed on the Overview page. | Yes |
--storage-account-name= <storageAccountName> | The Azure storage account name used to initialize the Azure client. It can be declared multiple times. | No |
--max-resource-limit= <maxResourceLimit> | The maximum number of resources the collector will harvest; the limit can be set as high as 10 million. If not specified, the collector harvests a maximum of 10,000 resources by default. | No |
-n= <catalogName> --name= <catalogName> | The name of the catalog - this will be used to generate the ID for the catalog as well as the filename into which the catalog file will be written. | Yes |
-o= <outputDir> --output= <outputDir> | The output directory into which any catalog files should be written. In our example, we use /dwcc-output because the collector runs in a Docker container and that is the mount point specified in the command. You can change this value to anything you would like as long as it matches what you use in the mount point: --mount type=bind,source=/tmp,target=/dwcc-output ... -o /dwcc-output. In this example, the output will be written to the /tmp directory on the local machine, as indicated by the mount point directive. The log file, in addition to any catalog files, will be written to the directory specified in the mount point directive. | Yes |
-L --no-log-upload | Do not upload the log of the dwcc run to the organization account's catalogs dataset or to another location specified with --upload-location (ignored if --upload is not specified). | No |
--site= <site> | The slug for the data.world site into which you will load this catalog; this is used to generate the namespace for any URIs generated. | No |
-H= Host --api-host= Host | The host for the data.world API. | No |
-t= <apiToken> --api-token= <apiToken> | The data.world API token to use for authentication. The default is to use an environment variable named DW_AUTH_TOKEN. | No |
-U --upload | Whether to upload the generated catalog to the organization account's catalogs dataset or to another location specified with --upload-location (this requires that --api-token is specified). | No |
--upload-location= <uploadLocation> | The dataset to which the catalog is to be uploaded, specified as a simple dataset name to upload to that dataset within the organization's account, or [account/dataset] to upload to a dataset in some other account. This parameter is ignored if --upload is not specified. | No |
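To have the collector upload the generated catalog automatically, you might combine the upload parameters with the base command as sketched below. This assumes your data.world API token is exported as DW_AUTH_TOKEN on the host (the collector reads this variable by default) and is passed into the container with Docker's -e flag; ddw-catalogs is used as the upload location per the earlier prerequisites.

docker run -it --rm -e DW_AUTH_TOKEN \
  --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/tmp,target=/app/log \
  datadotworld/dwcc:<CollectorVersion> \
  catalog-adls-gen2 -a <account> \
  --client-id=<clientId> --client-secret=<clientSecret> \
  --subscription-id=<subscriptionId> --tenant-id=<tenantId> \
  -n <catalogName> -o "/dwcc-output" \
  --upload --upload-location=ddw-catalogs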
Common troubleshooting tasks
Collector runtime and troubleshooting
The catalog collector may run in several seconds to many minutes depending on the size and complexity of the system being crawled. If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should appear in the directory you specified for the output. If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Both of those can be sent to support for investigation if the errors are not clear. A list of common issues and problems encountered when running the collectors is available here.
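Following the earlier example, where the output directory is mounted from /tmp on the host, a successful run can be confirmed with a quick listing (the path reflects that example and may differ in your setup):

ls /tmp/*.dwec.ttl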
Issue 1: Resources from certain storage accounts are not getting cataloged
Cause: This generally happens when a Storage Account contains more resources than the default limit of 10,000 or than the value set in the --max-resource-limit parameter.
Solution: Check whether the --max-resource-limit parameter is set and, if so, what value is configured. Increase the value (up to a maximum of 10 million) so that it covers the number of resources in the Storage Account.
Issue 2: Collector did not harvest metadata from a specific storage account
Cause: The collector did not have permissions to read from a storage account.
Solution: Ensure that the Service Principal has the Storage Blob Data Reader role for each of the Storage Accounts you want to harvest.
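One way to confirm the assignment is with the Azure CLI (a sketch; <clientId> and the scope values below are placeholders for your service principal and storage account):

# List the roles assigned to the service principal on the storage account
az role assignment list \
  --assignee <clientId> \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>" \
  --query "[].roleDefinitionName" --output tsv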
Upload the .ttl file generated from running the Collector
When the data.world Collector runs successfully, it creates a .ttl file in the directory you specified as the dwcc-output
directory. The automatically-generated file name is databaseName.catalogName.dwec.ttl
. You can rename the file or leave the default, and then upload it to your ddw-catalogs dataset (or wherever you store your catalogs).
Caution
If there is already a .ttl catalog file with the same name in your ddw-catalogs dataset, when you add the new one it will overwrite the existing one.
Automating updates to your metadata catalog
Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:
Frequency of changes to the schema
Business criticality of up-to-date data
For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
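As an illustration, a weekly cron entry on the machine that runs Docker might look like the following. This is a sketch only; the schedule, the wrapper script name run-dwcc-adls.sh (assumed to contain your collector command), and the log path are placeholders to adapt to your environment.

# Run the ADLS Gen2 collector every Sunday at 2:00 AM and append output to a log file
0 2 * * 0 /usr/local/bin/run-dwcc-adls.sh >> /var/log/dwcc-adls.log 2>&1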