Amazon S3 and the data.world Collector
Note
The latest version of the Collector is 2.129. To view the release notes for this version and all previous versions, please go here.
About the collector
Use this collector to harvest metadata about S3 buckets and objects directly from your Amazon S3 instance. Note that if you want to harvest tables and columns from Amazon S3 objects that have been cataloged in the AWS Glue Data Catalog, you must use the AWS Glue collector instead.
Authentication supported
The S3 collector authenticates to S3 using the default credential profiles file. The collector needs a user created in the AWS portal with read access to S3.
What is cataloged
The collector catalogs the following information.
Object | Information cataloged |
---|---|
Buckets | |
Objects | |
Relationships between objects
By default, the harvested metadata includes catalog pages for the following resource types. Each catalog page has a relationship to the other related resource types. If the metadata presentation for this data source has been customized with the help of the data.world Solutions team, you may see other resource pages and relationships.
Resource page | Relationship |
---|---|
S3 Bucket | Relationship to S3 Object |
S3 Object | Relationship to S3 Bucket |
Important note about maximum limits for S3 buckets
To keep harvesting performant, the system sets limits on how much metadata is harvested from S3 buckets.
By default, the collector harvests at most 10,000 objects per bucket. If a bucket exceeds this limit, the bucket is skipped, no metadata is harvested for it, and a warning message is logged for the bucket.
If you want to override the default limit, set the --max-resources parameter in your collector command, as shown in the example below. The maximum value for this parameter is 10,000,000 (ten million). If the total contents (all buckets and objects) exceed this limit, further buckets are not cataloged and a warning message is logged.
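As a sketch, the override is just an extra argument on the collector command described later in this article; the collector version, account, region, and catalog name below are placeholders.

```bash
# Hypothetical example: allow up to 50,000 objects per run instead of the default 10,000
docker run -it --rm \
  --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/path/to/local/.aws/credentials,target=/root/.aws/credentials \
  datadotworld/dwcc:<CollectorVersion> \
  catalog-amazon-s3 -a <account> --aws-region=us-east-1 \
  --max-resources=50000 -n <catalogName> -o "/dwcc-output"
```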
Setting up authentication for cataloging Amazon S3
This section walks you through the process of setting up a user with the S3 read-access policy and creating a credentials profile file.
Creating a user
Skip this step if you already have a user that you want to run the collector with and that user has ReadOnlyAccess to Amazon S3. Detailed AWS documentation on this topic is available here.
Log in to the AWS portal and navigate to the IAM service. Under Users, click Add users to add a user. You can also select an existing user.
On the next screen:
In the Permissions option, select Add permissions (attach policies directly).
In the Permissions policies section, select AmazonS3ReadOnlyAccess.
Click Next.
On the last screen, click Add permissions or Create user.
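If you prefer the AWS CLI over the console, the same setup can be sketched with the commands below; the user name dw-s3-collector is only an example.

```bash
# Example only: create a dedicated IAM user and attach the AWS-managed
# AmazonS3ReadOnlyAccess policy to it (user name is illustrative).
aws iam create-user --user-name dw-s3-collector
aws iam attach-user-policy \
  --user-name dw-s3-collector \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```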
Obtaining access key for the user
Skip this step if you already have the access key for the user that you plan to use for running the collector. Detailed AWS documentation on this topic is available here.
Log in to the AWS portal and navigate to the IAM service.
Under Users, select the user that you plan to use for the collector.
On the Security credentials tab, click Create access key.
Select Application running outside AWS. Click Next.
Add the optional Description tag. Click Create Access key.
Note down the Access key ID and Secret access key. You will need this information for setting up the credentials file.
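Alternatively, if you use the AWS CLI, you can create the access key there as well; the user name is the same illustrative one from the previous section.

```bash
# Example only: create an access key for the collector user and note the
# AccessKeyId and SecretAccessKey values from the JSON output.
aws iam create-access-key --user-name dw-s3-collector
```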
Setting up credentials file
Skip this step if you already have the AWS CLI installed and credentials profiles file set up.
Install the AWS CLI.
From the command line, run aws configure. This stores the credentials in ~/.aws/credentials.
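For reference, a default credentials profile written by aws configure looks like the sketch below; the key values shown are placeholders, not real credentials.

```
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```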
Pre-requisites for running the collector
Make sure that the machine from which you are running the collector meets the following hardware and software requirements.
Item | Requirement |
---|---|
Hardware | |
RAM | 8 GB |
CPU | 2 GHz processor |
Software | |
Docker | Click here to get Docker. |
Java Runtime Environment | OpenJDK 17 is supported and available here. |
data.world specific objects | |
Dataset | You must have a ddw-catalogs (or other) dataset set up to hold your catalog files when you are done running the collector. |
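A quick way to confirm the software prerequisites on the machine that will run the collector is to check the installed versions:

```bash
# Verify Docker and Java are installed and on the PATH
docker --version
java -version
```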
Ways to run the data.world Collector
There are a few different ways to run the data.world Collector, any of which can be combined with an automation strategy to keep your catalog up to date:
Create a configuration file (config.yml) - This option stores all the information needed to catalog your data sources. It is an especially valuable option if you have multiple data sources to catalog as you don't need to run multiple scripts or CLI commands separately.
Run the collector through a CLI - Repeated runs of the collector require you to re-enter the command for each run.
Note
This section walks you through the process of running the collector using the CLI.
Preparing and running the command
The easiest way to create your Collector command is to:
Copy the following example command in a text editor.
Set the required parameters in the command. The example command includes the minimal parameters required to run the collector.
Open a terminal window in any Unix environment that uses a Bash shell, paste the command into it, and run it.
```bash
docker run -it --rm \
  --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/tmp,target=/app/log \
  --mount type=bind,source=/path/to/local/.aws/credentials,target=/root/.aws/credentials \
  datadotworld/dwcc:<CollectorVersion> \
  catalog-amazon-s3 -a <account> \
  --aws-region=<awsRegion> --max-resources=<maxResources> \
  -n <catalogName> -o "/dwcc-output"
```
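For illustration, here is the same command with hypothetical values filled in (organization my-org, collector version 2.129, region us-east-1, catalog name s3-catalog); substitute your own values.

```bash
docker run -it --rm \
  --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/tmp,target=/app/log \
  --mount type=bind,source=$HOME/.aws/credentials,target=/root/.aws/credentials \
  datadotworld/dwcc:2.129 \
  catalog-amazon-s3 -a my-org \
  --aws-region=us-east-1 --max-resources=10000 \
  -n s3-catalog -o "/dwcc-output"
```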
The following table describes the parameters for the command. Detailed information about the Docker portion of the command can be found here.
Parameter | Details | Required? |
---|---|---|
Location of credentials profiles file | Provide the location of the credentials file you generated for authentication. For example, --mount type=bind,source=~/.aws/credentials,target=/root/.aws/credentials | Yes |
dwcc:<CollectorVersion> | Replace <CollectorVersion> with the version of the collector you want to use (for example, 2.129). | Yes |
-A, --all-schemas | Catalog all schemas to which the user has access (exclusive of --schema). | Yes |
-a=<agent>, --agent=<agent>, --account=<agent> | The ID for the data.world account into which you will load this catalog. The ID is the organization name as it appears in your organization. It is used to generate the namespace for any URIs generated. You must specify either this parameter or the --base parameter. | Yes (if the base parameter is not provided) |
-b=<base>, --base=<base> | The base URI to use as the namespace for any URIs generated. You must specify either this parameter or the --agent parameter. | Yes (if the agent parameter is not provided) |
--aws-region=<awsRegion> | The AWS region used to initialize the S3 client. | Yes |
--max-resources=<maxResources> | The maximum number of resources the collector should harvest. More details on when this parameter should be set are available here. | No |
-n=<catalogName> --name=<catalogName> | The name of the catalog - this will be used to generate the ID for the catalog as well as the filename into which the catalog file will be written. | Yes |
-o=<outputDir>, --output=<outputDir> | The output directory into which any catalog files should be written. In our example we use /dwcc-output because the collector runs in a Docker container and that is the mount point we specified in the command. You can change this value to anything you would like as long as it matches what you use in the mount point: --mount type=bind,source=/tmp,target=/dwcc-output ... -o /dwcc-output. In this example, the output will be written to the /tmp directory on the local machine, as indicated by the mount point directive. The log file, in addition to any catalog files, will be written to the directory specified in the mount point directive. | Yes |
-L, --no-log-upload | Do not upload the log of the dwcc run to the organization account's catalogs dataset or to another location specified with --upload-location (ignored if --upload is not specified). | No |
--site=<site> | The slug for the data.world site into which you will load this catalog; this is used to generate the namespace for any URIs generated. | No |
-H=<host>, --api-host=<host> | The host for the data.world API. | No |
-t=<apiToken> --api-token=<apiToken> | The data.world API token to use for authentication. The default is to use an environment variable named DW_AUTH_TOKEN. In order to automatically upload the catalog to data.world, you will need a read/write API token for data.world. | No |
-U, --upload | Whether to upload the generated catalog to the organization account's catalogs dataset or to another location specified with --upload-location (this requires that --api-token is specified). | No |
--upload-location=<uploadLocation> | The dataset to which the catalog is to be uploaded, specified as a simple dataset name to upload to that dataset within the organization's account, or [account/dataset] to upload to a dataset in some other account. This parameter is ignored if --upload is not specified. | No |
-z=<postProcessSparql> --post-process-sparql=<postProcessSparql> | A file containing a SPARQL query to execute to transform the catalog graph emitted by the collector. | No |
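To have the collector upload the catalog automatically instead of uploading the .ttl file by hand, you can combine the -U, --api-token, and --upload-location options. The sketch below assumes a DW_AUTH_TOKEN environment variable on the host holds a read/write data.world token and passes it into the container with Docker's -e flag; the organization, dataset, and other values are placeholders.

```bash
# Example only: run the collector and upload the catalog to the ddw-catalogs
# dataset in the my-org organization (names and values are placeholders).
docker run -it --rm -e DW_AUTH_TOKEN \
  --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=$HOME/.aws/credentials,target=/root/.aws/credentials \
  datadotworld/dwcc:<CollectorVersion> \
  catalog-amazon-s3 -a my-org --aws-region=us-east-1 \
  -n s3-catalog -o "/dwcc-output" \
  -U --upload-location=ddw-catalogs
```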
Common troubleshooting tasks
Collector runtime and troubleshooting
The catalog collector may run in several seconds to many minutes depending on the size and complexity of the system being crawled. If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should be in the directory you specified for the output. If there was an issue connecting or running the catalog collector, there will be either a stack trace or a *.log file. Both of those can be sent to support to investigate if the errors are not clear. A list of common issues and problems encountered when running the collectors is available here.
Issue 1: An access error occurs while running the collector
Cause: The account used to authenticate to Amazon S3 does not have permissions to read buckets or objects.
Solution: Follow the instructions to set up the user and permissions for the collector.
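You can also verify outside the collector that the credentials can read S3, for example by listing buckets with the AWS CLI using the same credentials file:

```bash
# If this fails with AccessDenied, the IAM user is missing S3 read permissions
aws s3 ls
```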
Issue 2: An invalid access token error occurs while running the collector
Cause: The access key for the AWS account is expired or is incorrect.
Solution: Delete the ~/.aws/credentials file, then repeat the steps to obtain the access key and set up the credentials file.
Issue 3: Resources from certain S3 buckets are not cataloged
Cause: This issue generally happens when a bucket contains more resources than the default limit of 10,000 objects, or more than the value set in the --max-resources parameter.
Solution: Check whether the --max-resources parameter is set and, if so, what value is configured for it.
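To see whether a bucket is over the limit, you can count its objects with the AWS CLI; the bucket name below is a placeholder.

```bash
# Prints "Total Objects:" and "Total Size:" for the bucket (name is illustrative)
aws s3 ls s3://my-bucket --recursive --summarize | tail -2
```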
Upload the .ttl file generated from running the Collector
When the data.world Collector runs successfully, it creates a .ttl file in the directory you specified as the dwcc-output directory. The automatically-generated file name is databaseName.catalogName.dwec.ttl. You can rename the file or leave the default, and then upload it to your ddw-catalogs dataset (or wherever you store your catalogs).
Caution
If there is already a .ttl catalog file with the same name in your ddw-catalogs dataset, when you add the new one it will overwrite the existing one.
Automating updates to your metadata catalog
Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:
Frequency of changes to the schema
Business criticality of up-to-date data
For organizations with schemas that change often and where surfacing the latest data is business critical, daily may be appropriate. For those with schemas that do not change often and which are less critical, weekly or even monthly may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
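As one possible approach, a small wrapper script invoked from cron can run the collector on a weekly schedule; the paths, organization name, catalog name, and schedule below are all illustrative.

```bash
#!/bin/bash
# /usr/local/bin/run-dwcc-s3.sh (example wrapper script; values are placeholders)
docker run --rm \
  --mount type=bind,source=/tmp,target=/dwcc-output \
  --mount type=bind,source=/home/youruser/.aws/credentials,target=/root/.aws/credentials \
  datadotworld/dwcc:<CollectorVersion> \
  catalog-amazon-s3 -a my-org --aws-region=us-east-1 \
  -n s3-catalog -o "/dwcc-output"

# Example crontab entry: run the script every Sunday at 02:00 and append output to a log
# 0 2 * * 0 /usr/local/bin/run-dwcc-s3.sh >> /var/log/dwcc-s3.log 2>&1
```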