Troubleshooting the collectors

If you are having difficulty running a collector, the following list of common problems can help you troubleshoot what went wrong. If your issue is not covered here, please contact support for more assistance.

User permission issues for the Collector

If a run of the collector does not capture everything in the catalog that you think should be there, the first thing to check is the user account you use to connect to your resource. Authenticate to the resource outside of the collector and confirm that you can find those objects. For a database, for instance, log into the database with a client (preferably a JDBC client such as DBeaver) and look for the objects. If the objects don't show up there either, it's a permissions issue.
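This kind of visibility check can also be scripted. The sketch below uses Python's built-in sqlite3 module purely as a stand-in; for a real source you would connect with that database's own driver, using the same account the collector uses, and query its catalog views (for example, information_schema.tables):

```python
import sqlite3

# Stand-in database: in a real check, connect with the same account the
# collector uses, via that database's own driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

# List the tables this connection can see. On most database servers the
# equivalent query is: SELECT table_name FROM information_schema.tables
visible = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(visible)
```

If a table you expect is missing from this list, the account cannot see it, and the collector won't be able to either.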

Overwriting files on upload to the catalog

When you run a collector, the output file name has the form [database name].[collection name].dwec.ttl. As a result, any time the collector is run more than once against the same database and uploaded to the same collection, the output file is overwritten. When you are cataloging all schemas in a database, this overwrite is fine: the previously produced file is simply updated.

However, there are instances (e.g., when it is necessary to catalog one schema in a database at a time) where reusing the same output file name overwrites unique information instead of updating it. In those cases, give each output file a unique name before uploading it to a collection in the catalog.

Currently, the way to upload unique files for different schemas in the same database is to:

  1. Disable automatic upload of the TTL files when running the collector

  2. Rename each output file with a unique name after running the collector

  3. Manually upload each of the newly created TTL files.
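The renaming step can be scripted. A minimal sketch, in which the database, collection, and schema names are made-up examples (only the [database name].[collection name].dwec.ttl pattern comes from this page):

```python
import os

# Hypothetical example values; substitute your own database, collection,
# and schema names.
database = "mydb"
collection = "sales-collection"
schema = "finance"

# Default output name produced by a collector run:
default_name = f"{database}.{collection}.dwec.ttl"

# Give the file a schema-specific name so a later run against another
# schema does not overwrite it on upload.
unique_name = f"{database}.{schema}.{collection}.dwec.ttl"

open(default_name, "w").close()  # stand-in for the collector's output file
os.rename(default_name, unique_name)
print(unique_name)
```

Running the collector once per schema and renaming each result this way leaves one uniquely named TTL file per schema, ready for manual upload.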

Allocating additional memory to Docker

When running the collector via Docker to catalog large bodies of metadata (e.g., a data source with hundreds or thousands of tables and many thousands of columns), you might exhaust the available memory in the collector's Docker container. To address this problem, increase the memory available to Docker. On Windows and macOS, this is handled via a Docker Desktop preference change: open the Docker preferences and select Resources > Advanced, then move the Memory slider to a higher value (for example, from 2 GB to 4 GB). If you are running on a native Linux host, the Docker host and native host are the same, so the memory available to Docker is all machine memory.

You can also increase the memory available to the collector container by terminating other containers running within the Docker host.
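To confirm how much memory the Docker host currently exposes, you can query docker info. A small sketch (the formatting helper and the printed wording are this page's illustration, not collector output):

```python
import shutil
import subprocess

def human_gb(num_bytes: int) -> str:
    """Format a byte count as gigabytes, e.g. 2147483648 -> '2.0 GB'."""
    return f"{num_bytes / (1024 ** 3):.1f} GB"

if shutil.which("docker"):
    # MemTotal is the total memory visible to the Docker host
    # (the Docker Desktop VM on Windows/macOS, the machine itself on Linux).
    out = subprocess.run(
        ["docker", "info", "--format", "{{.MemTotal}}"],
        capture_output=True, text=True)
    if out.returncode == 0 and out.stdout.strip().isdigit():
        print("Docker host memory:", human_gb(int(out.stdout.strip())))
    else:
        print("docker info failed; is the Docker daemon running?")
else:
    print("docker CLI not found on PATH")
```

If the reported total is at or below the 2 GB default, raising the slider (or, on Linux, freeing machine memory) is the fix described above.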

No JDBC driver found message for collectors with JDBC drivers provided

If you are using a collector for which the JDBC driver is bundled in the collector, you might get an INFO-level log message upon running the collector noting that no JDBC drivers were found in the directory where the collector looks for drivers. This message can safely be ignored. It is not an error message or a sign that anything is wrong; those conditions are logged at WARN or ERROR severity. You do not need to (nor should you) provide your own driver.

Case sensitivity

All database options passed to the collector (the name of the database, the names of any schemas, the role, user, password, etc.) are case-sensitive.
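For example, a schema passed as SALES will not match a schema stored as Sales. A hypothetical illustration of this exact-match behavior (the schema names are made up):

```python
# Hypothetical set of schema names exactly as the database stores them.
schemas_in_database = {"Sales", "Finance"}

def schema_exists(name: str) -> bool:
    # Option values are matched exactly, with no case folding.
    return name in schemas_in_database

print(schema_exists("Sales"))   # True
print(schema_exists("SALES"))   # False: wrong case, no match
```

If a collector run comes back empty, comparing the exact casing of your option values against the source system is a quick check.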