Troubleshooting Databricks collector issues
Collector runtime and troubleshooting
The catalog collector may take anywhere from several seconds to many minutes to run, depending on the size and complexity of the system being crawled.
If the catalog collector runs without issues, you should see no output on the terminal, but a new file matching *.dwec.ttl should appear in the directory you specified for the output.
If there was an issue connecting to or running the catalog collector, there will be either a stack trace or a *.log file. Either of these can be sent to support for investigation if the errors are not clear.
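For example, a quick way to confirm the outcome of a run is to list the output directory for the catalog file and any log files. The ./output path below is only an assumption; substitute the directory you passed to the collector.

```bash
# Check the directory passed to the collector for its outputs.
# "./output" is an assumed path -- use the directory you actually specified.
ls -lh ./output/*.dwec.ttl   # present after a successful run
ls -lh ./output/*.log        # present when the run failed; send this file to support
```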
A list of common issues encountered when running the collectors is provided below.
Issue 1: Not all desired tables displayed after the collector run is complete
Cause: The all-schemas or schema parameter is missing from the command line or YAML file.
Solution: Check your command or YAML file to make sure the all-schemas or schema parameter is set up properly.
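As a hypothetical sketch, a command-line invocation that harvests every schema might look like the following. The image name, collector command, and exact flag spellings are assumptions here; only the all-schemas and schema parameter names come from this article, so confirm the details against the documentation for your collector version.

```bash
# Hypothetical sketch -- image name, command, and flag spellings are assumptions.
docker run -it --rm \
  -v "$(pwd)/output":/dwcc-output \
  datadotworld/dwcc:latest \
  catalog-databricks \
  --output=/dwcc-output \
  --all-schemas
# ...or, to harvest specific schemas instead of all of them, replace
# --all-schemas with one or more --schema=<schema-name> options.
```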
Issue 2: Collector stops harvesting metadata and log messages show communication failures
The following error messages are observed in the error logs:
Communication link failure. Failed to connect to server. Reason: HTTP Response code: 504
Communication link failure. Failed to connect to server. Reason: HTTP Response code: 502
Within the Databricks event log for the cluster, you may also see the following messages: Driver is up but is not responsive, likely due to GC, or java.lang.OutOfMemoryError: GC overhead limit exceeded.
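One way to confirm this failure mode is to search the collector's log file for the communication errors above. The ./output/*.log path is an assumption; use the *.log file the collector actually wrote.

```bash
# Search the collector log for the communication failures described above.
grep -n "Communication link failure" ./output/*.log
grep -n "HTTP Response code: 50" ./output/*.log
```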
Cause: The Databricks cluster has insufficient memory. When memory is low, garbage collection runs frequently and slows down the driver. For details, see the Databricks troubleshooting article.
Solution: Change the Databricks cluster driver type to an instance that has more memory. For details, see the Databricks documentation.
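If you manage the cluster through the Databricks REST API rather than the UI, a hedged sketch of the change looks like the following. It uses the Clusters API clusters/edit call; the instance type is only an example, and clusters/edit expects the cluster's full specification rather than just the changed field, so copy the current settings from clusters/get first.

```bash
# Sketch: move the cluster driver to a larger-memory instance type via the
# Databricks Clusters API. Placeholder values in angle brackets must be
# replaced; the instance type and worker count are examples, not recommendations.
curl -X POST "https://<your-workspace-host>/api/2.0/clusters/edit" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "cluster_id": "<cluster-id>",
        "cluster_name": "<cluster-name>",
        "spark_version": "<current-spark-version>",
        "node_type_id": "<current-worker-node-type>",
        "driver_node_type_id": "<larger-memory-instance-type>",
        "num_workers": 2
      }'
```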