Apache Spark

Apache Spark and DWCC

Metadata for an Apache Spark data source can be cataloged using the Hive metadata collector for data.world. To use it, create a Hive metadata collector connection (full instructions are in the Hive collector documentation), and then point the collector at your Spark endpoint.
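This works because Spark's Thrift Server speaks the same HiveServer2 protocol that the Hive collector expects, so the connection details take the same shape as for a Hive deployment. A minimal sketch (the host name, port, and database are placeholder values, not documented defaults for any specific deployment):

```python
# Sketch: the Hive collector treats Spark's Thrift Server as a
# HiveServer2 endpoint, so the connection details take the same shape
# as they would for Hive itself. Host, port, and database below are
# placeholders for illustration only.

def hive_jdbc_url(host: str, port: int = 10000, database: str = "default") -> str:
    """Build the HiveServer2-style JDBC URL a Hive metadata collector would use."""
    return f"jdbc:hive2://{host}:{port}/{database}"

# Pointing the collector at Spark means supplying the Spark Thrift
# Server's host and port where a HiveServer2 instance would normally go.
print(hive_jdbc_url("spark-thrift.example.com"))
# → jdbc:hive2://spark-thrift.example.com:10000/default
```

Port 10000 is the default for both HiveServer2 and the Spark Thrift Server, which is why the two are interchangeable from the collector's point of view.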


This integration supports Apache Spark only. If you are interested in cataloging Databricks Spark or another Spark variant, please contact us.