About the Confluent Platform collector
Confluent Platform is a data streaming platform that extends Apache Kafka. Use this collector to harvest metadata from a Confluent Platform Kafka cluster running on-premise or in an environment managed by the user (or the user's organization). The collector can optionally harvest metadata from Avro, JSON Schema, and Protobuf schemas stored in Confluent Schema Registry.
Important
The Kafka - Confluent Platform collector can be run on-premise using Docker or JAR files.
Note
The latest version of the collector is 2.247. To view the release notes for this version and all previous versions, go here.
What is cataloged
The collector catalogs the following information.
Important
Note that the collector only harvests schemas in the Confluent Schema Registry that are registered under a subject matching a topic's key or value, following the default TopicNameStrategy naming strategy described in the Confluent Schema Registry documentation. Schemas registered under other subjects are not currently harvested. A sketch of this subject-to-topic matching follows the table below.
Object | Information cataloged
---|---
Cluster |
Producer |
Consumer |
Broker |
Partition |
Schema |
Consumer Group |
Topic |
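Under TopicNameStrategy, a schema for a topic's key is registered under the subject `<topic>-key`, and a schema for its value under `<topic>-value`. The following minimal Java sketch illustrates the matching rule described above; the topic names and subject list are hypothetical, and this is an illustration of the rule rather than the collector's actual implementation.

```java
import java.util.List;
import java.util.Set;

public class TopicNameStrategyMatch {
    public static void main(String[] args) {
        // Hypothetical topics discovered in the cluster.
        Set<String> topics = Set.of("orders", "payments");

        // Hypothetical subjects found in the Schema Registry.
        List<String> subjects = List.of(
                "orders-key",       // matches: key schema for topic "orders"
                "orders-value",     // matches: value schema for topic "orders"
                "payments-value",   // matches: value schema for topic "payments"
                "invoices-value",   // no matching topic -> not harvested
                "shared-record");   // not in <topic>-key/<topic>-value form -> not harvested

        for (String subject : subjects) {
            // TopicNameStrategy: the subject must be "<topic>-key" or
            // "<topic>-value" for a topic that exists in the cluster.
            boolean harvested =
                    (subject.endsWith("-key")
                            && topics.contains(subject.substring(0, subject.length() - 4)))
                    || (subject.endsWith("-value")
                            && topics.contains(subject.substring(0, subject.length() - 6)));
            System.out.println(subject + " -> " + (harvested ? "harvested" : "skipped"));
        }
    }
}
```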
Relationships between objects
By default, the harvested metadata includes catalog pages for the following resource types. Each catalog page includes relationships to the related resource types. If the metadata presentation for this data source has been customized with the help of the data.world Solutions team, you may see other resource pages and relationships.
Resource page | Relationship
---|---
Cluster |
Producer |
Consumer Group |
Consumer |
Broker |
Topic |
Partition |
Schema |
Versions supported
The collector supports version 6.1.0 and later of the Kafka Admin API and works with any Confluent Kafka cluster compatible with that version.
Note
For Confluent Platform versions 6.2.0 and later, the collector uses the topic ID to uniquely identify topics. Earlier versions of Confluent Platform do not assign unique identifiers to topics, so the collector relies on the topic name for identification.
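To illustrate the distinction, the following minimal Java sketch uses the Apache Kafka AdminClient to read both the name and, on clusters that assign them, the topic ID of each topic. The bootstrap address is a placeholder, and the sketch assumes a recent kafka-clients library (3.1 or newer, for `allTopicNames()`); it is not the collector's own code.

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ListTopicIds {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at your own cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (Admin admin = Admin.create(props)) {
            // Describe every topic in the cluster.
            Map<String, TopicDescription> topics = admin
                    .describeTopics(admin.listTopics().names().get())
                    .allTopicNames()
                    .get();

            for (TopicDescription t : topics.values()) {
                // topicId() returns a real Uuid on brokers that assign topic IDs
                // (Confluent Platform 6.2+); older brokers report a zero UUID,
                // leaving only the name as an identifier.
                System.out.println(t.name() + " -> " + t.topicId());
            }
        }
    }
}
```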
Authentication supported
The collector authenticates to a Kafka cluster using Simple Authentication and Security Layer (SASL) with a username/password credential. For SASL, the collector supports both the PLAIN and SCRAM-SHA-512 authentication mechanisms.
By default, the collector assumes that SASL is used over Secure Sockets Layer (SSL). In cases where SSL is disabled (for example, on internal Kafka test clusters), you can disable SSL for the collector. Consult the Apache Kafka documentation for more information on Kafka security.
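For reference, the standard Kafka client settings involved look like the following minimal Java sketch; the broker address, username, and password are placeholders, and the collector sets the equivalent values from its own connection parameters.

```java
import java.util.Properties;

public class SaslClientConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address.
        props.put("bootstrap.servers", "broker1:9092");

        // SASL over SSL (the collector's default assumption).
        props.put("security.protocol", "SASL_SSL");
        // Or "PLAIN" for the PLAIN mechanism.
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        // Username/password credential: ScramLoginModule for SCRAM,
        // PlainLoginModule for PLAIN.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"alice\" password=\"secret\";");

        // If SSL is disabled on the cluster, SASL_PLAINTEXT is used instead:
        // props.put("security.protocol", "SASL_PLAINTEXT");

        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```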