{"id":51653,"date":"2025-08-20T04:44:42","date_gmt":"2025-08-20T04:44:42","guid":{"rendered":"https:\/\/www.devopsschool.com\/blog\/?p=51653"},"modified":"2025-08-20T04:44:42","modified_gmt":"2025-08-20T04:44:42","slug":"kafka-confluent-terminology-complete-glossary","status":"publish","type":"post","link":"https:\/\/www.devopsschool.com\/blog\/kafka-confluent-terminology-complete-glossary\/","title":{"rendered":"Kafka &amp; Confluent Terminology \u2013 Complete Glossary"},"content":{"rendered":"\n<p><br>I\u2019ll cover <strong>Apache Kafka core terms<\/strong>, <strong>Confluent Platform extensions<\/strong>, and <strong>Confluent Cloud additions<\/strong>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>Kafka &amp; Confluent Terminology \u2013 Complete Glossary<\/strong><\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">\ud83d\udd39 <strong>Core Kafka Concepts<\/strong><\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Kafka Cluster<\/strong><br>A group of servers (called brokers) working together to store, process, and stream data.<\/li>\n\n\n\n<li><strong>Broker<\/strong><br>A single Kafka server that stores data and serves client requests (produce\/consume).<\/li>\n\n\n\n<li><strong>Producer<\/strong><br>An application that sends data (messages) into Kafka topics.<\/li>\n\n\n\n<li><strong>Consumer<\/strong><br>An application that reads data (messages) from Kafka topics.<\/li>\n\n\n\n<li><strong>Consumer Group<\/strong><br>A group of consumers working together to read data from a topic. Kafka ensures each message is processed by <strong>only one consumer within the group<\/strong>.<\/li>\n\n\n\n<li><strong>Topic<\/strong><br>A named channel where producers send messages and consumers read messages (like a folder or queue).<\/li>\n\n\n\n<li><strong>Partition<\/strong><br>A topic is divided into slices called partitions. Messages in a partition are ordered. 
Partitions allow parallelism and scalability.<\/li>\n\n\n\n<li><strong>Offset<\/strong><br>The position of a message in a partition (like a bookmark). Consumers use offsets to track what they\u2019ve read.<\/li>\n\n\n\n<li><strong>Record \/ Message<\/strong><br>A single unit of data in Kafka. It has:\n<ul class=\"wp-block-list\">\n<li><strong>Key<\/strong> (optional, used for partitioning\/order)<\/li>\n\n\n\n<li><strong>Value<\/strong> (the actual payload)<\/li>\n\n\n\n<li><strong>Headers<\/strong> (extra metadata)<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Log<\/strong><br>A partition is stored as an append-only log (new messages are always written at the end).<\/li>\n\n\n\n<li><strong>Replication<\/strong><br>Kafka keeps copies of partitions across multiple brokers for fault tolerance.<\/li>\n\n\n\n<li><strong>Leader &amp; Follower<\/strong>\n<ul class=\"wp-block-list\">\n<li><strong>Leader<\/strong>: The main replica of a partition that handles all reads\/writes.<\/li>\n\n\n\n<li><strong>Follower<\/strong>: Copies the leader\u2019s data for backup.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>ISR (In-Sync Replicas)<\/strong><br>A set of replicas that are fully caught up with the leader.<\/li>\n\n\n\n<li><strong>Retention Policy<\/strong><br>Defines how long Kafka keeps data (e.g., 7 days, forever, or until size limit).<\/li>\n\n\n\n<li><strong>Compaction<\/strong><br>A cleanup policy that keeps only the <strong>latest value per key<\/strong>, deleting older duplicates.<\/li>\n\n\n\n<li><strong>Throughput<\/strong><br>The rate at which Kafka processes messages (messages per second).<\/li>\n\n\n\n<li><strong>Latency<\/strong><br>The time it takes for a message to travel from producer \u2192 broker \u2192 consumer.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">\ud83d\udd39 <strong>Kafka Internals<\/strong><\/h2>\n\n\n\n<ol start=\"18\" class=\"wp-block-list\">\n<li><strong>ZooKeeper 
(Legacy)<\/strong><br>Used in older Kafka versions to manage cluster metadata and leader election. (Replaced by <strong>KRaft<\/strong>; ZooKeeper is removed entirely as of Kafka 4.0.)<\/li>\n\n\n\n<li><strong>KRaft (Kafka Raft Metadata mode)<\/strong><br>New architecture where Kafka itself manages metadata, removing the need for ZooKeeper.<\/li>\n\n\n\n<li><strong>Controller<\/strong><br>A special broker responsible for partition leader election and for propagating cluster metadata changes.<\/li>\n\n\n\n<li><strong>Rebalancing<\/strong><br>When consumers join\/leave a group, Kafka redistributes partitions among them.<\/li>\n\n\n\n<li><strong>Coordinator<\/strong><br>The broker responsible for managing a consumer group.<\/li>\n\n\n\n<li><strong>ACL (Access Control List)<\/strong><br>Security rules defining which user\/app can access which topic or resource.<\/li>\n\n\n\n<li><strong>Quotas<\/strong><br>Limits on how much data a client can produce\/consume to prevent abuse.<\/li>\n\n\n\n<li><strong>Idempotent Producer<\/strong><br>Ensures no duplicate messages are produced even if retries happen.<\/li>\n\n\n\n<li><strong>Exactly-Once Semantics (EOS)<\/strong><br>Guarantee that messages are processed only once, even during failures.<\/li>\n\n\n\n<li><strong>Transactions<\/strong><br>A way to group multiple messages into an atomic unit of work.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">\ud83d\udd39 <strong>Confluent-Specific Terms<\/strong><\/h2>\n\n\n\n<ol start=\"28\" class=\"wp-block-list\">\n<li><strong>Confluent Platform<\/strong><br>An enterprise distribution of Kafka with additional tools for management, monitoring, and integration.<\/li>\n\n\n\n<li><strong>Confluent Cloud<\/strong><br>A fully managed Kafka service hosted by Confluent on AWS, Azure, or GCP.<\/li>\n\n\n\n<li><strong>Schema Registry<\/strong><br>Stores and enforces schemas (data formats) for messages (e.g., Avro, JSON, Protobuf) to ensure compatibility.<\/li>\n\n\n\n<li><strong>kSQL \/ 
ksqlDB<\/strong><br>A SQL-like engine to query, process, and transform Kafka streams in real time.<\/li>\n\n\n\n<li><strong>Kafka Connect<\/strong><br>A framework to move data in\/out of Kafka using connectors (e.g., JDBC, S3, Elasticsearch).<\/li>\n\n\n\n<li><strong>Connector<\/strong><br>A plugin used with Kafka Connect to integrate Kafka with external systems.\n<ul class=\"wp-block-list\">\n<li><strong>Source Connector<\/strong>: Pulls data into Kafka.<\/li>\n\n\n\n<li><strong>Sink Connector<\/strong>: Pushes data out of Kafka.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Confluent Hub<\/strong><br>A marketplace of prebuilt Kafka connectors.<\/li>\n\n\n\n<li><strong>Confluent Control Center<\/strong><br>A GUI tool for monitoring Kafka clusters, topics, connectors, and schemas.<\/li>\n\n\n\n<li><strong>Replicator<\/strong><br>A Confluent tool to copy topics from one Kafka cluster to another (useful for multi-region).<\/li>\n\n\n\n<li><strong>Confluent REST Proxy<\/strong><br>Allows producing\/consuming data using REST APIs instead of Kafka clients.<\/li>\n\n\n\n<li><strong>Confluent RBAC (Role-Based Access Control)<\/strong><br>Fine-grained access control for Kafka resources.<\/li>\n\n\n\n<li><strong>Confluent CLI<\/strong><br>A command-line tool for managing Confluent Cloud clusters, topics, and connectors.<\/li>\n\n\n\n<li><strong>Tiered Storage<\/strong><br>A Confluent feature that offloads older Kafka data to cheaper cloud storage (e.g., S3, GCS).<\/li>\n\n\n\n<li><strong>Cluster Linking<\/strong><br>A Confluent Cloud feature to link clusters across regions\/clouds for data replication.<\/li>\n\n\n\n<li><strong>Confluent Cloud Metrics API<\/strong><br>Provides usage and performance metrics for monitoring clusters.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">\ud83d\udd39 <strong>Stream Processing Terms<\/strong><\/h2>\n\n\n\n<ol start=\"43\" class=\"wp-block-list\">\n<li><strong>Kafka 
Streams<\/strong><br>A Java library for building real-time streaming applications on top of Kafka.<\/li>\n\n\n\n<li><strong>Stream<\/strong><br>A continuous flow of data records in Kafka.<\/li>\n\n\n\n<li><strong>Stream Processor<\/strong><br>An application that transforms or processes Kafka data in real time.<\/li>\n\n\n\n<li><strong>Topology<\/strong><br>The workflow (graph of processors) that defines how streams are processed.<\/li>\n\n\n\n<li><strong>State Store<\/strong><br>Local storage used by stream processing apps to maintain state (e.g., counts, aggregations).<\/li>\n\n\n\n<li><strong>Global Store<\/strong><br>A replicated state store available to all stream tasks.<\/li>\n\n\n\n<li><strong>Windowing<\/strong><br>Grouping data by time intervals (e.g., 5-minute sales totals).<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">\ud83d\udd39 <strong>Advanced Kafka Concepts<\/strong><\/h2>\n\n\n\n<ol start=\"50\" class=\"wp-block-list\">\n<li><strong>Reassignment<\/strong><br>Moving partition replicas across brokers for load balancing.<\/li>\n\n\n\n<li><strong>Throttling<\/strong><br>Slowing down producers\/consumers to avoid overwhelming the cluster.<\/li>\n\n\n\n<li><strong>Backpressure<\/strong><br>When consumers can\u2019t keep up with producers, causing slowdowns.<\/li>\n\n\n\n<li><strong>Dead Letter Queue (DLQ)<\/strong><br>A special topic where failed or invalid messages are sent for later debugging.<\/li>\n\n\n\n<li><strong>MirrorMaker 2.0<\/strong><br>Kafka\u2019s built-in tool for replicating data across clusters (open-source equivalent of Confluent Replicator).<\/li>\n\n\n\n<li><strong>Metrics &amp; JMX<\/strong><br>Kafka exposes metrics via JMX for monitoring cluster health.<\/li>\n\n\n\n<li><strong>Log Segment<\/strong><br>Each partition\u2019s log is broken into smaller files called log segments.<\/li>\n\n\n\n<li><strong>Message Key Partitioning<\/strong><br>The method Kafka uses to 
decide which partition a message goes to (based on key hash).<\/li>\n\n\n\n<li><strong>Rack Awareness<\/strong><br>Kafka spreads replicas across different racks\/data centers for reliability.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n","protected":false},"excerpt":{"rendered":"<p>I\u2019ll cover Apache Kafka core terms, Confluent Platform extensions, and Confluent Cloud additions. Kafka &amp; Confluent Terminology \u2013 Complete Glossary \ud83d\udd39 Core Kafka Concepts \ud83d\udd39 Kafka Internals \ud83d\udd39 Confluent-Specific Terms&#8230; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_joinchat":[],"footnotes":""},"categories":[2],"tags":[]}