The Confluent CCDAK exam preparation guide is designed to provide candidates with the necessary information about the Apache Kafka Developer exam. It includes an exam summary, sample questions, a practice test, and the exam objectives, along with guidance on interpreting those objectives, so candidates can assess the types of questions that may be asked during the Confluent Certified Developer for Apache Kafka (CCDAK) exam.
All candidates are encouraged to review the CCDAK objectives and sample questions provided in this preparation guide. The Confluent Apache Kafka Developer certification is aimed at candidates who want to build their career in the developer domain and demonstrate their expertise. We suggest using the practice exam listed in this guide to become familiar with the exam environment and to identify the knowledge areas that need more work before you take the actual Confluent Certified Developer for Apache Kafka exam.
Confluent CCDAK Exam Summary:
| Exam Name | Confluent Certified Developer for Apache Kafka |
|---|---|
| Exam Code | CCDAK |
| Exam Price | $150 USD |
| Duration | 90 minutes |
| Number of Questions | 60 |
| Passing Score | Pass / Fail |
| Recommended Training / Books | Live Classes, Self-Paced Training |
| Schedule Exam | Schedule an Exam |
| Sample Questions | Confluent CCDAK Sample Questions |
| Recommended Practice | Confluent Certified Developer for Apache Kafka (CCDAK) Practice Test |
Confluent Apache Kafka Developer Syllabus:
Introductory Concepts:
- Write code to connect to a Kafka cluster (a Java sketch follows this list)
- Distinguish between leaders and followers and work with replicas
- Explain what a segment is and explore retention
- Use the CLI to work with topics, producers, and consumers
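To make the first objective concrete, here is a minimal sketch of connecting to a cluster using the Java AdminClient from the kafka-clients library; the bootstrap address is a placeholder for your own brokers.

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ClusterConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at your own brokers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Successfully listing topics confirms the client can reach the cluster
            Set<String> topics = admin.listTopics().names().get();
            topics.forEach(System.out::println);
        }
    }
}
```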
Working with Producers:
- Describe the work a producer performs, and the core components needed to produce messages
- Create producers and specify configuration properties
- Explain how to configure producers to know that Kafka receives messages
- Delve into how batching works and explore batching configurations
- Explore reacting to failed delivery and tuning producers with timeouts
- Use the APIs for Java, C#/.NET, or Python to create a producer (see the Java sketch below)
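The following Java sketch touches several of these objectives at once: acks=all so the producer knows Kafka received the message, batching settings, a delivery timeout, and a callback for reacting to failed delivery. The broker address, topic, and record contents are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the leader waits for all in-sync replicas before acknowledging
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Batching knobs: wait up to 20 ms to fill batches of up to 32 KB
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        // Upper bound on the total time allowed for a send, including retries
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("demo-topic", "key-1", "hello kafka");
            // The callback reports failed delivery after retries are exhausted
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Delivered to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any records still buffered
    }
}
```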
Consumers, Groups and Partitions:
- Create and manage consumers and their property files
- Illustrate how consumer groups and partitions provide scalability and fault tolerance
- Explore managing consumer offsets
- Tune fetch requests
- Explain how consumer groups are managed and their benefits
- Compare and contrast group management strategies and when you might use each
- Use the API for Java, C#/.NET, or Python to create a consumer (see the Java sketch below)
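A minimal Java consumer sketch covering group membership, a fetch-tuning property, and manual offset commits; the group id, topic, and broker address are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Consumers sharing a group.id split the topic's partitions among themselves
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        // Commit offsets explicitly rather than on the auto-commit timer
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // Fetch tuning: wait for at least 1 KB of data per fetch request
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1024);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                        record.topic(), record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // record the group's progress
            }
        }
    }
}
```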
Schemas and the Confluent Schema Registry:
- Describe Kafka schemas and how they work
- Write an Avro-compatible schema and explore using Protobuf and JSON schemas
- Write schemas that can evolve
- Write and read messages using schema-enabled Kafka client applications
- Using Avro and the API for Java, C#/.NET, or Python, write a schema-enabled producer or consumer that leverages the Confluent Schema Registry (illustrated below)
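Below is a sketch of a schema-enabled Java producer. It assumes Confluent's KafkaAvroSerializer and a Schema Registry running at http://localhost:8081; the schema, topic, and registry URL are illustrative. Note the default value on the age field, which is the kind of detail that lets a schema evolve compatibly.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducer {
    // A small Avro record schema; the default on "age" keeps it evolvable
    private static final String USER_SCHEMA = """
        {"type": "record", "name": "User", "fields": [
          {"name": "name", "type": "string"},
          {"name": "age", "type": "int", "default": 0}
        ]}""";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's Avro serializer registers and fetches schemas automatically
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // Placeholder registry address
        props.put("schema.registry.url", "http://localhost:8081");

        Schema schema = new Schema.Parser().parse(USER_SCHEMA);
        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "jane");
        user.put("age", 34);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("users-avro", "jane", user));
        }
    }
}
```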
Streaming and Kafka Streams:
- Develop an appreciation for what streaming applications can do for you back on the job
- Describe Kafka Streams and explore streams properties and topologies
- Compare and contrast streams and tables, and relate events in streams to records/messages in topics
- Write an application using the Streams DSL (Domain-Specific Language); a sketch follows
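A small Streams DSL application for reference: it builds a topology that reads from an input topic, transforms each value, and writes to an output topic. The application id and topic names are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Topology: read from an input topic, transform each value, write out
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase())
              .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```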
Introduction to Confluent ksqlDB:
- Describe how Kafka Streams and ksqlDB relate
- Explore the ksqlDB CLI
- Use ksqlDB to filter and transform data (examples below)
- Compare and contrast types of ksqlDB queries
- Leverage ksqlDB to perform time-based stream operations
- Write a ksqlDB query that relates data between two streams or a stream and a table
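Since ksqlDB is queried in its own SQL dialect rather than Java, the examples below are ksqlDB statements. The stream, column, and topic names are hypothetical; the three statements declare a stream over a topic, filter and transform it into a new stream, and run a time-based (tumbling window) aggregation.

```sql
-- Declare a stream over an existing topic (topic and column names are placeholders)
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR, viewtime BIGINT)
  WITH (KAFKA_TOPIC = 'pageviews', VALUE_FORMAT = 'JSON');

-- Filter and transform into a new stream (a persistent query)
CREATE STREAM home_views AS
  SELECT user_id, UCASE(page) AS page
  FROM pageviews
  WHERE page = 'home'
  EMIT CHANGES;

-- Time-based operation: count views per user over a 1-hour tumbling window
SELECT user_id, COUNT(*) AS views
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 HOUR)
  GROUP BY user_id
  EMIT CHANGES;
```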
Kafka Connect:
- List some of the components of Kafka Connect and describe how they relate
- Set configurations for components of Kafka Connect
- Describe Connect integration and how data flows between applications and Kafka
- Explore some use cases where Kafka Connect makes development efficient
- Use Kafka Connect in conjunction with other tools to process data in motion efficiently
- Create a connector and import data from a database to a Kafka cluster (a sample configuration follows)
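As one sketch of the final objective, here is a hypothetical connector configuration for Confluent's JDBC source connector, which imports rows from a database table into Kafka; the connector name, connection details, and column name are all placeholders. Configurations like this are typically submitted as JSON to the Connect REST API.

```json
{
  "name": "inventory-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://localhost:5432/inventory",
    "connection.user": "connect",
    "connection.password": "secret",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "db-",
    "tasks.max": "1"
  }
}
```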
Design Decisions and Considerations:
- Delve into how compaction affects consumer offsets
- Explore how consumers work with offsets in scenarios outside of normal processing behavior, and understand how to manipulate offsets to deal with anomalies
- Evaluate decisions about consumer and partition counts and how they relate
- Address decisions that arise from default key-based partitioning and consider alternative partitioning strategies
- Configure producers to deliver messages without duplicates and with ordering guarantees (see the sketch below)
- List ways to manage large message sizes
- Describe how to work with messages in transactions and how Kafka enables transactions
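The producer-side objectives above can be illustrated in Java with idempotence and transactions: enable.idempotence provides duplicate-free, ordered delivery per partition, and a transactional.id lets writes to multiple topics commit or abort atomically. Topic names and the transactional id are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence gives duplicate-free, ordered delivery per partition
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // A transactional.id enables atomic writes across topics and partitions
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "orders-tx-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("orders", "o-1", "created"));
                producer.send(new ProducerRecord<>("payments", "o-1", "pending"));
                producer.commitTransaction(); // both records commit atomically
            } catch (Exception e) {
                producer.abortTransaction(); // neither record becomes visible
                throw e;
            }
        }
    }
}
```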
Robust Development:
- Compare and contrast error-handling options with Kafka Connect, including the dead letter queue
- Distinguish between various categories of testing (a unit-test sketch follows)
- List considerations for stress and load testing a Kafka system
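For the unit-testing end of the spectrum, kafka-clients ships a MockProducer that records sends in memory, so producer logic can be tested without a broker. This sketch uses a plain main method for brevity where a real project would use a test framework such as JUnit; the topic and values are placeholders.

```java
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerUnitTest {
    public static void main(String[] args) {
        // MockProducer captures sends in memory, so no broker is needed
        MockProducer<String, String> producer = new MockProducer<>(
            true, new StringSerializer(), new StringSerializer());

        producer.send(new ProducerRecord<>("demo-topic", "k", "v"));

        // Assert on what the application would have produced
        if (producer.history().size() != 1) {
            throw new AssertionError("expected exactly one record");
        }
        System.out.println("produced: " + producer.history().get(0).value());
    }
}
```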