Paris Apache Kafka Meetup
Florian HUSSONNOIS
Zenika
@fhussonnois
Async, Sync, Batch, Partitioner and Retries
Properties config = new Properties();
config.put("bootstrap.servers", "localhost:9092");
config.put("key.serializer", StringSerializer.class.getName());
config.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<String, String> producer = new KafkaProducer<>(config);
ProducerRecord<String, String> record = new ProducerRecord<>("my_topic", "my_key", "my_value");
producer.send(record);
producer.close();
The call to send() is asynchronous and returns immediately
The message is added to a buffer before being sent
//...
config.put("batch.size", 16384); // maximum size of a batch, in bytes
config.put("linger.ms", 1);      // delay between each message transmission
//...
List<ProducerRecord<String, String>> batchRecords = new ArrayList<>();
//...
for (ProducerRecord<String, String> record : batchRecords)
    producer.send(record);
producer.flush(); // forces sending of buffered messages and blocks until completion
producer.close();
Future<RecordMetadata> future = producer.send(record);
RecordMetadata metadata = future.get(); // BLOCK
LOG.info("message sent to topic {}, partition {}, offset {}",
metadata.topic(),
metadata.partition(),
metadata.offset());
ProducerRecord<String, String> record = new ProducerRecord<>("my_topic", "my_key", "my_value");
producer.send(record, (metadata, e) -> {
    if (e == null)
        LOG.info("Message sent to topic {}, partition {}, offset {}",
            metadata.topic(),
            metadata.partition(),
            metadata.offset());
    else
        LOG.error("Damn it!", e);
});
Configuration
config.put("partitioner.class", DefaultPartitioner.class.getName());
Implement a Partitioner
public interface Partitioner {
int partition(String topic,
Object key, byte[] keyBytes,
Object value, byte[] valueBytes, Cluster cluster);
}
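A custom partitioner typically routes each record from a hash of its key. Here is a minimal, standalone sketch of that routing logic only: a real implementation would implement `org.apache.kafka.clients.producer.Partitioner` and read the partition count from the `Cluster` argument, whereas here the count is a plain parameter so the code runs without a broker.

```java
import java.util.Arrays;

public class PartitionRouting {

    // Sketch of what a Partitioner#partition() body might compute.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        if (keyBytes == null) {
            // No key: a real partitioner would typically round-robin instead
            return 0;
        }
        // Non-negative hash of the key, mapped onto the available partitions
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Same key always lands on the same partition, in [0, numPartitions)
        int p = PartitionRouting.partitionFor("my_key".getBytes(), 3);
        System.out.println(p);
    }
}
```

The key point is determinism: records sharing a key always map to the same partition, which is what preserves per-key ordering.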
Specify the target partition directly
new ProducerRecord("my_topic", 0, "my_key", "my_value");
Acknowledgments
config.put("acks", "all"); // slower: messages replicated by all in-sync replicas (ISR)
The producer can automatically replay failed messages
config.put("retries", "0"); // disabled
/!\ May produce duplicates (at-least-once)
/!\ May change the publication order of messages
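Putting these delivery settings together, a sketch of a producer configuration that trades latency for safety; the values are illustrative, not recommendations. Limiting in-flight requests per connection is the usual way to keep retries from reordering messages:

```java
import java.util.Properties;

public class ReliableProducerConfig {

    // Sketch: delivery-safety settings for the producer.
    public static Properties build() {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("acks", "all");   // slowest: wait until all in-sync replicas have the message
        config.put("retries", "3");  // replay failures automatically; may duplicate (at-least-once)
        // With retries > 0, a single in-flight request keeps publication order stable
        config.put("max.in.flight.requests.per.connection", "1");
        return config;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks")); // prints "all"
    }
}
```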
Event Loop, Polling Model, Offsets and Group Management
Properties config = new Properties();
config.put("bootstrap.servers", "localhost:9092");
config.put("group.id", "my_group"); // required when using subscribe()
config.put("key.deserializer", StringDeserializer.class.getName());
config.put("value.deserializer", StringDeserializer.class.getName());
KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(config);
consumer.subscribe(Arrays.asList("topic1", "topic2"));
while (true) {
    ConsumerRecords<Object, Object> records = consumer.poll(1000);
    records.forEach(record ->
        LOG.info("key={}, value={}", record.key(), record.value()));
}
Event Loop, Polling Model
Properties config = new Properties();
config.put("bootstrap.servers", "localhost:9092");
config.put("enable.auto.commit", false); // disables auto-commit
config.put("auto.commit.interval.ms", 100); // only used when auto-commit is enabled
KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(config);
consumer.subscribe(Arrays.asList("topic1", "topic2"));
while (true) {
    ConsumerRecords<Object, Object> records = consumer.poll(1000);
    records.forEach(record ->
        LOG.info("key={}, value={}", record.key(), record.value()));
    consumer.commitAsync();
}
while (true) {
    ConsumerRecords<Object, Object> records = consumer.poll(1000);
    consumer.commitSync(); // commit offsets before processing: at-most-once semantics
    records.forEach(record ->
        LOG.info("key={}, value={}", record.key(), record.value()));
}
Paris Kafka Meetup - How to develop with Kafka
Properties config = new Properties();
config.put("bootstrap.servers", "localhost:9092");
config.put("group.id", "my_group");
KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(config);
consumer.subscribe(Arrays.asList("topic1", "topic2"));
KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(config);
consumer.subscribe(Arrays.asList("topic1", "topic2"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // do some stuff
    }
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // do some stuff
    }
});
Each consumer in a group must notify the coordinator
This can only happen during a call to poll, commit, etc.
A rebalance is triggered when a consumer joins or leaves the group
The rebalance operation is affected by the settings:
• session.timeout.ms (30 seconds by default)
• heartbeat.interval.ms
Processing a message for too long can trigger an unwanted rebalance
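A sketch of the consumer settings that govern this behavior, following the common guideline of keeping heartbeat.interval.ms well below session.timeout.ms; the values are illustrative:

```java
import java.util.Properties;

public class RebalanceTuning {

    // Sketch: timeouts that govern group rebalancing.
    public static Properties build() {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("group.id", "my_group");
        // Coordinator evicts a consumer that stays silent longer than this
        config.put("session.timeout.ms", "30000");
        // Heartbeat frequency; keep well below session.timeout.ms
        config.put("heartbeat.interval.ms", "10000");
        return config;
    }

    public static void main(String[] args) {
        Properties p = build();
        int session = Integer.parseInt(p.getProperty("session.timeout.ms"));
        int heartbeat = Integer.parseInt(p.getProperty("heartbeat.interval.ms"));
        System.out.println(heartbeat < session); // prints "true"
    }
}
```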
ConsumerRecords<Object, Object> records = consumer.poll(1000);
if (!records.isEmpty()) {
    consumer.pause(consumer.assignment().toArray(new TopicPartition[0]));
    // Hand processing off to an ExecutorService so the consumer thread stays free
    Future<Boolean> future = executorService.submit(() -> {
        records.forEach(record -> LOG.info("key={}, value={}", record.key(), record.value()));
        return true;
    });
    boolean isCompleted = false;
    while (!isCompleted) {
        try {
            isCompleted = future.get(5, TimeUnit.SECONDS); // wait before polling again
        } catch (TimeoutException e) {
            consumer.poll(0); // heartbeat: paused partitions return no records
        } catch (CancellationException | ExecutionException | InterruptedException e) {
            break;
        }
    }
    consumer.resume(consumer.assignment().toArray(new TopicPartition[0]));
    consumer.commitSync();
}
Seek to a specific offset
consumer.seek(new TopicPartition("my_topic", 0), 42);
consumer.seekToEnd(new TopicPartition("my_topic", 0));
consumer.seekToBeginning(new TopicPartition("my_topic", 0));
Manual partition assignment
consumer.assign(Arrays.asList(new TopicPartition("my_topic", 0)));
Getting the metrics
consumer.metrics();
We're hiring! jobs@zenika.com
@ZenikaIT
Next Meetup on
