This document describes how Lyft uses DynamoDB change logs to ingest real-time data into Elasticsearch. Flink jobs stream changes from DynamoDB Streams into Kafka, and a second set of Flink jobs consumes from Kafka and writes to Elasticsearch. It covers operational challenges such as handling 429 (Too Many Requests) responses from Elasticsearch and restricting access with VPC security groups. Finally, it explains how the pipeline was designed to allow seamless Elasticsearch upgrades without downtime: during a migration, changes are buffered in Kafka and replayed into the new cluster.
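
As a rough illustration of the second leg of that pipeline, a minimal Flink job that reads change events from Kafka and bulk-indexes them into Elasticsearch might look like the sketch below. This is not Lyft's code: the topic, consumer group, index, and host names are placeholders, and the use of Flink's bundled Elasticsearch 7 sink with exponential bulk-flush backoff (one common way to absorb transient 429 responses) is an assumption.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkBuilder;
import org.apache.flink.connector.elasticsearch.sink.FlushBackoffType;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

import java.util.Map;

public class KafkaToElasticsearchJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical Kafka topic carrying DynamoDB change events as JSON strings.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")            // placeholder broker address
                .setTopics("dynamodb-changes")                // placeholder topic name
                .setGroupId("es-indexer")                     // placeholder consumer group
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> changes =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "dynamodb-change-topic");

        // Bulk-writing sink with exponential backoff, so transient 429 (Too Many Requests)
        // responses from Elasticsearch are retried instead of failing the job outright.
        changes.sinkTo(
                new Elasticsearch7SinkBuilder<String>()
                        .setHosts(new HttpHost("elasticsearch", 9200, "http"))  // placeholder host
                        .setBulkFlushMaxActions(1000)
                        .setBulkFlushBackoffStrategy(FlushBackoffType.EXPONENTIAL, 5, 1000)
                        .setEmitter((element, context, indexer) ->
                                indexer.add(Requests.indexRequest()
                                        .index("rides")                         // placeholder index
                                        .source(Map.of("event", element))))
                        .build());

        env.execute("kafka-to-elasticsearch");
    }
}
```

A job of this shape also fits the upgrade story described above: because the change log is retained in Kafka, the consumer can be pointed at a new Elasticsearch cluster and replay buffered events, keeping the migration free of downtime.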