
Streaming Audio: Apache Kafka & Real-Time Data

English, Technology, 1 season, 264 episodes, 6 days, 20 hours, 8 minutes
About
Streaming Audio is a podcast from Confluent, the team that originally built Apache Kafka. Host Tim Berglund (Senior Director of Developer Advocacy, Confluent) and guests unpack a variety of topics surrounding Apache Kafka, event stream processing, and real-time data. The show covers frequently asked questions and comments about the Confluent and Kafka ecosystems—from Kafka connectors to distributed systems, data integration, Kafka deployment, and managed Apache Kafka as a service—on Twitter, YouTube, and elsewhere. Apache®, Apache Kafka, Kafka, and the Kafka logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by The Apache Software Foundation is implied by the use of these marks.

Apache Kafka 3.5 - Kafka Core, Connect, Streams, & Client Updates

Apache Kafka® 3.5 is here with the ability to preview migrations from ZooKeeper clusters to KRaft mode. Follow along as Danica Fine highlights key release updates.

Kafka Core:
KIP-833 provides an updated timeline for KRaft.
KIP-866 is now in preview and allows migration from an existing ZooKeeper cluster to KRaft mode.
KIP-900 introduces a way to bootstrap the KRaft controllers with SCRAM credentials.
KIP-903 prevents a data loss scenario by preventing replicas with stale broker epochs from joining the ISR list.
KIP-915 streamlines the process of downgrading Kafka's transaction and group coordinators by introducing tagged fields.

Kafka Connect:
KIP-710 provides the option to use a REST API for internal server communication that can be enabled by setting `dedicated.mode.enable.internal.rest` to true.
KIP-875 offers support for native offset management in Kafka Connect. Connect cluster administrators can now read offsets for both source and sink connectors. This KIP also adds a new STOPPED state for connectors, enabling users to shut down connectors and maintain connector configurations without utilizing resources.
KIP-894 makes the `IncrementalAlterConfigs` API available for use in MirrorMaker 2 (MM2), adding a new `use.incremental.alter.configs` configuration which takes the values “requested,” “never,” and “required.”
KIP-911 adds a new source tag for metrics generated by the `MirrorSourceConnector` to help monitor mirroring deployments.

Kafka Streams:
KIP-399 improves Kafka Streams' error-handling capabilities by addressing serialization errors that occur before message production and extending the interface for custom error handling.
KIP-889 introduces versioned state stores in Kafka Streams for temporal join semantics in stream-to-table joins.
KIP-904 simplifies table aggregation in Kafka Streams by proposing a change in serialization format to enable one-step aggregation and reduce noise from events with old and new keys/values.
KIP-914 modifies how versioned state stores are used in Kafka Streams. Versioned state stores may impact different DSL processors in varying ways; see the documentation for details.

Kafka Client:
KIP-881 is now complete and introduces new client-side assignor logic for rack-aware consumer balancing for Kafka consumers.
KIP-887 adds the `EnvVarConfigProvider` implementation to Kafka so custom configurations stored in environment variables can be injected into the system by providing the map returned by `System.getenv()`.
KIP-641 introduces the `RecordReader` interface to Kafka's clients module, replacing the deprecated MessageReader Scala trait.

EPISODE LINKS
See release notes for Apache Kafka 3.5
Read the blog to learn more
Download and get started with Apache Kafka 3.5
Watch the video version of this podcast
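To make the KIP-887 item a bit more concrete, here is a minimal, hypothetical Java sketch of a producer pulling a sensitive setting out of an environment variable through the `EnvVarConfigProvider`. The bootstrap address and the environment variable name are made up for the example, and it assumes the client resolves `${env:...}` placeholders once a config provider is registered.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class EnvVarConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Register the environment-variable config provider added by KIP-887.
        props.put("config.providers", "env");
        props.put("config.providers.env.class",
                  "org.apache.kafka.common.config.provider.EnvVarConfigProvider");

        // Resolve a (hypothetical) secret from the environment at configuration time
        // instead of hard-coding it in a properties file.
        props.put("sasl.jaas.config", "${env:KAFKA_SASL_JAAS_CONFIG}");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... produce records as usual ...
        }
    }
}
```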
6/15/2023 · 11 minutes, 25 seconds

A Special Announcement from Streaming Audio

After recording 64 episodes and featuring 58 amazing guests, the Streaming Audio podcast series has amassed over 130,000 plays on YouTube in the last year. We're extremely proud of these achievements and feel that it's time to take a well-deserved break. Streaming Audio will be taking a vacation! We want to express our gratitude to you, our valued listeners, for spending 10,000 hours with us on this incredible journey.

Rest assured, we will be back with more episodes! In the meantime, feel free to revisit some of our previous episodes. For instance, you can listen to Anna McDonald share her stories about the worst Apache Kafka® bugs she’s ever seen, or listen to Jun Rao offer his expert advice on running Kafka in production. And who could forget the charming backstory behind Mitch Seymour's Kafka storybook, Gently Down the Stream?

These memorable episodes brought us joy, and we're thrilled to have shared them with you. As we reflect on our accomplishments with pride, we also look forward to an exciting future. Until we meet again, happy listening!

EPISODE LINKS
Top 6 Worst Apache Kafka JIRA Bugs
Running Apache Kafka in Production
Learn How Stream-Processing Works The Simplest Way Possible
Watch the video version of this podcast
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
4/13/2023 · 1 minute, 18 seconds

How to use Data Contracts for Long-Term Schema Management

Have you ever struggled with managing data long term, especially as the schema changes over time? In order to manage and leverage data across an organization, it’s essential to have well-defined guidelines and standards in place around data quality, enforcement, and data transfer. To get started, Abraham Leal (Customer Success Technical Architect, Confluent) suggests that organizations associate their Apache Kafka® data with a data contract (schema). A data contract is an agreement between a service provider and data consumers. It defines the management and intended usage of data within an organization. In this episode, Abraham talks to Kris about how to use data contracts and schema enforcement to ensure long-term data management.

When an organization sends and stores critical and valuable data in Kafka, more often than not it would like to leverage that data in various valuable ways for multiple business units. Kafka is particularly suited for this use case, but it can be problematic later on if the governance rules aren’t established up front.

With Schema Registry, evolution is easy due to its robust compatibility guarantees. When managing data pipelines, you can also use GitOps automation features for an extra control layer. It allows you to be creative with topic versioning, upcasting/downcasting the data collected, and adding quality assurance steps at the end of each run to ensure your project remains reliable.

Abraham explains that Protobuf and Avro are the best formats to use, rather than XML or JSON, because they are built to handle schema evolution. In addition, they have much lower per-record overhead, so you can save bandwidth and data storage costs by adopting them.

There’s so much more to consider, but if you are thinking about implementing or integrating with your data quality team, Abraham suggests that you use Schema Registry heavily from the beginning.

If you have more questions, Kris invites you to join the conversation. You can also watch the KOR Financial Current talk Abraham mentions or take Danica Fine’s free course on how to use Schema Registry on Confluent Developer.

EPISODE LINKS
OS project
KOR Financial Current Talk
The Key Concepts of Schema Registry
Schema Evolution and Compatibility
Schema Registry Made Simple by Confluent Cloud ft. Magesh Nandakumar
Kris Jenkins’ Twitter
Watch the video version of this podcast
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
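As a rough illustration of the data-contract idea, here is a minimal, hypothetical Java sketch of a producer that publishes records against an Avro schema registered in Schema Registry. The topic name, schema, and URLs are invented for the example; the point is that the serializer checks records against the registered contract before they ever reach Kafka.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DataContractProducer {
    // A hypothetical "data contract": the schema is the agreement consumers rely on.
    private static final String ORDER_SCHEMA = """
        {"type":"record","name":"Order","namespace":"example",
         "fields":[{"name":"orderId","type":"string"},
                   {"name":"amount","type":"double"}]}""";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's Avro serializer registers/validates the schema against Schema Registry,
        // which is what enforces the contract and its compatibility rules.
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        Schema schema = new Schema.Parser().parse(ORDER_SCHEMA);
        GenericRecord order = new GenericData.Record(schema);
        order.put("orderId", "o-123");
        order.put("amount", 42.0);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "o-123", order));
        }
    }
}
```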
3/21/2023 · 57 minutes, 28 seconds

How to use Python with Apache Kafka

Can you use Apache Kafka® and Python together? What’s the current state of Python support? And what are the best options to get started? In this episode, Dave Klein joins Kris to talk about all things Kafka and Python: the libraries, the tools, and the pros & cons. He also talks about the new course he just launched to support Python programmers entering the event-streaming world.

Dave has been an active member of the Kafka community for many years and noticed that there were a lot of Kafka resources for Java but few for Python. So he decided to create a course to help people get started using Python and Kafka together.

Historically, Java has had the most documentation, and people have often missed how good the Python support is for Kafka users. Python and Kafka are an ideal fit for machine learning applications and data engineering in general, and there are plenty of use cases for building streaming and machine learning pipelines. In fact, someone conducted a survey to find out which languages were most popular in the Kafka community, and Python came in second after Java. That’s how Dave got the idea to create a course for newbies.

In this course, Dave combines video lectures with code-heavy exercises to give developers a feel for what the code looks like and how to structure it, from the overall shape of the code to the structure of the classes and functions, so you can get hands-on practice using the library. He also covers building a producer and a consumer and using the admin client. And, of course, there is a module that covers working with the schemas supported by the Kafka library.

Dave explains that Python opens up a world of opportunity and is ripe for expansion. So if you are ready to dive in, head over to developer.confluent.io to learn more about Dave’s course.

EPISODE LINKS
Blog: Getting Started with Python for Apache Kafka
Course: Introduction to Apache Kafka for Python Developers
Step-by-step guide: Building a Python client application for Kafka
Coding in Motion
Building and Designing Events and Event Streams with Apache Kafka
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
3/14/2023 · 31 minutes, 57 seconds

Next-Gen Data Modeling, Integrity, and Governance with YODA

In this episode, Kris interviews Doron Porat, Director of Infrastructure at Yotpo, and Liran Yogev, Director of Engineering at ZipRecruiter (formerly at Yotpo), about their experiences and strategies in dealing with data modeling at scale.

Yotpo has a vast and active data lake, comprising thousands of datasets that are processed by different engines, primarily Apache Spark™. They wanted to provide users with self-service tools for generating and utilizing data with maximum flexibility, but encountered difficulties, including poor standardization, low data reusability, limited data lineage, and unreliable datasets.

The team realized that Yotpo's modeling layer, which defines the structure and relationships of the data, needed to be separated from the execution layer, which defines and processes operations on the data.

This separation would give programmers better visibility into data pipelines across all execution engines, storage methods, and formats, as well as more governance control for exploration and automation.

To address these issues, they developed YODA, an internal tool that combines an excellent developer experience, dbt, Databricks, Airflow, Looker, and more, with a strong CI/CD and orchestration layer.

Yotpo is a B2B, SaaS e-commerce marketing platform that provides businesses with the necessary tools for accurate customer analytics, remarketing, support messaging, and more.

ZipRecruiter is a job site that utilizes AI matching to help businesses find the right candidates for their open roles.

EPISODE LINKS
Current 2022 Talk: Next Gen Data Modeling in the Open Data Platform
Data Mesh 101
Data Mesh Architecture: A Modern Distributed Data Model
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
3/7/2023 · 55 minutes, 55 seconds

Migrate Your Kafka Cluster with Minimal Downtime

Migrating Apache Kafka® clusters can be challenging, especially when moving large amounts of data while minimizing downtime. Michael Dunn (Solutions Architect, Confluent) has worked in the data space for many years, designing and managing systems to support high-volume applications. He has helped many organizations strategize, design, and implement successful Kafka cluster migrations between different environments. In this episode, Michael shares some tips about Kafka cluster migration with Kris, including the pros and cons of the different tools he recommends.

Michael explains that there are many reasons why companies migrate their Kafka clusters. For example, they may want to modernize their platforms, move to a self-hosted cloud server, or consolidate clusters. He tells Kris that creating a plan and selecting the right tool before getting started is critical for reducing downtime and minimizing migration risks.

The good news is that a few tools can facilitate moving large amounts of data, topics, schemas, applications, connectors, and everything else from one Apache Kafka cluster to another.

Kafka MirrorMaker/MirrorMaker 2 (MM2) is a stand-alone tool for copying data between two Kafka clusters. It uses source and sink connectors to replicate topics from a source cluster into the destination cluster.

Confluent Replicator allows you to replicate data from one Kafka cluster to another. Replicator is similar to MM2, but the difference is that it’s been battle-tested.

Cluster Linking is a powerful tool offered by Confluent that allows you to mirror topics from an Apache Kafka 2.4/Confluent Platform 5.4 source cluster to a Confluent Platform 7+ cluster in a read-only state, and is available as a fully managed service in Confluent Cloud.

At the end of the day, Michael stresses that, coupled with a well-thought-out strategy and the right tool, Kafka cluster migration can be relatively painless. Following his advice, you should be able to keep your system healthy and stable before and after the migration is complete.

EPISODE LINKS
MirrorMaker 2
Replicator
Cluster Linking
Schema Migration
Multi-Cluster Apache Kafka with Cluster Linking
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
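Whichever of these tools you choose, a quick sanity check that the destination cluster has caught up is useful before cutting applications over. Below is a minimal, hypothetical Java sketch that uses the Admin API to compare topic names across the two clusters; the bootstrap addresses are placeholders, and a real check would also compare partition counts, configs, and consumer offsets.

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class MigrationTopicCheck {
    private static Set<String> topicNames(String bootstrap) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        try (Admin admin = Admin.create(props)) {
            return admin.listTopics().names().get();
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical bootstrap addresses for the source and destination clusters.
        Set<String> source = topicNames("source-cluster:9092");
        Set<String> destination = topicNames("destination-cluster:9092");

        // Report any topic that has not yet appeared on the destination cluster.
        source.stream()
              .filter(topic -> !destination.contains(topic))
              .forEach(topic -> System.out.println("Not yet mirrored: " + topic));
    }
}
```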
3/1/2023 · 1 hour, 1 minute, 30 seconds

Real-Time Data Transformation and Analytics with dbt Labs

dbt is known as being part of the Modern Data Stack for ELT processes. Being in the MDS, dbt Labs believes in having the best of breed for every part of the stack. Oftentimes folks are using an EL tool like Fivetran to pull data from the database into the warehouse, then using dbt to manage the transformations in the warehouse. Analysts can then build dashboards on top of that data, or execute tests.

It’s possible for an analyst to adapt this process for use with a microservice application using Apache Kafka® and the same method to pull batch data out of each and every database; however, in this episode, Amy Chen (Partner Engineering Manager, dbt Labs) tells Kris about a better way forward for analysts willing to adopt the streaming mindset: reusable pipelines using dbt models that immediately pull events into the warehouse and materialize them as materialized views by default.

dbt Labs is the company that makes and maintains dbt. dbt Core is the open-source data transformation framework that allows data teams to operate with software engineering’s best practices. dbt Cloud is the fastest and most reliable way to deploy dbt.

Inside the world of event streaming, there is a push to expand data access beyond the programmers writing the code, and towards everyone involved in the business. Over at dbt Labs, they’re attempting something of the reverse: to get data analysts to adopt the best practices of software engineers, and more recently, of streaming programmers. They’re improving the process of building data pipelines while empowering businesses to bring more contributors into the analytics process, with an easy-to-deploy, easy-to-maintain platform. It offers version control to analysts who traditionally don’t have access to Git, along with the ability to easily automate testing, all in the same place.

In this episode, Kris and Amy explore:
How to revolutionize testing for analysts with two of dbt’s core functionalities
What streaming in a batch-based analytics world should look like
What can be done to improve workflows
How to democratize access to data for everyone in the business

EPISODE LINKS
Learn more about dbt Labs
An Analytics Engineer’s Guide to Streaming
Panel discussion: If Streaming Is the Answer, Why Are We Still Doing Batch?
All Current 2022 sessions and slides
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
2/22/2023 · 43 minutes, 41 seconds

What is the Future of Streaming Data?

What’s the next big thing in the future of streaming data? In this episode, Greg DeMichillie (VP of Product and Solutions Marketing, Confluent) talks to Kris about the future of stream processing in environments where the value of data lies in an organization’s ability to intercept and interpret it.

Greg explains that organizations typically focus on the infrastructure containers themselves, and not on the thousands of data connections that form within. When they finally realize that they don't have a way to manage the complexity of these connections, a new problem arises: how do they approach managing such complexity? That’s where Confluent and Apache Kafka® come into play: they offer a consistent way to organize this seemingly endless web of data so organizations don't have to face the daunting task of figuring out how to connect their shopping portals or jump through hoops trying different ETL tools on various systems.

As more companies seek ways to manage this data, they are asking some basic questions:
How to do it?
Do best practices exist?
How can we get help?

The next question for companies that have already adopted Kafka is a bit more complex: "What about my partners?” For example, companies with inventory management systems use supply chain systems to track product creation and shipping. As a result, they need to decide which emails to update, whether they need to write custom REST APIs to sit in front of Kafka topics, etc. Advanced use cases like this raise additional questions about data governance, security, data policy, and PII, forcing companies to think differently about data.

Greg predicts this is the next big frontier as more companies adopt Kafka internally. And because they will have to think less about where the data is stored and more about how data moves, they will have to solve problems to make managing all that data easier. If you're an enthusiast of real-time data streaming, Greg invites you to attend Kafka Summit (London) in May and Current (Austin, TX) for a deeper dive into the world of Apache Kafka-related topics now and beyond.

EPISODE LINKS
What’s Ahead of the Future of Data Streaming?
If Streaming Is the Answer, Why Are We Still Doing Batch?
All Current 2022 sessions and slides
Kafka Summit London 2023
Current 2023
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
2/15/2023 · 41 minutes, 29 seconds

What can Apache Kafka Developers learn from Online Gaming?

What can online gaming teach us about making large-scale event management more collaborative in real-time? Ben Gamble (Developer Relations Manager, Aiven) has come to the world of real-time event streaming from an unusual source: the video games industry. And if you stop to think about it, modern online games are complex, distributed real-time data systems with decades of innovative techniques to teach us.

In this episode, Ben talks with Kris about integrating gaming concepts with Apache Kafka®. Using Kafka’s state management and stream processing, Ben has built systems that can handle real-time event processing at a massive scale, including interesting approaches to conflict resolution and collaboration.

Building latency into a system is one way to mask data processing time. Ben says that you can efficiently hide latency issues and prioritize performance improvements by setting an initial target and then optimizing from there. If you measure before optimizing, you can add an extra layer to manage user expectations better. Tricks like adding a visual progress bar give the appearance of progress but actually hide latency and improve the overall user experience.

To effectively handle challenging activities, like resolving conflicts and atomic edits, Ben suggests “slicing” (or nano batching) to break down tasks into small, related chunks. Slicing allows each task to be evaluated separately, thus producing timely outcomes that resolve potential background conflicts without the user knowing.

Ben also explains how he uses pooling to make collaboration seamless. Pooling is a process that links open requests with potential matches. Similar to booking seats on an airplane, seats are assigned when requests are made. As these types of connections are handled through a Kafka event stream, the initial open requests are eventually fulfilled when seats become available.

According to Ben, real-world tools that facilitate collaboration (such as Google Docs and Slack) work similarly. Just like multi-player gaming systems, multiple users can comment or chat in real-time, and users perceive instant responses because of the techniques ported over from the gaming world.

As Ben sees it, the proliferation of these types of concepts across disciplines will also benefit a more significant number of collaborative systems. Despite being long established for gamers, these patterns can be implemented in more business applications to improve the user experience significantly.

EPISODE LINKS
Going Multiplayer With Kafka—Current 2022
Building a Dependable Real-Time Betting App with Confluent Cloud and Ably
Event Streaming Patterns
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
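To give a flavor of the “slicing” idea described above, here is a toy, entirely hypothetical Java sketch of a nano-batcher that groups incoming edits into small slices and hands each slice off as a unit (for example, to a Kafka producer). It illustrates the pattern only; it is not code from Ben's systems.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** A toy "slicer" that groups incoming edits into small batches (nano batches). */
public class Slicer<E> {
    private final int sliceSize;
    private final Consumer<List<E>> sink;   // e.g., a method that produces the slice to Kafka
    private final List<E> buffer = new ArrayList<>();

    public Slicer(int sliceSize, Consumer<List<E>> sink) {
        this.sliceSize = sliceSize;
        this.sink = sink;
    }

    public synchronized void add(E event) {
        buffer.add(event);
        if (buffer.size() >= sliceSize) {
            flush();
        }
    }

    public synchronized void flush() {
        if (!buffer.isEmpty()) {
            sink.accept(List.copyOf(buffer)); // each slice is evaluated/resolved on its own
            buffer.clear();
        }
    }
}
```

In practice you would also flush on a timer so a quiet stream never holds a slice back indefinitely.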
2/8/2023 · 55 minutes, 32 seconds

Apache Kafka 3.4 - New Features & Improvements

Apache Kafka® 3.4 is released! In this special episode, Danica Fine (Senior Developer Advocate, Confluent) shares highlights of the Apache Kafka 3.4 release. This release introduces new KIPs in Kafka Core, Kafka Streams, and Kafka Connect.

In Kafka Core:
KIP-792 expands the metadata each group member passes to the group leader in its JoinGroup subscription to include the highest stable generation that consumer was a part of.
KIP-830 includes a new configuration setting that allows you to disable the JMX reporter for environments where it’s not being used.
KIP-854 introduces changes to clean up producer IDs more efficiently, to avoid excess memory usage. It introduces a new timeout parameter that affects the expiry of producer IDs and updates the old parameter to only affect the expiry of transaction IDs.
KIP-866 (early access) provides a bridge for migrating existing ZooKeeper clusters to new KRaft mode clusters, enabling the migration of existing metadata from ZooKeeper to KRaft.
KIP-876 adds a new property that defines the maximum amount of time that the server will wait to generate a snapshot; the default is 1 hour.
KIP-881, an extension of KIP-392, makes it so that consumers can now be rack-aware when it comes to partition assignments and consumer rebalancing.

In Kafka Streams:
KIP-770 updates some Kafka Streams configs and metrics related to the record cache size.
KIP-837 allows users to multicast result records to every partition of downstream sink topics and adds functionality for users to choose to drop result records without sending.

And finally, for Kafka Connect:
KIP-787 allows users to run MirrorMaker 2 with custom implementations for the Kafka resource manager and makes it easier to integrate with your ecosystem.

Tune in to learn more about the Apache Kafka 3.4 release!

EPISODE LINKS
See release notes for Apache Kafka 3.4
Read the blog to learn more
Download Apache Kafka 3.4 and get started
Watch the video version of this podcast
Join the Community
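As a small illustration of the KIP-881 item, here is a minimal, hypothetical Java consumer configuration that advertises its rack (or cloud availability zone) via `client.rack`. The bootstrap address, group ID, topic, and zone name are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RackAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "rack-aware-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // The consumer advertises which rack (or availability zone) it runs in;
        // with KIP-881, partition assignment can take this into account.
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "us-east-1a");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}
```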
2/7/2023 · 5 minutes, 13 seconds

How to use OpenTelemetry to Trace and Monitor Apache Kafka Systems

How can you use OpenTelemetry to gain insight into your Apache Kafka® event systems? Roman Kolesnev, Staff Customer Innovation Engineer at Confluent, is a member of the Customer Solutions & Innovation Division Labs team, working to build business-critical OpenTelemetry applications so companies can see what’s happening inside their data pipelines. In this episode, Roman joins Kris to discuss tracing and monitoring in distributed systems using OpenTelemetry. He talks about how monitoring each step of the process individually is critical to discovering potential delays or bottlenecks before they happen, including keeping track of timestamps, latency information, exceptions, and other data points that could help with troubleshooting.

Tracing each request and its journey to completion in Kafka gives companies access to invaluable data that provides insight into system performance and reliability. Furthermore, using this data allows engineers to quickly identify errors or anticipate potential issues before they become significant problems. With greater visibility comes better control over application health, all made possible by OpenTelemetry's unified APIs and services.

As described on the OpenTelemetry.io website, "OpenTelemetry is a Cloud Native Computing Foundation incubating project. Formed through a merger of the OpenTracing and OpenCensus projects." It provides a vendor-agnostic way for developers to instrument their applications across different platforms and programming languages while adhering to standard semantic conventions, so the traces/information can be streamed to compatible systems following similar specs.

By leveraging OpenTelemetry, organizations can ensure their applications and systems are secure and perform optimally. It will quickly become an essential tool for large-scale organizations that need to efficiently process massive amounts of real-time data. With its ability to scale independently, robust analytics capabilities, and powerful monitoring tools, OpenTelemetry is set to become the go-to platform for stream processing in the future.

Roman explains that the OpenTelemetry APIs for Kafka are still in development and not yet available as open source. The code is complete and tested but has never run in production. But if you want to learn more about the nuts and bolts, he invites you to connect with him on the Confluent Community Slack channel. You can also check out Monitoring Kafka without instrumentation with eBPF - Antón Rodríguez to learn more about a similar approach for domain monitoring.

EPISODE LINKS
OpenTelemetry Java instrumentation
OpenTelemetry Collector
Distributed Tracing for Kafka with OpenTelemetry
Monitoring Kafka without instrumentation with eBPF
Kris Jenkins' Twitter
Watch the video
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
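For a sense of what manual instrumentation looks like, here is a minimal, hypothetical Java sketch that wraps a Kafka produce call in an OpenTelemetry span using the public API. The attribute names roughly follow OpenTelemetry's messaging semantic conventions, a real setup would also propagate the trace context in record headers, and this is not Roman's implementation.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TracedProducer {
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("traced-producer");

    public void send(KafkaProducer<String, String> producer, String topic, String key, String value) {
        // One span per publish; a consumer-side span would continue the same trace
        // if the context is carried in the record headers.
        Span span = tracer.spanBuilder(topic + " publish")
                          .setSpanKind(SpanKind.PRODUCER)
                          .startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("messaging.system", "kafka");
            span.setAttribute("messaging.destination.name", topic);
            producer.send(new ProducerRecord<>(topic, key, value));
        } finally {
            span.end();
        }
    }
}
```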
2/1/2023 · 50 minutes, 1 second

What is Data Democratization and Why is it Important?

Data democratization allows everyone in an organization to have access to the data they need, and the necessary tools needed to use this data effectively. In short, data democratization enables better business decisions. In this episode, Rama Ryali, a Senior IT and Data Executive, chats with Kris Jenkins about the importance of data democratization in modern systems.

Rama explains that tech has unprecedented control over data and ignores basic business needs. Tech’s influence has largely gone unchecked and has led to a disconnect that often forces businesses to hire outside vendors for help turning their data into information they can use. In his role at RightData, Rama worked closely with Marketing, Sales, Customers, and Leadership to develop a no-code unified data platform that is accessible to everyone and fosters data democratization.

So what is data democracy anyway? Rama explains that data democratization is the process of making data more accessible and open to a wider audience in a unified, no-code UI. It involves making sure that data is available to people who need it, regardless of their technical expertise or background. This enables businesses to make data-driven decisions faster and reduces the costs associated with acquiring, processing, and storing information. In addition, by allowing more people access to data, organizations can better collaborate and access tools that allow them to gain valuable insights into their operations and gain a competitive edge in the marketplace.

In a perfect world, complicated tools supported by SQL, Excel, etc., with static views of data, will be replaced by a UI that anyone can use to analyze real-time streaming data. Kris coined a phrase, “data socialization,” which describes the way that these types of tools can enable human connections across all areas of the organization, not just tech.

Rama acknowledges that Excel, SQL, and other dev-heavy platforms will never go away, but the future of data democracy will allow businesses to unlock the maximum value of data through an iterative, democratic process where people talk about what the data is, what matters to other people, and how to transmit it in a way that makes sense.

EPISODE LINKS
RightData LinkedIn
The 5 W’s of Metadata by Rama Ryali
Real-Time Machine Learning and Smarter AI with Data Streaming
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
1/26/2023 · 47 minutes, 27 seconds

Git for Data: Managing Data like Code with lakeFS

Is it possible to manage and test data like code? lakeFS is an open-source data version control tool that transforms object storage into Git-like repositories, offering teams a way to use the same workflows for code and data. In this episode, Kris sits down with guest Adi Polak, VP of DevX at Treeverse, to discuss how lakeFS can be used to facilitate better management and testing of data.

At its core, lakeFS provides teams with better data management. A theoretical data engineer on a large team runs a script to delete some data, but a bug in the script accidentally deletes a lot more data than intended. Application engineers can check out the main branch, effectively erasing their mistakes, but without a tool like lakeFS, this data engineer would be in a lot of trouble.

Polak is quick to explain that lakeFS isn’t built on Git. The source code behind an application is usually a few dozen megabytes, while lakeFS is designed to handle petabytes of data; however, it does use Git-like semantics to create and access versions, so adoption is quick and simple.

Another big challenge that lakeFS helps teams tackle is reproducibility. Troubleshooting when and where a corruption in the data first appeared can be a tricky task for a data engineer when data is constantly updating. With lakeFS, engineers can refer to snapshots to see where the product was corrupted, and roll back to that exact state.

lakeFS also assists teams with reprocessing of historical data. With lakeFS, data can be reprocessed on an isolated branch before merging, to ensure the reprocessed data is exposed atomically. It also makes it easier to access the different versions of reprocessed data using any tag or a historical commit ID.

Tune in to hear more about the benefits of lakeFS.

EPISODE LINKS
Adi Polak's Twitter
lakeFS Git-for-data GitHub repo
What is a Merkle Tree?
If Streaming Is the Answer, Why Are We Still Doing Batch?
Current 2022 sessions and slides
Sign up for updates on Current 2023
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
1/19/2023 · 30 minutes, 42 seconds

Using Kafka-Leader-Election to Improve Scalability and Performance

How does leader election work in Apache Kafka®? For the past 2 ½ years, Adithya Chandra, Staff Software Engineer at Confluent, has been working on Kafka scalability and performance, specifically partition leader election. In this episode, he gives Kris Jenkins a deep dive into the power of leader election in Kafka replication, why we need it, how it works, what can go wrong, and how it's being improved.

Adithya explains that you can configure a certain number of replicas to be distributed across Kafka brokers and then set one of them as the elected leader; the others become followers. This leader-based model proves efficient because clients only have to write to the leader, which handles the replication process internally.

But what happens when a broker goes offline, when a replica reassignment occurs, or when a broker shuts down? Adithya explains that when these triggers occur, one of the followers becomes the elected leader, and all the other replicas take their cue from the new leader. This failover reassignment ensures that messages are replicated effectively and efficiently with multiple copies across different brokers.

Adithya explains how you can select a broker as the preferred leader. The preferred leader then becomes the new leader in failure events. This reduces latency and ensures messages consistently write to the same broker for easier tracking and debugging.

Leader failover cannot cover all failures, Adithya says. If a broker can’t be reached externally but can talk to other brokers in the cluster, leader failover won’t be triggered. If a broker experiences transient disk or network issues, the leader election process might fail, and the broker will not be elected as a leader. In both cases, manual intervention is required.

Leadership priority is an important feature of Confluent Cloud that allows you to prioritize certain brokers over others and specify which broker is most likely to become the leader in case of a failover. This way, we can prioritize certain brokers to ensure that the most reliable broker handles more important and sensitive replication tasks. Additionally, this feature ensures that replication remains consistent and available even in an unexpected failure event.

Improvements to this component of Kafka will enable it to be applied to a wide variety of scenarios. On-call engineers can use it to mitigate single-broker performance issues while debugging. Network and storage health solutions can use it to prioritize brokers. Adithya explains that preferred leader election and leadership failover ensure data is available and consistent during failure scenarios so that Kafka replication can run smoothly and efficiently.

EPISODE LINKS
Data Plane: Replication Protocol
Optimizing Cloud-Native Apache Kafka Performance ft. Alok Nikhil and Adithya Chandra
Watch the video
Kris Jenkins’ Twitter
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
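In open-source Kafka, the preferred-leader mechanism Adithya describes can be triggered on demand through the Admin API (the Confluent Cloud leadership-priority feature mentioned above is a separate, managed capability). Here is a minimal, hypothetical sketch with a placeholder bootstrap address and topic name:

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class PreferredLeaderElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Ask the controller to move leadership for this partition back to the
            // preferred replica (the first replica in the partition's replica list).
            TopicPartition partition = new TopicPartition("orders", 0);
            admin.electLeaders(ElectionType.PREFERRED, Set.of(partition))
                 .partitions()
                 .get();
        }
    }
}
```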
1/12/2023 · 51 minutes, 6 seconds

Real-Time Machine Learning and Smarter AI with Data Streaming

Are bad customer experiences really just data integration problems? Can real-time data streaming and machine learning be democratized in order to deliver a better customer experience? Airy, an open-source data-streaming platform, uses Apache Kafka® to help business teams deliver better results to their customers. In this episode, Airy CEO and co-founder Steffen Hoellinger explains how his company is expanding the reach of stream-processing tools and ideas beyond the world of programmers.

Airy originally built Conversational AI (chatbot) software and other customer support products for companies to engage with their customers in conversational interfaces. Asynchronous messaging created a large amount of traffic, so the company adopted Kafka to ingest and process all messages & events in real time.

In 2020, the co-founders decided to open source the technology, positioning Airy as an open source app framework for conversational teams at large enterprises to ingest and process conversational and customer data in real time. The decision was rooted in their belief that all bad customer experiences are really data integration problems, especially at large enterprises where data often is siloed and not accessible to machine learning models and human agents in real time.

(Who hasn’t had the experience of entering customer data into an automated system, only to have the same data requested eventually by a human agent?)

Airy is making data streaming universally accessible by supplying its clients with real-time data and offering integrations with standard business software. For engineering teams, Airy can reduce development time and increase the robustness of solutions they build.

Data is now the cornerstone of most successful businesses, and real-time use cases are becoming more and more important. Open-source app frameworks like Airy are poised to drive massive adoption of event streaming over the years to come, across companies of all sizes, and maybe, eventually, down to consumers.

EPISODE LINKS
Learn how to deploy Airy Open Source - or sign up for an Airy Cloud test instance
Google Case Study about Airy & TEDi, a 2,000 store retailer
Become an Expert in Conversational Engineering
Supercharging conversational AI with human agent feedback loops
Integrating all Communication and Customer Data with Airy and Confluent
How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka
Real-Time Threat Detection Using Machine Learning and Apache Kafka
Watch the video
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
1/5/2023 · 38 minutes, 56 seconds

The Present and Future of Stream Processing

The past year saw new trends emerge in the world of data streaming technologies, as well as some unexpected and novel use cases for Apache Kafka®. New reflections on the future of stream processing and on when companies should adopt microservice architecture inspired several talks at this year’s industry conferences. In this episode, Kris is joined by his colleagues Danica Fine, Senior Developer Advocate, and Robin Moffatt, Principal Developer Advocate, for an end-of-year roundtable on this year’s developments and what they want to see in the year to come.

Robin and Danica kick things off with a discussion of the year’s memorable conferences. Talk submissions for Kafka Summit London and Current 2022 featured noticeably more varied topics than in previous years, with fewer talks focused on the basics of Kafka implementation. Many abstracts featured interesting and unusual use cases, in addition to detailed explanations of what went wrong and how others could avoid the same issues.

The conferences also made clear that a lot of companies are adopting or considering stream-processing solutions. Are we close to a future where streaming is a part of everything we do? Is there anything helping streaming become more mainstream? Will stream processing replace batch?

On the other hand, a lot of in-demand talks focused on the importance of understanding the best practices supporting data mesh and understanding the nuances of the system and configurations. Danica identifies this as her big hope for next year: no more Kafka developers pursuing quick fixes. “No more band aid fixes. I want as many people as possible to understand the nuances of the levers that they're pulling for Kafka, whatever project they're building.”

Kris and Robin agree that what will make them happy in 2023 is seeing broader, more diverse client libraries for Kafka. “Getting away from this idea that Kafka is largely a Java shop, which is nonsense, but there is that perception.”

Streaming Audio returns in January 2023.

EPISODE LINKS
Put Your Data To Work: Top 5 Data Technology Trends for 2023
Write What You Know: Turning Your Apache Kafka Knowledge into a Technical Talk
Common Apache Kafka Mistakes to Avoid
Practical Data Pipeline: Build a Plant Monitoring System with ksqlDB
If Streaming Is the Answer, Why Are We Still Doing Batch?
View sessions and slides from Current 2022
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
12/28/2022 · 31 minutes, 19 seconds

Top 6 Worst Apache Kafka JIRA Bugs

Entomophiliac Anna McDonald (Principal Customer Success Technical Architect, Confluent) has seen her fair share of Apache Kafka® bugs. For her annual holiday roundup of the most noteworthy Kafka bugs, Anna tells Kris Jenkins about some of the scariest, most surprising, and most enlightening corner cases that make you ask, “Ah, so that’s how it really works?”

She shares a lot of interesting details about how batching works, the replication protocol, how Kafka’s networking stack dances with Linux’s, and which is the most important Scala class to read, if you’re only going to read one.

In particular, Anna gives Kris details about a bug that he’s been thinking about lately: sticky partitioner (KAFKA-10888). When a Kafka producer sends several records to the same partition at around the same time, the partition can get overloaded. As a result, if too many records get processed at once, they can get stuck, causing an unbalanced workload. Anna goes on to explain that the fix required keeping track of the number of offsets/messages written to each partition, and then batching to force more balanced distributions.

She found another bug that occurs when the Kafka server triggers TCP Congestion Control in some conditions (KAFKA-9648). Anna explains that when the Kafka server restarts and then executes the preferred replica leader election, lots of replica leaders trigger cluster metadata updates. Then, all clients establish a server connection at the same time, and lots of TCP requests are left waiting in the TCP SYN queue.

The third bug she talks about (KAFKA-9211) may cause TCP delays after upgrading… oh, that’s a nasty one. She goes on to tell Kris about a rare bug (KAFKA-12686) in Partition.scala, where there’s a race condition between the handling of an AlterIsrResponse and a LeaderAndIsrRequest. This rare scenario involves the delay of an AlterIsrResponse when lots of ISR and leadership changes occur due to broker restarts.

Bugs five (KAFKA-12964) and six (KAFKA-14334) are no better, but you’ll have to plug in your headphones and listen in to explore the ghoulish adventures of Anna McDonald as she gives a nightmarish peek into her world of JIRA bugs. It’s just what you might need this holiday season!

EPISODE LINKS
KAFKA-10888: Sticky partition leads to uneven product msg, resulting in abnormal delays in some partitions
KAFKA-9648: Add configuration to adjust listen backlog size for Acceptor
KAFKA-9211: Kafka upgrade 2.3.0 may cause tcp delay ack(Congestion Control)
KAFKA-12686: Race condition in AlterIsr response handling
KAFKA-12964: Corrupt segment recovery can delete new producer state snapshots
KAFKA-14334: DelayedFetch purgatory not completed when appending as follower
Optimizing for Low Latency and High Throughput
Diagnose and Debug Apache Kafka Issues
Watch the video
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
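For context on the batching discussion above, these are the main producer knobs involved. A minimal, hypothetical Java sketch with illustrative values (the bootstrap address and numbers are placeholders, not recommendations):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // How large a batch may grow (bytes) and how long the producer waits for
        // more records before sending it; both influence how records for a single
        // partition pile up, which is at the heart of the sticky-partitioner issue.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16_384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... send records; with null keys, the partitioner decides how they
            // are spread across partitions, batch by batch ...
        }
    }
}
```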
12/21/2022 · 1 hour, 10 minutes, 58 seconds

Learn How Stream-Processing Works The Simplest Way Possible

Could you explain Apache Kafka® in ways that a small child could understand? When Mitch Seymour, author of Mastering Kafka Streams and ksqlDB, wanted a way to communicate the basics of Kafka and event-based stream processing, he decided to author a children’s book on the subject, but it turned into something with a far broader appeal.

Mitch conceived the idea while writing a traditional manuscript for engineers and technicians interested in building stream processing applications. He wished he could explain what he was writing about to his 2-year-old daughter, and contemplated the best way to introduce the concepts in a way anyone could grasp.

Four months later, he had completed the illustration book: Gently Down the Stream: A Gentle Introduction to Apache Kafka. It tells the story of a family of forest-dwelling Otters, who discover that they can use a giant river to communicate with each other. When more Otter families move into the forest, they must learn to adapt their system to handle the increase in activity.

This accessible metaphor for how streaming applications work is accompanied by Mitch’s warm, painterly illustrations.

For his second book, Seymour collaborated with the researcher and software developer Martin Kleppmann, author of Designing Data-Intensive Applications. Kleppmann admired the illustration book and proposed that the next book tackle a gentle introduction to cryptography. Specifically, it would introduce the concepts behind symmetric-key encryption, key exchange protocols, and the Diffie-Hellman algorithm, a method for exchanging secret information over a public channel.

Secret Colors tells the story of a pair of Bunnies preparing to attend a school dance, who eagerly exchange notes on potential dates. They realize they need a way of keeping their messages secret, so they develop a technique that allows them to communicate without any chance of other Bunnies intercepting their messages.

Mitch’s latest illustration book is A Walk to the Cloud: A Gentle Introduction to Fully Managed Environments.

In the episode, Seymour discusses his process of creating the books from concept to completion, the decision to create his own publishing company to distribute these books, and whether a fourth book is on the way. He also discusses the experience of illustrating the books side by side with his wife, shares his insights on how editing is similar to coding, and explains why a concise set of commands is equally desirable in SQL queries and children’s literature.

EPISODE LINKS
Minimizing Software Speciation with ksqlDB and Kafka Streams
Gently Down the Stream: A Gentle Introduction to Apache Kafka
Secret Colors
A Walk to the Cloud: A Gentle Introduction to Fully Managed Environments
Apache Kafka On the Go: Kafka Concepts for Beginners
Apache Kafka 101 course
Watch the video
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
12/20/2022 · 31 minutes, 29 seconds

Building and Designing Events and Event Streams with Apache Kafka

What are the key factors to consider when developing event-driven architecture? When properly designed, events can connect existing systems with a common language and allow data exchange in near real time. They also help reduce complexity by providing a single source of truth that eliminates the need to synchronize data between different services or applications. They enable dynamic behavior, allowing each service or application to respond quickly to changes in its environment. Using events, developers can create systems that are more reliable, responsive, and easier to maintain.

In this podcast, Adam Bellemare, Staff Technologist at Confluent, discusses the four dimensions of events and designing event streams along with best practices, and gives an overview of a new course he just authored. This course, called Introduction to Designing Events and Event Streams, walks you through the process of properly designing events and event streams in any event-driven architecture.

Adam explains that the goal of the course is to provide you with a foundation for designing events and event streams. Along with hands-on exercises and best practices, the course explores the four dimensions of events and event stream design and applies them to real-world problems. Most importantly, he talks to Kris about the key factors to consider when deciding what events to write, what events to publish, and how to structure and design them to trigger actions like broadcasting messages to other services or storing results in a database.

How you design and implement events and event streams significantly affects not only what you can do today, but how you scale in the future. Head over to Introduction to Designing Events and Event Streams to learn everything you need to know about building an event-driven architecture.

EPISODE LINKS
Introduction to Designing Events and Event Streams
Practical Data Mesh: Building Decentralized Data Architecture with Event Streams
The Data Dichotomy: Rethinking the Way We Treat Data and Services
Coding in Motion: Sound & Vision—Build a Data Streaming App with JavaScript and Confluent Cloud
Using Event-Driven Design with Apache Kafka Streaming Applications ft. Bobby Calderwood
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
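One recurring design question in this area is how much state an event should carry. As a purely hypothetical Java sketch (not material from the course), compare a self-contained "fact"-style event with a narrower "delta"-style event; the type and field names are invented for the example.

```java
import java.time.Instant;

/** A hypothetical "fact" event: it carries the full state of the order at a point in time,
 *  so any consumer can react to it without querying the order service. */
public record OrderPlaced(
        String orderId,
        String customerId,
        long totalCents,
        Instant placedAt) {
}

/** By contrast, a "delta" event only describes what changed; consumers must already
 *  know (or be able to rebuild) the rest of the order's state. */
record OrderItemAdded(String orderId, String sku, int quantity) {
}
```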
12/15/2022 · 53 minutes, 6 seconds

Rethinking Apache Kafka Security and Account Management

Is there a better way to manage access to resources without compromising security? New employees need access to a variety of resources within a company's tech stack. But manually granting access can be error-prone. And when employees leave, their access must be revoked, thus potentially introducing security risks if an admin misses one. In this podcast, Kris Jenkins talks to Anuj Sawani (Security Product Manager, Confluent) about the centralized identity management system he helped build to integrate with Apache Kafka® to prevent common identity management headaches and security risks.

With 12+ years of experience building cybersecurity products for enterprise companies, Anuj Sawani explains how he helped build out KIP-768 (Secured OAuth support in Kafka), which supports a unified identity mechanism that spans across cloud and on-premises (hybrid scenarios).

Confluent Cloud customers wanted a single identity to access all their services. The manual process required managing different sets of identity stores across the ecosystem. Anuj goes on to explain how Identity and Access Management (IAM) using cloud-native authentication protocols, such as OAuth or OpenID Connect, solves this problem by centralizing identity and minimizing security risks.

Anuj emphasizes that sticking with industry standards is key because it makes integrating with other systems easy. With OAuth now supported in Kafka, this means performing client upgrades, configuring identity providers, and so on, to ensure applications can leverage the new capabilities. One example is using centralized identities for client/broker connections.

As Anuj continues to build and enhance features, he hopes to recommend this unified solution to other technology vendors because it makes integration much easier. The goal is to create a web of connectors that support the same standards. The future is bright, as other organizations are researching support for OAuth and similar industry standards. Anuj is looking forward to the evolution and to applying it to other use cases and scenarios.

EPISODE LINKS
Introduction to Confluent Cloud Security
KIP-768: Secured OAuth support in Apache Kafka
Confluent Cloud Documentation: OAuth 2.0 Support
Apache Kafka Security Best Practices
Security for Real-Time Data Stream Processing with Confluent Cloud
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
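For a sense of what KIP-768 looks like from the client side, here is a minimal, hypothetical Java sketch of the relevant properties. The broker address, token endpoint URL, and client credentials are placeholders, and the exact package of the login callback handler has moved between Kafka releases, so check the documentation for your version.

```java
import java.util.Properties;

public class OAuthClientConfig {
    public static Properties oauthProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "OAUTHBEARER");

        // KIP-768: the client fetches a token from the identity provider's endpoint.
        props.put("sasl.oauthbearer.token.endpoint.url",
                  "https://idp.example.com/oauth2/token");
        props.put("sasl.login.callback.handler.class",
                  "org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler");

        // Client credentials registered with the identity provider (placeholders).
        props.put("sasl.jaas.config",
                  "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required "
                  + "clientId=\"my-client-id\" clientSecret=\"my-client-secret\";");
        return props;
    }
}
```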
12/8/2022 · 41 minutes, 23 seconds

Real-time Threat Detection Using Machine Learning and Apache Kafka

Can we use machine learning to detect security threats in real-time? As organizations increasingly rely on distributed systems, it is becoming more important to analyze the traffic that passes through those systems quickly. Confluent Hackathon ’22 finalist, Géraud Dugé de Bernonville (Data Consultant, Zenika Bordeaux), shares how his team used TensorFlow (machine learning) and Neo4j (graph database) to analyze and detect network traffic data in real-time. What started as a research and development exercise turned into ZIEM, a full-blown internal project using ksqlDB to manipulate, export, and visualize data from Apache Kafka®.

Géraud and his team noticed that large amounts of data passed through their network, and they were curious to see if they could detect threats as they happened. As a hackathon project, they built ZIEM, a network mapping and intrusion detection platform that quickly generates network diagrams. Using Kafka, the system captures network packets, processes the data in ksqlDB, and uses a Neo4j Sink Connector to send it to a Neo4j instance. Using the Neo4j browser, users can see instant network diagrams showing who's on the network, allowing them to detect anomalies quickly in real time.

The Ziem project was initially conceived as an experiment to explore the potential of using Kafka for data processing and manipulation. However, it soon became apparent that there was great potential for broader applications (banking, security, etc.). As a result, the focus shifted to developing a tool for exporting data from Kafka, which is helpful in transforming data for deeper analysis, moving it from one database to another, or creating powerful visualizations.

Géraud goes on to talk about how the success of this project has helped them better understand the potential of using Kafka for data processing. Zenika plans to continue working to build a pipeline that can handle more robust visualizations, expose more learning opportunities, and detect patterns.

EPISODE LINKS
Ziem Project on GitHub
ksqlDB 101 course
ksqlDB Fundamentals: How Apache Kafka, SQL, and ksqlDB Work together ft. Simon Aubury
Real-Time Stream Processing, Monitoring, and Analytics with Apache Kafka
Application Data Streaming with Apache Kafka and Swim
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
11/29/2022 · 29 minutes, 18 seconds

Improving Apache Kafka Scalability and Elasticity with Tiered Storage

What happens when you need to store more than a few petabytes of data? Rittika Adhikari (Software Engineer, Confluent) discusses how her team implemented tiered storage, a method for improving the scalability and elasticity of data storage in Apache Kafka®. She also explores the motivating factors for building it in the first place: cost, performance, and manageability.

Before tiered storage, there was no real way to retain Kafka data indefinitely. Because of the tight coupling between compute and storage, users were forced to use different tools to access cold and hot data. Additionally, the cost of re-replication was prohibitive because Kafka had to process large amounts of data rather than a small hot set.

As a member of the Kafka Storage Foundations team, Rittika explains to Kris Jenkins how her team initially considered a Kafka data lake but settled on a more cost-effective method: tiered storage. With tiered storage, one tier handles elasticity and throughput for long-term storage, while the other tier is dedicated to high-cost, low-latency, short-term storage. Before, re-replication impacted all brokers, slowing down performance because it required more replication cycles. By decoupling compute and storage, they now only replicate the hot set rather than weeks of data.

Ultimately, this tiered storage method broke down the barrier between compute and storage by separating data into multiple tiers across the cloud. This allowed for better scalability and elasticity and reduced operational toil.

In preparation for a broader rollout to customers who heavily rely on compacted topics, Rittika’s team will be implementing tier compaction to support tiering of compacted topics. The goal is to have the partition leader perform compaction. This will substantially reduce compaction costs (CPU/disk) because the number of replicas compacting is significantly smaller. It also protects broker resource consumption through a new compaction algorithm and throttling.

EPISODE LINKS
Jun Rao explains: What is Tiered Storage?
Enabling Tiered Storage
Infinite Storage in Confluent Platform
Kafka Storage and Processing Fundamentals
KIP-405: Kafka Tiered Storage
Optimizing Apache Kafka’s Internals with Its Co-Creator Jun Rao
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
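The episode describes Confluent's internal implementation, but as a rough user-facing illustration of the idea, open-source Kafka's KIP-405 exposes topic-level configs for tiering. The sketch below creates a topic with a small local "hot set" and much longer total retention; the config names follow KIP-405 and are assumptions here, since they only apply to clusters with tiered/remote storage enabled and Confluent's managed implementation is configured differently.

```python
# Rough illustration only: a topic that keeps a small hot set on broker disks
# while retaining data much longer overall, assuming tiered/remote storage is
# enabled on the cluster (config names per KIP-405; treat as a sketch).
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

topic = NewTopic(
    "clickstream",
    num_partitions=6,
    replication_factor=3,
    config={
        "remote.storage.enable": "true",                    # tier closed segments to object storage
        "local.retention.ms": str(6 * 60 * 60 * 1000),      # ~6 hours of hot data on brokers
        "retention.ms": str(90 * 24 * 60 * 60 * 1000),      # ~90 days of total retention
    },
)

for future in admin.create_topics([topic]).values():
    future.result()  # raises if topic creation failed
```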
11/22/2022 · 29 minutes, 32 seconds

Decoupling with Event-Driven Architecture

In principle, data mesh architecture should liberate teams to build their systems and gather data in a distributed way, without having to explicitly coordinate. Data is the thing that can and should decouple teams, but proper implementation has its challenges.

In this episode, Kris talks to Florian Albrecht (Solution Architect, Hermes Germany) about Galapagos, an open-source DevOps software tool for Apache Kafka® that Albrecht created with his team at Hermes, a German parcel delivery company. After Hermes chose Kafka to implement company-wide event-driven architecture, Albrecht’s team created rules and guidelines on how to use and really make the most out of Kafka. But the hands-off approach wasn’t leading to greater independence, so Albrecht’s team tried something different from plain documentation: they encoded the rules as software.

This method pushed the teams to stop thinking in terms of data and to start thinking in terms of events. Previously, applications copied data from one point to another, with slight changes each time. In the end, teams with conflicting data were left asking when the data changed and why, with a real impact on customers who might be left wondering when their parcel was redirected and how. Every application would then have to be checked to find out when exactly the data was changed. Event architecture terminates this cycle. Events are immutable and changes are registered as new domain-specific events. Packaged together as event envelopes, they can be safely copied to other applications, and can provide significant insights. There is no need to check each application to find out when manually entered or imported data was changed—the complete history exists in the event envelope. More importantly, there are no more time-consuming collaborations where teams help each other interpret the data.

Using Galapagos helped the teams at Hermes switch their thought process from raw data to event-driven thinking. Galapagos also empowers business teams to take charge of their own data needs by providing a protective buffer. When specific teams, the providers of data or events, want to change something, Galapagos enforces a method which will not kill the production applications already reading the data. Teams can add new fields which existing applications can ignore, but a previously required field that an application could be relying on won’t be changeable. Business partners using Galapagos found they were better prepared to give answers to their developer colleagues, allowing different parts of the business to communicate in ways they hadn’t before. Through Galapagos, Hermes saw better success decoupling teams.

EPISODE LINKS
A Guide to Data Mesh
Practical Data Mesh ebook
Galapagos GitHub
Florian Albrecht GitHub
Watch the video
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
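A minimal sketch (not Galapagos itself) of the schema-evolution guardrail described above: with BACKWARD compatibility enforced in Schema Registry, adding an optional field with a default is allowed, while dropping a required field that consumers rely on would be rejected at registration time. The subject and field names are made up for illustration.

```python
# Sketch of the guardrail: Schema Registry compatibility rules allow additive,
# ignorable changes but reject breaking ones. Subject and fields are hypothetical.
from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

sr = SchemaRegistryClient({"url": "http://localhost:8081"})
subject = "parcel-events-value"

# Require that new schema versions remain readable by existing consumers
sr.set_compatibility(subject_name=subject, level="BACKWARD")

v1 = Schema('{"type":"record","name":"ParcelEvent","fields":['
            '{"name":"parcel_id","type":"string"},'
            '{"name":"status","type":"string"}]}', schema_type="AVRO")
sr.register_schema(subject, v1)

# Evolving the schema: a new optional field with a default is backward compatible.
v2 = Schema('{"type":"record","name":"ParcelEvent","fields":['
            '{"name":"parcel_id","type":"string"},'
            '{"name":"status","type":"string"},'
            '{"name":"redirect_reason","type":["null","string"],"default":null}]}',
            schema_type="AVRO")
sr.register_schema(subject, v2)  # removing "status" instead would fail registration
```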
11/15/2022 · 38 minutes, 38 seconds

If Streaming Is the Answer, Why Are We Still Doing Batch?

Is real-time data streaming the future, or will batch processing always be with us? Interest in streaming data architecture is booming, but just as many teams are still happily batching away. Batch processing is still simpler to implement than stream processing, and successfully moving from batch to streaming requires a significant change to a team’s habits and processes, as well as a meaningful upfront investment. Some are even running dbt in micro batches to simulate an effect similar to streaming, without having to make the full transition. Will streaming ever fully take over?

In this episode, Kris talks to a panel of industry experts with decades of experience building and implementing data systems. They discuss the state of streaming adoption today, if streaming will ever fully replace batch, and whether it even could (or should). Is micro batching the natural stepping stone between batch and streaming? Will there ever be a unified understanding of how data should be processed over time? Is the lack of agreement on best practices for data streaming an insurmountable obstacle to widespread adoption? What exactly is holding teams back from fully adopting a streaming model?

Recorded live at Current 2022: The Next Generation of Kafka Summit, the panel includes Adi Polak (Vice President of Developer Experience, Treeverse), Amy Chen (Partner Engineering Manager, dbt Labs), Eric Sammer (CEO, Decodable), and Tyler Akidau (Principal Software Engineer, Snowflake).

EPISODE LINKS
dbt Labs
Decodable
lakeFS
Snowflake
View sessions and slides from Current 2022
Stream Processing vs. Batch Processing: What to Know
From Batch to Real-Time: Tips for Streaming Data Pipelines with Apache Kafka ft. Danica Fine
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
11/9/2022 · 43 minutes, 58 seconds

Security for Real-Time Data Stream Processing with Confluent Cloud

Streaming real-time data at scale and processing it efficiently is critical to cybersecurity organizations like SecurityScorecard. Jared Smith, Senior Director of Threat Intelligence, and Brandon Brown, Senior Staff Software Engineer, Data Platform, at SecurityScorecard discuss their journey from using RabbitMQ to open-source Apache Kafka® for stream processing, as well as why turning to fully managed Kafka on Confluent Cloud was the right choice for building real-time data pipelines at scale.

SecurityScorecard mines data from dozens of digital sources to discover security risks and flaws with the potential to expose their clients’ data. This includes scanning and ingesting data from a large number of ports to identify suspicious IP addresses, exposed servers, out-of-date endpoints, malware-infected devices, and other potential cyber threats for more than 12 million companies worldwide.

To allow real-time stream processing for the organization, the team moved away from RabbitMQ to open-source Kafka for processing a massive amount of data in a matter of milliseconds, instead of weeks or months. This makes the detection of a website’s security posture risk happen quickly for constantly evolving security threats. Previously, the team relied on batch pipelines to push data to and from Amazon S3, as well as expensive REST API based communication carrying data between systems. They also spent significant time and resources on open-source Kafka upgrades on Amazon MSK.

Self-maintaining the Kafka infrastructure increased operational overhead with escalating costs. In order to scale faster, govern data better, and ultimately lower the total cost of ownership (TCO), Brandon, lead of the organization’s Pipeline team, pivoted toward a fully managed, cloud-native approach for more scalable streaming data pipelines, and for the development of a new Automatic Vendor Detection (AVD) product.

Jared and Brandon continue to leverage the cloud for use cases including using PostgreSQL and pushing data to downstream systems using CSC connectors, increasing data governance and security for streaming scalability, and more.

EPISODE LINKS
SecurityScorecard Case Study
Building Data Pipelines with Apache Kafka and Confluent
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
11/3/2022 · 48 minutes, 33 seconds

Running Apache Kafka in Production

What are some recommendations to consider when running Apache Kafka® in production? Jun Rao, one of the original Kafka creators, as well as an ongoing committer and PMC member, shares the essential wisdom he’s gained from developing Kafka and dealing with a large number of Kafka use cases.

Here are his six recommendations for maximizing Kafka in production:

1. Nail Down the Operational Part
When setting up your cluster, in addition to dealing with the usual architectural issues, make sure to also invest time into alerting, monitoring, logging, and other operational concerns. Managing a distributed system can be tricky and you have to make sure that all of its parts are healthy together. This will give you a chance at catching cluster problems early, rather than after they have become full-blown crises.

2. Reason Properly About Serialization and Schemas Up Front
At the Kafka API level, events are just bytes, which gives your application the flexibility to use various serialization mechanisms. Avro has the benefit of decoupling schemas from data serialization, whereas Protobuf is often preferable to those practiced with remote procedure calls; JSON Schema is user friendly but verbose. When you are choosing your serialization, it’s a good time to reason about schemas, which should be well-thought-out contracts between your publishers and subscribers. You should know who owns a schema as well as the path for evolving that schema over time.

3. Use Kafka As a Central Nervous System Rather Than As a Single Cluster
Teams typically start out with a single, independent Kafka cluster, but they could benefit, even from the outset, by thinking of Kafka more as a central nervous system that they can use to connect disparate data sources. This enables data to be shared among more applications.

4. Utilize Dead Letter Queues (DLQs)
DLQs can keep service delays from blocking the processing of your messages. For example, instead of using a unique topic for each customer to which you need to send data (potentially millions of topics), you may prefer to use a shared topic, or a series of shared topics, that contain all of your customers. But if you are sending to multiple customers from a shared topic and one customer’s REST API is down—instead of delaying the process entirely—you can have that customer’s events divert into a dead letter queue. You can then process them later from that queue. (A minimal code sketch of this pattern follows below.)

5. Understand Compacted Topics
By default in Kafka topics, data is kept by time. But there is also another type of topic, a compacted topic, which stores data by key and replaces old data with new data as it comes in. This is particularly useful for working with data that is updateable, for example, data that may be coming in through a change-data-capture log. A practical example of this would be a retailer that needs to update prices and product descriptions to send out to all of its locations.

6. Imagine New Use Cases Enabled by Kafka's Recent Evolution
The biggest recent change in Kafka's history is its migration to the cloud. By using Kafka there, you can reserve your engineering talent for business logic. The unlimited storage enabled by the cloud also means that you can truly keep data forever at reasonable cost, and thus you don't have to build a separate system for your historical data needs.

EPISODE LINKS
Kafka Internals 101
Watch in video
Kris Jenkins' Twitter
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
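Here is the minimal sketch of the dead letter queue pattern from recommendation 4: consume from a shared topic, attempt delivery to a customer's endpoint, and divert failures to a DLQ topic for later reprocessing. The topic names and the delivery function are illustrative placeholders, not details from the episode.

```python
# Sketch of the DLQ pattern: failed deliveries are parked in a separate topic
# so the shared topic keeps flowing. Topics and the delivery call are hypothetical.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "customer-dispatcher",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["customer-events"])

def deliver_to_customer(event: dict) -> None:
    """Placeholder for the per-customer REST call; raises on failure."""
    raise ConnectionError("customer endpoint unavailable")

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    try:
        deliver_to_customer(event)
    except Exception:
        # Don't block the shared topic: park the record in the DLQ instead.
        producer.produce("customer-events.dlq", key=msg.key(), value=msg.value())
        producer.poll(0)
    consumer.commit(msg)
```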
10/27/2022 · 58 minutes, 44 seconds

Build a Real Time AI Data Platform with Apache Kafka

Is it possible to build a real-time data platform without using stateful stream processing? Forecasty.ai is an artificial intelligence platform for forecasting commodity prices, imparting insights into the future valuations of raw materials for users. Nearly all AI models are batch-trained once, but precious commodities are linked to ever-fluctuating global financial markets, which require real-time insights. In this episode, Ralph Debusmann (CTO, Forecasty.ai) shares their journey of migrating from a batch machine learning platform to a real-time event streaming system with Apache Kafka® and delves into their approach to making the transition frictionless.

Ralph explains that Forecasty.ai was initially built on top of batch processing; however, updating the models with batch-data syncs was costly and environmentally taxing. There was also the question of scalability—progressing from 60 commodities on offer to their eventual plan of over 200 commodities. Ralph observed that most real-time systems are non-batch, streaming-based real-time data platforms with stateful stream processing, using Kafka Streams, Apache Flink®, or even Apache Samza. However, stateful stream processing requires resources, such as a team of stream processing specialists, to solve the task.

With the existing team, Ralph decided to build a real-time data platform without using any sort of stateful stream processing. They stick strictly to out-of-the-box components, such as Kafka topics, the Kafka Producer API, the Kafka Consumer API, and other Kafka connectors, along with a real-time database to process data streams and implement the necessary joins inside the database.

Additionally, Ralph shares the tool he built to handle historical data, kash.py (a Kafka shell based on Python), discusses the issues the platform needed to overcome for success, and explains how they can make the migration from batch processing to stream processing painless for the data science team.

EPISODE LINKS
Kafka Streams 101 course
The Difference Engine for Unlocking the Kafka Black Box
GitHub repo: kash.py
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
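As a rough sketch of the "no stateful stream processing" approach described above, a plain Kafka consumer can write each price event into a database and leave the joins to SQL. SQLite stands in here for whatever real-time database the team actually uses, and the topic, table, and field names are assumptions.

```python
# Sketch: a plain consumer upserts latest prices into a database so that joins
# and aggregations happen in SQL, not in a stream processor. Names are hypothetical.
import json
import sqlite3
from confluent_kafka import Consumer

db = sqlite3.connect("prices.db")
db.execute("CREATE TABLE IF NOT EXISTS prices (commodity TEXT PRIMARY KEY, price REAL, ts TEXT)")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "price-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["commodity-prices"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Upsert the latest price; forecasting jobs join against this table in SQL.
    db.execute(
        "INSERT INTO prices (commodity, price, ts) VALUES (?, ?, ?) "
        "ON CONFLICT(commodity) DO UPDATE SET price = excluded.price, ts = excluded.ts",
        (event["commodity"], event["price"], event["timestamp"]),
    )
    db.commit()
```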
10/20/2022 · 37 minutes, 18 seconds

Optimizing Apache JVMs for Apache Kafka

Java Virtual Machines (JVMs) impact Apache Kafka® performance in production. How can you optimize your event-streaming architectures so they process more Kafka messages using the same number of JVMs? Gil Tene (CTO and Co-Founder, Azul) delves into JVM internals and how developers and architects can use Java and optimized JVMs to make real-time data pipelines more performant and more cost effective, with use cases.

Gil has deep roots in Java optimization, having started out building large data centers for parallel processing, where the goal was to get a finite set of hardware to run the largest possible number of JVMs. As the industry evolved, Gil switched his primary focus to software, and throughout the years has gained particular expertise in garbage collection (the C4 collector) and JIT compilation. The OpenJDK distribution Gil's company Azul releases, Zulu, is widely used throughout the Java world, although Azul's Prime build version can run Kafka up to forty percent faster than the open version—on identical hardware.

Gil relates that improvements in JVMs aren't yielded with a single stroke or in one day, but are rather the result of many smaller incremental optimizations over time, i.e., "half-percent" improvements that accumulate. Improving a JVM starts with a good engineering team, one that has thought significantly about how to make JVMs better. The team must continuously monitor metrics, and Gil mentions that his team tests optimizations against 400-500 different workloads (one of his favorite things to get into the lab is a new customer's workload). The quality of a JVM can be measured by response times, the consistency of these response times including outliers, as well as the level and number of machines that are needed to run it. A balance between performance and cost efficiency is usually a sweet spot for customers.

Throughout the podcast, Gil goes into depth on optimization in theory and practice, as well as Azul's use of JIT compilers, as they play a key role in improving JVMs. There are always tradeoffs when using them: you want a JIT compiler to strike a balance between the work expended optimizing and the benefits that come from that work. Gil also mentions a new innovation Azul has been working on that moves JIT compilation to the cloud, where it can be applied to numerous JVMs simultaneously.

EPISODE LINKS
A Guide on Increasing Kafka Event Streaming Performance
Better Kafka Performance Without Changing Any Code
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
10/13/2022 · 1 hour, 11 minutes, 42 seconds

Apache Kafka 3.3 - KRaft, Kafka Core, Streams, & Connect Updates

Apache Kafka® 3.3 is released! With over two years of development, KIP-833 marks KRaft as production-ready for new AK 3.3 clusters only. On behalf of the Kafka community, Danica Fine (Senior Developer Advocate, Confluent) shares highlights of this release, with KIPs from Kafka Core, Kafka Streams, and Kafka Connect.

To reduce request overhead and simplify client-side code, KIP-709 extends the OffsetFetch API requests to accept multiple consumer group IDs. This update has three changes, including extending the wire protocol, response handling changes, and enhancing the AdminClient to use the new protocol.

Log recovery is an important process that is triggered whenever a broker starts up after an unclean shutdown, and since there is no way to know the log recovery progress other than checking if the broker log is busy, KIP-831 adds metrics for the log recovery progress with `RemainingLogsToRecover` and `RemainingSegmentsToRecover` for each recovery thread. These metrics allow the admin to monitor the progress of the log recovery.

Additional Kafka Core updates include KIP-841: Fenced replicas should not be allowed to join the ISR in KRaft; KIP-835: Monitor KRaft Controller Quorum Health; and KIP-859: Add metadata log processing error-related metrics.

KIP-834 for Kafka Streams added the ability to pause and resume topologies. This feature lets you reduce resource usage when processing is not required, when modifying the logic of Kafka Streams applications, or when responding to operational issues. KIP-820 extends the KStream process with a new Processor API.

Previously, KIP-98 added support for exactly-once delivery guarantees with Kafka and its Java clients. In the AK 3.3 release, KIP-618 brings exactly-once semantics support to Kafka Connect source connectors. To accomplish this, a number of new connector- and worker-based configurations have been introduced, including `exactly.once.source.support`, `transaction.boundary`, and more.

Image attribution: Apache ZooKeeper™: https://zookeeper.apache.org/ and Raft logo: https://raft.github.io/

EPISODE LINKS
See release notes for Apache Kafka 3.3.0 and Apache Kafka 3.3.1 for the full list of changes
Read the blog to learn more
Download Apache Kafka 3.3 and get started
Watch the video version of this podcast
10/3/2022 · 6 minutes, 42 seconds

Application Data Streaming with Apache Kafka and Swim

How do you set data applications in motion by running stateful business logic on streaming data? Capturing key stream processing events and cumulative statistics that necessitate real-time data assessment, migration, and visualization remains a gap in event-driven systems and stream processing frameworks, according to Fred Patton (Developer Evangelist, Swim Inc.). In this episode, Fred explains streaming applications and how they contrast with stream processing applications. Fred and Kris also discuss how you can use Apache Kafka® and Swim for a real-time UI for streaming data.

Swim's technology facilitates relationships between streaming data from distributed sources and complex UIs, managing backpressure cumulatively, so that front ends don't get overwhelmed. They are focused on real-time, actionable insights, as opposed to those derived from historical data. Fred compares Swim's functionality to the speed layer in the Lambda architecture model, which is specifically concerned with serving real-time views. For this reason, when sending your data to Swim, it is common to also send a copy to a data warehouse that you control.

A web agent, a data entity in the Swim ecosystem, can be as small as a single cellphone or as large as a whole cellular network. Web agents communicate with one another as well as with their subscribers, and each one is a URI that can be called by a browser or the command line. Swim has been designed to instantaneously accommodate requests at widely varying levels of granularity, each of which demands a completely different volume of data. Thus, as you drill down, for example, from a city view on a map into a neighborhood view, the Swim system figures out which web agent is responsible for the view you are requesting, as well as the other web agents needed to show it.

Fred also shares an example where they work with a telephony company that requires real-time statuses for a network infrastructure with thousands of cell towers servicing millions of devices, along with a use case for a transportation company needing to transform raw edge data into actionable insights for its connected vehicle customers.

Future plans for Swim include porting more functionality to the cloud, which will enable additional automation, so that, for example, a customer just has to provide database and Kafka cluster connections, and Swim can automatically build out infrastructure.

EPISODE LINKS
Swim Cellular Network Simulator
Continuous Intelligence - Streaming Apps That Are Always in Sync
Using Swim with Apache Kafka
Swim Developer
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
10/3/2022 · 39 minutes, 10 seconds

International Podcast Day - Apache Kafka Edition | Streaming Audio Special

What’s your favorite podcast? Would you like to find some new ones? In celebration of International Podcast Day, Kris Jenkins invites 12 experts from the Apache Kafka® community to talk about their favorite podcasts. Unlike other episodes where guests educate developers and tell stories about Kafka, its surrounding technological ecosystem, or the Cloud, this special episode provides a glimpse into what these guests have learned through listening to podcasts that you might also find interesting.

Through a virtual international tour, Kris chatted with Bill Bejeck (Integration Architect, Confluent), Nikoleta Verbeck (Senior Solutions Engineer, CSID, Confluent), Ben Stopford (Lead Technologist, OCTO, Confluent), Noelle Gallagher (Video Producer, Editor), Danica Fine (Senior Developer Advocate, Confluent), Tim Berglund (VP, Developer Relations, StarTree), Ben Ford (Founder and CEO, Commando Development), Jeff Bean (Group Manager, Technical Marketing, Confluent), Domenico Fioravanti (Director of Engineering, Therapie Clinic), Francesco Tisiot (Senior Developer Advocate, Aiven), Robin Moffatt (Principal Developer Advocate, Confluent), and Simon Aubury (Principal Data Engineer, ThoughtWorks).

They share recommendations covering a wide range of topics such as building distributed systems, travel, data engineering, Greek mythology, data mesh, economics, and music and the arts.

EPISODE LINKS
Common Apache Kafka Mistakes to Avoid
Flink vs Kafka Streams/ksqlDB
Why Data Mesh ft. Ben Stopford
Practical Data Pipeline ft. Danica Fine
What Could Go Wrong with a Kafka JDBC Connector?
Intro to Kafka Connect: Core Components and Architecture ft. Robin Moffatt
Serverless Stream Processing with Apache Kafka ft. Bill Bejeck
Scaling an Apache Kafka-Based Architecture at Therapie Clinic
Event-Driven Systems and Agile Operations
Real-Time Stream Processing, Monitoring, and Analytics with Apache Kafka
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
9/30/2022 · 1 hour, 2 minutes, 22 seconds

How to Build a Reactive Event Streaming App - Coding in Motion

How do you build an event-driven application that can react to real-time data streams as they happen? Kris Jenkins (Senior Developer Advocate, Confluent) will be hosting another fun, hands-on programming workshop—Coding in Motion: Watching the River Flow, to demonstrate how you can build a reactive event streaming application with Apache Kafka® and ksqlDB, using Python.

As a developer advocate, Kris often speaks at conferences, and the presentation will be available on-demand through the organizer’s YouTube channel. The desire to read comments and be able to interact with the community motivated Kris to set up a real-time event streaming application that would notify him on his mobile phone.

During the workshop, Kris will demonstrate the end-to-end process of using Python to process and stream data from YouTube’s REST API into a Kafka topic, analyze the data with ksqlDB, and then stream the results out via Telegram. After the workshop, you’ll be able to use the recipe to build your own event-driven data application.

EPISODE LINKS
Coding in Motion: Building a Reactive Data Streaming App
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
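A rough sketch of the first leg of the pipeline described above: poll a REST API (a placeholder URL stands in for the YouTube comments endpoint) and produce each item into a Kafka topic for ksqlDB to analyze. The URL, query parameters, topic, and field names are placeholders, not the workshop's actual code.

```python
# Sketch: poll a REST API on a loop and stream each returned item into Kafka.
# Endpoint, params, and fields are hypothetical placeholders.
import json
import time
import requests
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
API_URL = "https://api.example.com/videos/comments"   # placeholder endpoint

while True:
    response = requests.get(API_URL, params={"videoId": "abc123"}, timeout=10)
    response.raise_for_status()
    for item in response.json().get("items", []):
        producer.produce(
            "youtube-comments",
            key=item.get("id", ""),
            value=json.dumps(item),
        )
    producer.flush()
    time.sleep(60)   # poll once a minute
```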
9/20/2022 · 1 minute, 26 seconds

Real-Time Stream Processing, Monitoring, and Analytics With Apache Kafka

Processing real-time event streams enables countless use cases big and small. With a day job designing and building highly available distributed data systems, Simon Aubury (Principal Data Engineer, Thoughtworks) believes stream-processing thinking can be applied to any stream of events.

In this episode, Simon shares his Confluent Hackathon ’22 winning project—a wildlife monitoring system to observe population trends over time using a Raspberry Pi, along with Apache Kafka®, Kafka Connect, ksqlDB, TensorFlow Lite, and Kibana. He used the system to count animals in his Australian backyard and perform trend analysis on the results. Simon also shares ideas on how you can use these same technologies to help with other real-world challenges.

Open-source object detection models for TensorFlow, which appropriately are collected into "model zoos," meant that Simon didn't have to provide his own object identification as part of the project, which would have made it untenable. Instead, he was able to utilize the open-source models, which are essentially neural nets pretrained on relevant data sets—in his case, backyard animals.

Simon's system, which consists of around 200 lines of code, employs a Kafka producer running a while loop, which connects to a camera feed using a Python library. For each frame brought down, object masking is applied in order to crop and reduce pixel density, and then the frame is compared to the models mentioned above. A Python dictionary containing probable found objects is sent to a Kafka broker for processing; the images themselves aren't sent. (Note that Simon's system is also capable of alerting if a specific, rare animal is detected.)

On the broker, Simon uses ksqlDB and windowing to smooth the data in case the frames were inconsistent for some reason (it may look back over thirty seconds, for example, and find the highest number of animals per type). Finally, the data is sent to a Kibana dashboard for analysis, through a Kafka Connect sink connector.

Simon’s is an extremely low-cost system that can simulate the behaviors of more expensive, proprietary systems. And the concepts can easily be applied to many other use cases. For example, you could use it to estimate traffic at a shopping mall to gauge optimal opening hours, or you could use it to monitor the queue at a coffee shop, counting both queued patrons as well as impatient patrons who decide to leave because the queue is too long.

EPISODE LINKS
Real-Time Wildlife Monitoring with Apache Kafka
Wildlife Monitoring Github
ksqlDB Fundamentals: How Apache Kafka, SQL, and ksqlDB Work Together
Event-Driven Architecture - Common Mistakes and Valuable Lessons
Motion in Motion: Building an End-to-End Motion Detection and Alerting System with Apache Kafka and ksqlDB
Watch the video version of this podcast
Kris Jenkins’ Twitter
Learn more on Confluent Developer
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
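A simplified sketch of the producer loop described above: grab frames from a camera, run an object-detection step, and publish only the resulting counts (never the images) to Kafka. The detect_animals() function here is a placeholder for the TensorFlow Lite model-zoo inference in Simon's project, and the topic name is an assumption.

```python
# Sketch of a camera-to-Kafka producer loop; detection is stubbed out and the
# topic name is hypothetical.
import json
import time
import cv2
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
camera = cv2.VideoCapture(0)

def detect_animals(frame) -> dict:
    """Placeholder: a TFLite model would return e.g. {"kookaburra": 1, "possum": 2}."""
    return {}

while True:
    ok, frame = camera.read()
    if not ok:
        time.sleep(1)
        continue
    detections = detect_animals(frame)
    if detections:
        # Send only the counts, not the image, for downstream ksqlDB windowing
        producer.produce("backyard-animals", value=json.dumps(
            {"ts": int(time.time()), "counts": detections}))
        producer.poll(0)
    time.sleep(5)   # a frame every few seconds is plenty for trend analysis
```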
9/15/2022 · 34 minutes, 7 seconds

Reddit Sentiment Analysis with Apache Kafka-Based Microservices

How do you analyze Reddit sentiment with Apache Kafka® and microservices? Bringing the fresh perspective of someone who is both new to Kafka and the industry, Shufan Liu, nascent Developer Advocate at Confluent, discusses projects he has worked on during his summer internship—a Cluster Linking extension to a conceptual data pipeline project, and a microservice-based Reddit sentiment-analysis project. Shufan demonstrates that it’s possible to quickly get up to speed with the tools in the Kafka ecosystem and to start building something productive early on in your journey.

Shufan's Cluster Linking project extends a demo by Danica Fine (Senior Developer Advocate, Confluent) that uses a Kafka-based data pipeline to address the challenge of automatic houseplant watering. He discusses his contribution to the project and shares details in his blog—Data Enrichment in Existing Data Pipelines Using Confluent Cloud.

The second project Shufan presents is a sentiment analysis system that gathers data from a given subreddit, then assigns the data a sentiment score. He points out that its results would be hard to duplicate manually by simply reading through a subreddit—you really need the assistance of AI. The project consists of four microservices (a sketch of the scoring service follows below):

- A user input service that collects requests in a Kafka topic, which consist of the desired subreddit, along with the dates between which data should be collected
- An API polling service that fetches the requests from the user input service, collects the relevant data from the Reddit API, then appends it to a new topic
- A sentiment analysis service that analyzes the appended topic from the API polling service using the Python library NLTK; it calculates averages with ksqlDB
- A results-displaying service that consumes from a topic with the calculations

Interesting subreddits that Shufan has analyzed for sentiment include gaming forums before and after key releases; crypto and stock trading forums at various meaningful points in time; and sports-related forums both before the season and several games into it.

EPISODE LINKS
Data Enrichment in Existing Data Pipelines Using Confluent Cloud
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
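Here is the promised sketch of the sentiment-analysis microservice, condensed: consume raw Reddit posts, score them with NLTK's VADER analyzer, and produce the scores to a downstream topic where ksqlDB could average them. The topic names and message fields are assumptions, not Shufan's actual implementation.

```python
# Sketch of a scoring microservice: raw posts in, compound sentiment scores out.
# Topics and fields are hypothetical.
import json
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from confluent_kafka import Consumer, Producer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "sentiment-scorer",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["reddit-posts"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    post = json.loads(msg.value())
    score = analyzer.polarity_scores(post["text"])["compound"]   # -1.0 .. 1.0
    producer.produce("reddit-sentiment", key=post["subreddit"],
                     value=json.dumps({"subreddit": post["subreddit"], "score": score}))
    producer.poll(0)
```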
9/8/2022 · 35 minutes, 23 seconds

Capacity Planning Your Apache Kafka Cluster

How do you plan Apache Kafka® capacity and Kafka Streams sizing for optimal performance? When Jason Bell (Principal Engineer, Dataworks, and founder of Synthetica Data) begins to plan a Kafka cluster, he starts with a deep inspection of the customer's data itself—determining its volume as well as its contents: Is it JSON, straight pieces of text, or images? He then determines if Kafka is a good fit for the project overall, a decision he bases on volume, the desired architecture, as well as potential cost.

Next, the cluster is conceived in terms of some rule-of-thumb numbers. For example, Jason's minimum number of brokers for a cluster is three or four. This means he has a leader, a follower, and at least one backup. A ZooKeeper quorum is also a set of three. For other elements, he works with pairs, an active and a standby—this applies to Kafka Connect and Schema Registry. Finally, there's Prometheus monitoring and Grafana alerting to add. Jason points out that these numbers are different for multi-data-center architectures.

Jason never assumes that everyone knows how Kafka works, because some software teams include specialists working on a producer or a consumer who don't work directly with Kafka itself. They may not know how to adequately measure their Kafka volume themselves, so he often begins the collaborative process of graphing message volumes. He considers, for example, how many messages there are daily, and whether there is a peak time. Each industry is different, with some focusing on daily batch data (banking) and others fielding incredible amounts of continuous data (IoT data streaming from cars).

Extensive testing is necessary to ensure that the data patterns are adequately accommodated. Jason sets up a short-lived system that is identical to the main system. He finds that teams usually have not adequately tested across domain boundaries or the network. Developers tend to think in terms of numbers of messages, but not in terms of overall network traffic, or in how many consumers they'll actually need, for example. Latency must also be considered; for example, if the compression on the producer's side doesn't match the compression on the consumer's side, latency will increase.

Kafka Connect sink connectors require special consideration when Jason is establishing a cluster. Failure strategies need to be well thought out, including retries and how to deal with the potentially large number of messages that can accumulate in a dead letter queue. He suggests that more attention should generally be paid to the Kafka Connect elements of a cluster, something that can actually be addressed with bash scripts.

Finally, Kris and Jason cover his preference for Kafka Streams over ksqlDB from a network perspective.

EPISODE LINKS
Capacity Planning and Sizing for Kafka Streams
Tales from the Frontline of Apache Kafka DevOps
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more on Confluent Developer
Use PODCAST100 to get $100 of free Cloud usage (details)
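In the spirit of the message-volume graphing described above, a back-of-the-envelope calculation can turn expected message rates into network and storage numbers before broker counts are chosen. The inputs and the peak multiplier below are illustrative, not rules from the episode.

```python
# Back-of-the-envelope sizing sketch: convert message volume into rough
# throughput and storage figures. All inputs are illustrative.
def estimate_cluster_load(msgs_per_sec: float, avg_msg_bytes: int,
                          replication_factor: int, retention_days: int,
                          peak_multiplier: float = 3.0) -> dict:
    ingress = msgs_per_sec * avg_msg_bytes                # producer bytes/sec
    replicated = ingress * replication_factor             # bytes written across brokers
    storage = replicated * 86_400 * retention_days        # bytes retained on disk
    return {
        "ingress_MB_per_sec": ingress / 1e6,
        "peak_ingress_MB_per_sec": ingress * peak_multiplier / 1e6,
        "replicated_write_MB_per_sec": replicated / 1e6,
        "retained_storage_TB": storage / 1e12,
    }

# Example: 50,000 msgs/sec of 1 KB each, replication factor 3, 7 days of retention
print(estimate_cluster_load(50_000, 1_000, 3, 7))
```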
8/30/2022 · 1 hour, 1 minute, 54 seconds

Streaming Real-Time Sporting Analytics for World Table Tennis

Reimagining a data architecture to provide real-time data flow for sporting events can be complicated, especially for organizations with as much data as World Table Tennis (WTT). Vatsan Rama (Director of IT, ITTF Group) shares why real-time data is essential in the sporting world and how his team reengineered their data system in 18 months, moving from a solely on-premises infrastructure to a cloud-native data system that uses Confluent Cloud with Apache Kafka® as its central nervous system.

World Table Tennis is a business created by the International Table Tennis Federation (ITTF) to manage the official professional table tennis series of events and its commercial rights. World Table Tennis is also leading the sport's digital transformation and commercializes its software application for real-time event scoring worldwide.

Previously, ITTF scoring was processed manually with a desktop-based, on-venue results system (OVR)—an on-premises solution to process match data that calculated rankings and records, then sent event information to other systems, such as scoreboards.

To provide match status in real time, which makes the sport more engaging for fans and adds a competitive edge for players, Vatsan reengineered their OVR system to allow instant data sync between on-premises competition systems and the cloud. The redesign started by establishing an event-driven architecture with Kafka that consolidates all legacy data sources, including records in Excel along with some handwritten forms (some dating back 90 years, even including records from the 1930 World Championship). To reduce operational overhead and maintenance, the team decided to stream data through fully managed Kafka as a service on Azure, for a scalable, distributed infrastructure.

Vatsan shares that multiple table tennis events can run in parallel globally, and every time an umpire marks scores at a table, the data moves from the venue into Confluent Cloud, and then the score and rankings are sent to betting organizations and individuals on their mobile apps.

EPISODE LINKS
Event Processing Application
Fully Managed Apache Kafka on Azure
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
8/25/2022 · 34 minutes, 29 seconds

Real-Time Event Distribution with Data Mesh

Inheriting software in the banking sector can be challenging. Perhaps the only thing harder is inheriting software built by a committee of banks. How do you keep it running, while improving it, refactoring it, and planning a bigger future for it? In this episode, Jean-Francois Garet (Technical Architect, Symphony) shares his experience at Symphony as he helps it evolve from an inherited, monolithic, single-tenant architecture to an event mesh for seamless event-streaming microservices. He talks about the journey they’ve taken so far, and the foundations they’ve laid for a modern data mesh.

Symphony is the leading markets’ infrastructure and technology platform, which provides a full communication stack (chat, voice and video meetings, file and screen sharing) for the financial industry. Jean-Francois shares that its initial system was inherited from one of the founding institutions—and features the highest level of security to ensure confidentiality of business conversations, coupled with compliance with regulations covering financial transactions. However, its stacks are monolithic and single tenant.

To modernize Symphony's architecture for real-time data, Jean-Francois and team have been exploring various approaches over the last four years. They started breaking down the monolith into microservices, and also made a move towards multitenancy by setting up an event mesh. However, they experienced a mix of success and failure in both attempts.

To continue the evolution of the system, while maintaining business deliveries, the team started to focus on event streaming for asynchronous communications, as well as connecting the microservices for real-time data exchange. As they had prior Apache Kafka® usage in the company, the team decided to go with managed Kafka on the cloud as their streaming platform.

The team has a set of principles in mind for the development of their event-streaming functionality:

- Isolate product domains
- Reach eventual consistency with event streaming
- Clear contracts for the event streams, for both producers and consumers
- Multiregion and global data sharing

Jean-Francois shares that data mesh is ultimately what they are hoping to achieve with their platform—to provide governance around data and make data available as a product for self service. As of now, though, their focus is achieving real-time event streams with event mesh.

EPISODE LINKS
The Definitive Guide to Building a Data Mesh with Event Streams
Data Mesh 101
What is Data Mesh? ft. Zhamak Dehghani
Data Mesh Architecture
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
8/18/2022 · 48 minutes, 59 seconds

Apache Kafka Security Best Practices

Security is a primary consideration for any system design, and Apache Kafka® is no exception. Out of the box, Kafka has relatively little security enabled. Rajini Sivaram (Principal Engineer, Confluent, and co-author of "Kafka: The Definitive Guide") discusses how Kafka has gone from a system that included no security to providing an extensible and flexible platform for any business to build a secure messaging system. She shares considerations, important best practices, and features Kafka provides to help you design a secure modern data streaming system.

In order to build a secure Kafka installation, you need to securely authenticate your users, whether you are using Kerberos (SASL/GSSAPI), SASL/PLAIN, SCRAM, or OAuth. Verifying that your users can authenticate, and non-users can’t, is a primary requirement for any connected system.

But authentication is only one part of the security story, and we also need to address other areas. Kafka added support for fine-grained access control using ACLs with a pluggable authorizer several years ago. Over time, this was extended to support prefixed ACLs to make ACLs more manageable in large organizations. Now on its second-generation authorizer, Kafka is easily extendable to support other forms of authorization, like integrating with a corporate LDAP server to provide group- or role-based access control.

Even if you’ve set up your system to use secure authentication and each user is authorized using a series of ACLs, if the data is viewable by anyone listening, how secure is your system? That's where encryption comes in. Using TLS, Kafka can encrypt your data in transit.

Security has gone from a nice-to-have to being a requirement of any modern-day system. Kafka has followed a similar path from zero security to having a flexible and extensible system that helps companies of any size pick the right security path for them. Be sure to also check out the newest Apache Kafka Security course on Confluent Developer for an in-depth explanation along with other recommendations.

EPISODE LINKS
An Introduction to Apache Kafka Security: Securing Real-Time Data Streams
Kafka Security course
Kafka: The Definitive Guide v2
Security Overview
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
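As a minimal sketch of a client locked down with the building blocks discussed above, the configuration below combines TLS for encryption in transit with SCRAM for authentication. The broker address, CA path, and credentials are placeholders; ACLs would additionally have to be granted to this principal on the broker side.

```python
# Sketch of a secured consumer config: TLS + SCRAM authentication.
# Broker, CA path, and credentials are placeholders.
from confluent_kafka import Consumer

conf = {
    "bootstrap.servers": "broker.example.com:9093",
    "security.protocol": "SASL_SSL",          # encrypt data in transit with TLS
    "ssl.ca.location": "/etc/ssl/certs/ca.pem",
    "sasl.mechanisms": "SCRAM-SHA-512",       # authenticate the client
    "sasl.username": "orders-service",
    "sasl.password": "change-me",
    "group.id": "orders-consumer",
    "auto.offset.reset": "earliest",
}

consumer = Consumer(conf)
consumer.subscribe(["orders"])   # reads succeed only if ACLs allow this principal
```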
8/11/2022 · 39 minutes, 10 seconds

What Could Go Wrong with a Kafka JDBC Connector?

Java Database Connectivity (JDBC) is the Java API used to connect to a database. The JDBC connector is one of the most popular Kafka connectors, so it's important to prevent issues with your integrations. In this episode, we'll cover how a JDBC connection works, and common issues with your database connection.

Why the Kafka JDBC connector? When it comes to streaming database events into Apache Kafka®, the JDBC connector usually represents the first choice for its flexibility and the ability to support a wide variety of databases without requiring custom code. As an experienced data analyst, Francesco Tisiot (Senior Developer Advocate, Aiven) delves into his experience of streaming Kafka data pipelines with the JDBC source connector and explains what could go wrong. He discusses alternative options available to avoid these problems, including the Debezium source connector for real-time change data capture.

The JDBC connector is a Kafka Connect plugin built on the JDBC API, which streams data between databases and Kafka. If you want to stream data from a relational database into Kafka, once per day or every two hours, the JDBC connector is a simple, batch-processing connector to use. You can tell the JDBC connector which query you'd like to execute against the database, and then the connector will take the data into Kafka.

The connector works well with out-of-the-box basic data types; however, when it comes to database-specific data types, such as geometrical columns and array columns in PostgreSQL, these aren't represented well by the JDBC connector. You might end up with no results in Kafka because the column is not within the connector's supported capabilities. Francesco shares other cases that would cause the JDBC connector to go wrong, such as:

- Infrequent snapshot times
- Out-of-order events
- Non-incremental sequences
- Hard deletes

To help avoid these problems and set up a reliable source of events for your real-time streaming pipeline, Francesco suggests other approaches, such as the Debezium source connector for real-time change data capture. The Debezium connector has enhanced metadata, timestamps of the operation, access to all logs, and provides sequence numbers for you to speak the language of a DBA.

They also talk about the governance tool which Francesco has been building, and how streaming Game of Thrones sentiment analysis with Kafka started his current role as a developer advocate.

EPISODE LINKS
Kafka Connect Deep Dive – JDBC Source Connector
JDBC Source Connector: What could go wrong?
Metadata parser
Debezium Documentation
Database Migration with Apache Kafka and Apache Kafka Connect
Watch the video version of this podcast
Francesco Tisiot’s Twitter
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more on Confluent Developer
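A hedged example of the kind of JDBC source connector discussed above, submitted to the Kafka Connect REST API from Python. It uses timestamp+incrementing mode, which is exactly where the snapshot-timing and out-of-order pitfalls described in the episode can bite; the hostnames, table, and column names are placeholders.

```python
# Sketch: register a JDBC source connector via the Connect REST API.
# Connection details, table, and columns are placeholders.
import json
import requests

connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db.example.com:5432/shop",
        "connection.user": "connect",
        "connection.password": "change-me",
        "mode": "timestamp+incrementing",       # polls by updated_at + id
        "timestamp.column.name": "updated_at",
        "incrementing.column.name": "id",
        "table.whitelist": "orders",
        "topic.prefix": "jdbc-",
        "poll.interval.ms": "60000",
    },
}

resp = requests.post("http://connect.example.com:8083/connectors",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(connector), timeout=10)
resp.raise_for_status()
```

For change data capture without the polling gaps, the episode's suggested alternative is a Debezium source connector, which reads the database's change log instead of querying tables on an interval.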
8/4/2022 · 41 minutes, 10 seconds

Apache Kafka Networking with Confluent Cloud

Setting up reliable cloud networking for your Apache Kafka® infrastructure can be complex. There are many factors to consider—cost, security, scalability, and availability. With immense experience building cloud-native Kafka solutions on Confluent Cloud, Justin Lee (Principal Solutions Engineer, Enterprise Solutions Engineering, Confluent) and Dennis Wittekind (Customer Success Technical Architect, Customer Success Engineering, Confluent) talk about the different networking options on Confluent Cloud, including AWS Transit Gateway and AWS and Azure Private Link, and discuss when and why you might choose one over the other.

In order to build a secure cloud-native Kafka network, you need to consider information security and compliance requirements. These requirements may vary depending on your industry, location, and regulatory environment. For example, in financial organizations, transaction data or personally identifiable information (PII) may not be accessible over the internet. In this case, your network architecture may require private networking, which means you have to choose between private endpoints and a peering connection between your infrastructure and your Kafka clusters in the cloud.

What are the differences between the networking solutions? Dennis and Justin talk about some of the benefits and drawbacks of different network architectures. For example, Transit Gateways offered by AWS are often a good fit for organizations with large, disparate network architectures, while Private Link is sometimes preferred for its security benefits. They also discuss the management overhead involved in administering different network architectures.

Dennis and Justin also highlight their recently launched course on Confluent Developer—the Confluent Cloud Networking course. This hands-on course covers basic networking and cloud computing concepts that will help you get a clearer picture of the configurations and collaborate with networking teams.

EPISODE LINKS
Cloud Networking course
Manage Networking
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
7/28/2022 · 37 minutes, 22 seconds

Event-Driven Systems and Agile Operations

How do the principles of chaotic, agile operations in the military apply to software development and event-driven systems? As a former Royal Marine, Ben Ford (Founder and CEO, Commando Development) is also a software developer, with many years of experience building event streaming architectures across financial services and startups. He shares principles that the military employs in chaotic conditions, as well as how these can be applied to event streaming and agile development.

According to Ben, the operational side of the military is very emergent and reactive based on situations, like real-time, event-driven systems. Having spent the last five years researching, adapting, and applying these principles to technology leadership, he identifies a parallel in these concepts and operations ranging from DevOps to organizational architecture, and even when developing data streaming applications.

One of the concepts Ben and Kris talk through is Colonel John Boyd's OODA loop, which includes four cycles:

- Observe: the observation of the incoming events and information
- Orient: the orientation stage involves reflecting on the events and how they apply to your current situation
- Decide: the decision on the expected path to take, then testing and identifying the potential outcomes
- Act: the action based on the decision, which also involves testing and generating further observations

This feedback-loop concept helps to put events in context and quickly make the most appropriate decision, while understanding that changes can be made as more data becomes available.

Ben and Kris also chat through their experience of building an event system together during the early days before the release of Apache Kafka®, and more.

EPISODE LINKS
Building Real-Time Data Systems the Hard Way
Mission Ctrl
Mission Command: The Doctrine of Empowerment
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
7/21/2022 · 53 minutes, 22 seconds

Streaming Analytics and Real-Time Signal Processing with Apache Kafka

Imagine you can process and analyze real-time event streams for intelligence to mitigate cyber threats or keep soldiers constantly alerted to risks and precautions they should take based on events. In this episode, Jeffrey Needham (Senior Solutions Engineer, Advanced Technology Group, Confluent) shares use cases on how Apache Kafka® can be used for real-time signal processing to mitigate risk before it arises. He also explains the classic Kafka transactional processing defaults and the distinction between transactional and analytic processing.

Jeffrey is part of the Customer Solutions and Innovations Division (CSID), which involves designing event streaming platforms and innovations to improve productivity for organizations by pushing the envelope of Kafka for real-time signal processing.

What is signal intelligence? Jeffrey explains that it's not always affiliated with the military. Signal processing improves your operational or situational awareness by understanding the petabyte datasets of clickstream data, or the telemetry coming in from sensors, which could be a satellite or sensor arrays along a water pipeline. That is, bringing in event data from external sources to analyze, and then finding the pattern in the series of events to make informed decisions.

Conventional online analytical processing (OLAP) or data warehouse platforms evolved out of the transaction processing model. However, when analytics or even AI processing is applied to any data set, these algorithms never look at a single column or row, but look for patterns within millions of rows of transactionally derived data. Transaction-centric solutions are designed to update and delete specific rows and columns in an "ACID" compliant manner, which makes them inefficient and usually unaffordable at scale, because this capability is less critical when the analytic goal is to look for a pattern within millions or even billions of these rows.

Kafka was designed as a step forward from classic transaction processing technologies, and it can also be configured in a way that's optimized for signal processing of high velocities of noisy or jittery data streams, in order to make sense, in real time, of a dynamic, non-transactional environment.

With its immutable, append-only commit logs, Kafka functions as a flight data recorder, which remains resilient even when network communications, or COMMs, are poor or nonexistent. Jeffrey shares the disconnected-edge project he has been working on—"smart soldier," which runs Kafka on a Raspberry Pi and x64-based handhelds. These devices are ergonomically integrated on each squad member to provide real-time visibility into the soldiers' activities or situations. COMMs permitting, the topic data is then mirrored upstream and aggregated at multiple tiers—mobile command post, battalion, HQ—to provide ever-increasing views of the entire battlefield, or whatever the sensor array is monitoring, including the all-important supply chain.

Jeffrey also shares a couple of other use cases on how Kafka can be used for signal intelligence, including cybersecurity and protecting national critical infrastructure.

EPISODE LINKS
Using Kafka for Analytic Processing
Watch the video version of this podcast
Streaming Audio Playlist
Learn more on Confluent Developer
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
7/14/2022 · 1 hour, 6 minutes, 33 seconds

Blockchain Data Integration with Apache Kafka

How is Apache Kafka® relevant to blockchain technology and cryptocurrency? Fotios Filacouris (Staff Solutions Engineer, Confluent) has been working with Kafka for close to five years, primarily designing architectural solutions for financial services, and he also has expertise in blockchain. In this episode, he joins Kris to discuss how blockchain and Kafka are complementary, and he also highlights some of the use cases he has seen emerging that use Kafka in conjunction with traditional, distributed ledger technology (DLT) as well as blockchain technologies. According to Fotios, Kafka and the notion of blockchain share many traits, such as immutability, replication, distribution, and the decoupling of applications. This complementary relationship means that they can function well together if you are looking to extend the functionality of a given DLT through sidechain or off-chain activities, such as analytics, integrations with traditional enterprise systems, or even the integration of certain chains and ledgers. Based on Fotios’ observations, Kafka has become an essential piece of the puzzle in many blockchain-related use cases, including settlement, logging, analytics and risk, and volatility calculations. For example, a bitcoin trading application may use Kafka Streams to provide analytics on top of the price action of various crypto assets. Fotios has also seen use cases where a crypto platform leverages Kafka as its infrastructure layer for real-time logging and analytics. EPISODE LINKSModernizing Banking Architectures with Apache KafkaNew Kids On the BloqWatch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
7/7/2022 · 50 minutes, 59 seconds

Automating Multi-Cloud Apache Kafka Cluster Rollouts

To ensure safe and efficient deployment of Apache Kafka® clusters across multiple cloud providers, Confluent rolled out a large-scale cluster management solution. Rashmi Prabhu (Staff Software Engineer & Eng Manager, Fleet Management Platform, Confluent) and her team have been building the Fleet Management Platform for Confluent Cloud. In this episode, she delves into what Fleet Management is, and how the cluster management service streamlines Kafka operations in the cloud while providing a seamless developer experience. When it comes to performing operations at large scale on the cloud, manual processes work well if the scenario involves only a handful of clusters. However, as a business grows, a cloud footprint may potentially scale 10x, and will require upgrades to a significantly larger cluster fleet. Additionally, the process should be automated, in order to accelerate feature releases while ensuring safe and mature operations. Fleet Management lets you manage and automate software rollouts and relevant cloud operations within the Kafka ecosystem at scale—including cloud-native Kafka, ksqlDB, Kafka Connect, Schema Registry, and other cloud-native microservices. The automation service can consistently operate applications across multiple teams, and can also manage Kubernetes infrastructure at scale. The existing Fleet Management stack can successfully handle thousands of concurrent upgrades in the Confluent ecosystem. When building out the Fleet Management Platform, Rashmi and the team kept these key considerations in mind: Rollout Controls and DevX: Wide deployment and distribution of changes across the fleet of target assets; improved developer experience for ease of use, with rollout strategy support, deployment policies, a dynamic control workflow, and manual approval support on an as-needed basis. Safety: Built-in features where security and safety of the fleet are the priority, with access control and audits on operations. There is active monitoring and paced rollouts, as well as automated pauses and resumes to reduce the time to react upon failure. There's also an error threshold, and controls to allow a healthy balance of risk vs. pace. Visibility: A close-to-real-time, wide-angle view of the fleet state, along with insights into workflow progress, historical operations on the clusters, live notification on workflows, drift detection across assets, and so much more.EPISODE LINKSOptimize Fleet ManagementSoftware Engineer - Fleet Management Watch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
6/30/2022 · 48 minutes, 29 seconds

Common Apache Kafka Mistakes to Avoid

What are some of the common mistakes that you have seen with Apache Kafka® record production and consumption? Nikoleta Verbeck (Principal Solutions Architect at Professional Services, Confluent) has a role that specifically tasks her with performance tuning as well as troubleshooting Kafka installations of all kinds. Based on her field experience, she put together a comprehensive list of common issues with recommendations for building, maintaining, and improving Kafka systems that are applicable across use cases. Kris and Nikoleta begin by discussing the fact that it is common for those migrating to Kafka from other message brokers to implement too many producers, rather than one per service. Kafka is thread-safe, and one producer instance can talk to multiple topics, unlike with traditional message brokers, where you tend to use a client per topic. Monitoring is an unabashed good in any Kafka system. Nikoleta notes that it is better to monitor from the start of your installation as thoroughly as possible, even if you don't think you ultimately will require so much detail, because it will pay off in the long run. A major advantage of monitoring is that it lets you predict your potential resource growth in a more orderly fashion, as well as helps you to use your current resources more efficiently. Nikoleta mentions the many dashboards that have been built out by her team to accommodate leading monitoring platforms such as Prometheus, Grafana, New Relic, Datadog, and Splunk. They also discuss a number of useful elements that are optional in Kafka, so people tend to be unaware of them. Compression is the first of these, and Nikoleta absolutely recommends that you enable it. Another is producer callbacks, which you can use to catch exceptions. A third is setting a `ConsumerRebalanceListener`, which notifies you about rebalancing events, letting you prepare for any issues that may result from them. Other topics covered in the episode are batching and the `linger.ms` Kafka producer setting, how to figure out your units of scale, and the metrics tool Trogdor.EPISODE LINKS5 Common Pitfalls when Using Apache KafkaKafka Internals courselinger.ms producer configs.Fault Injection—TrogdorFrom Apache Kafka to Performance in Confluent CloudKafka CompressionInterface ConsumerRebalanceListenerWatch the video version of this podcastNikoleta Verbeck’s TwitterKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more on Confluent DeveloperUse PODCAST100 to get $100 of free Confluent Cloud usage (details)
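For a rough idea of what two of those optional features look like in code, here is a minimal Java sketch of a single shared producer that enables compression, batches via `linger.ms`, and uses a send callback to surface exceptions. The broker address, topic name, and tuning values are illustrative assumptions, not settings recommended in the episode:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {
    // One thread-safe producer shared across the whole service, rather than one per topic.
    private static final Producer<String, String> PRODUCER = createProducer();

    private static Producer<String, String> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // enable compression
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");            // wait up to 20 ms to build larger batches
        return new KafkaProducer<>(props);
    }

    public static void publish(String key, String value) {
        // The callback surfaces send failures that would otherwise go unnoticed.
        PRODUCER.send(new ProducerRecord<>("orders", key, value), (metadata, exception) -> {
            if (exception != null) {
                System.err.println("Failed to produce record with key " + key + ": " + exception);
            }
        });
    }
}
```

Because the Kafka producer is thread-safe, the single `PRODUCER` instance can be shared by every part of the service, which is the pattern Nikoleta recommends over one producer per topic.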
6/23/2022 · 1 hour, 9 minutes, 43 seconds

Tips For Writing Abstracts and Speaking at Conferences

A well-written abstract is your ticket to conferences, but how do you write an excellent synopsis that will get accepted? As an experienced conference speaker, Robin Moffatt (Principal Developer Advocate, Confluent) often writes presentations that help the developer community to understand Apache Kafka® and its ecosystem. He is also the Program Committee Chair for Kafka Summit and Current 2022: The Next Generation of Kafka Summit. Having seen hundreds of conference submissions, Robin shares best practices for crafting abstracts that stand out, as well as tips for speaking at conferences. So you want to answer the call for papers? Before writing your abstract, Robin and Kris recommend identifying a topic that you are enthusiastic about, or a topic that can be useful to others. Oftentimes, attendees go to conferences to learn about a given technology, which they may not have extensive knowledge of yet—so a fundamental topic is a good basis for a conference talk.  Once you’ve identified the topic you are interested in, there are key components to an effective write up:Title: Come up with an enticing title that lets the conference organizers and audiences understand the content at a glance. There is a chance that a great topic could be rejected due to a poor title.Abstract: Summarize the topic you plan to talk about in the proper format and length. Usually, a polished abstract has three short paragraphs consisting of approximately 200 words.It’s essential to spend quality time writing and refining your abstract, while keeping two audience groups in mind—the program committee and the conference attendees. Robin shares that when reviewing submissions, the program committees have a few standards in mind, such as if the topic fits into the overall conference theme, and whether attendees would be interested in the talk. Then if the abstract is accepted, the attendees themselves will decide if they’ll attend a particular session based on the agenda and the brief. Robin and Kris also discuss why you should submit to a conference in the first place and also give tips for preparing your talk once you are accepted. If you are a new speaker or just someone interested in getting feedback on your abstract, Robin and the conference committees for Current 2022: The Next Generation of Kafka Summit will be hosting office hours to provide feedback.EPISODE LINKSCurrent 2022: How to Become a SpeakerHow to Win at the Conference Abstract Submission GameCollection: How to Write a Good Conference AbstractPreparing a New TalkSo How Do you Make Those Cool Diagrams?Syntax Highlighting Code For Presentation Slides Watch Video VersionTwitter: Robin Moffatt | Kris JenkinsJoin the Confluent CommunityUse PODCAST100 to get $100 of Confluent Cloud usage (details)
6/16/2022 · 48 minutes, 56 seconds

How I Became a Developer Advocate

What is a developer advocate and how do you become one? In this episode, seasoned developer advocates Kris Jenkins (Senior Developer Advocate, Confluent) and Danica Fine (Senior Developer Advocate, Confluent) answer the question by diving into how they got into the world of developer relations, what they enjoyed the most about their roles, and how you can become one. Developer advocacy is at the heart of a developer community—helping developers and software engineers to get the most out of a given technology by providing support in the form of blog posts, podcasts, conference talks, video tutorials, meetups, and other mediums. Before stepping into the world of developer relations, both Danica and Kris were hands-on developers. Beyond his professional work, Kris also devoted personal time to supporting fellow developers, such as by running local meetups, writing blogs, and organizing hackathons. Danica found her calling after learning more about Apache Kafka® and successfully implementing a mission-critical application for a financial services company—transforming 2,000 lines of code into Kafka Streams. She enjoys building and sharing her knowledge with the community to make technology as accessible and as fun as possible. Additionally, the duo previews their developer advocacy trip to Singapore and Australia in mid-June, where they will attend local conferences and host in-person meetups on Kafka and event streaming. EPISODE LINKSIn-person meetup: Singapore | Sydney | MelbourneCoding in Motion: Building a Data Streaming App with JavaScript Practical Data Pipeline: Build a Plant Monitoring System with ksqlDBHow to Build a Strong Developer Community ft. Robin Moffatt and Ale MurrayDesigning Event-Driven SystemsWatch the video version of this podcastDanica Fine’s TwitterKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
6/9/2022 · 29 minutes, 48 seconds

Data Mesh Architecture: A Modern Distributed Data Model

Data mesh isn’t software you can download and install, so how do you build a data mesh? In this episode, Adam Bellemare (Staff Technologist, Office of the CTO, Confluent) discusses his data mesh proof of concept and how it can help you conceptualize the ways in which implementing a data mesh could benefit your organization.Adam begins by noting that while data mesh is a type of modern data architecture, it is only partially a technical issue. For instance, it encompasses the best way to enable various data sets to be stored and made accessible to other teams in a distributed organization. Equally, it’s also a social issue—getting the various teams in an organization to commit to publishing high-quality versions of their data and making them widely available to everyone else. Adam explains that the four data mesh concepts themselves provide the language needed to start discussing the necessary social transitions that must take place within a company to bring about a better, more effective, and efficient data strategy.The data mesh proof of concept created by Adam's team showcases the possibilities of an event-stream based data mesh in a fully functional model. He explains that there is no widely accepted way to do data mesh, so it's necessarily opinionated. The proof of concept demonstrates what self-service data discovery looks like—you can see schemas, data owners, SLAs, and data quality for each data product. You can also model an app consuming data products, as well as publish your own data products.In addition to discussing data mesh concepts and the proof of concept, Adam also shares some experiences with organizational data he had as a staff data platform engineer at Shopify. His primary focus was getting their main ecommerce data into Apache Kafka® topics from sharded MySQL—using Kafka Connect and Debezium. He describes how he really came to appreciate the flexibility of having access to important business data within Kafka topics. This allowed people to experiment with new data combinations, letting them come up with new products, novel solutions, and different ways of looking at problems. Such data sharing and experimentation certainly lie at the heart of data mesh.Adam has been working in the data space for over a decade, with experience in big-data architecture, event-driven microservices, and streaming data platforms. He’s also the author of the book “Building Event-Driven Microservices.”EPISODE LINKSThe Definitive Guide to Building a Data Mesh with Event StreamsWhat is data mesh? Saxo Bank’s Best Practices for Distributed Domain-Driven Architecture Founded on the Data MeshWatch the video version of this podcastKris Jenkins’ TwitterJoin the Confluent CommunityLearn more with Kafka tutorials at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of Confluent Cloud usage (details)
6/2/2022 · 48 minutes, 42 seconds

Flink vs Kafka Streams/ksqlDB: Comparing Stream Processing Tools

Stream processing can be hard or easy depending on the approach you take, and the tools you choose. This sentiment is at the heart of the discussion with Matthias J. Sax (Apache Kafka® PMC member; Software Engineer, ksqlDB and Kafka Streams, Confluent) and Jeff Bean (Sr. Technical Marketing Manager, Confluent). With immense collective experience in Kafka, ksqlDB, Kafka Streams, and Apache Flink®, they delve into the types of stream processing operations and explain the different ways of solving for their respective issues. The stream processing tools they consider are Flink, along with the options from the Kafka ecosystem: Java-based Kafka Streams and its SQL-wrapped variant, ksqlDB. Flink and ksqlDB tend to be used by divergent types of teams, since they differ in terms of both design and philosophy. Why Use Apache Flink? The teams using Flink are often highly specialized, with deep expertise, and with an absolute focus on stream processing. They tend to be responsible for unusually large, industry-outlying amounts of both state and scale, and they usually require complex aggregations. Flink can excel in these use cases, which potentially makes the difficulty of its learning curve and implementation worthwhile. Why use ksqlDB/Kafka Streams? Conversely, teams employing ksqlDB/Kafka Streams require less expertise to get started and also less expertise and time to manage their solutions. Jeff notes that the skills of a developer may not even be needed in some cases—those of a data analyst may suffice. ksqlDB and Kafka Streams seamlessly integrate with Kafka itself, as well as with external systems through the use of Kafka Connect. In addition to being easy to adopt, ksqlDB is also deployed on production stream processing applications requiring large scale and state. There are also other considerations beyond the strictly architectural. Local support availability, the administrative overhead of using a library versus a separate framework, and the availability of stream processing as a fully managed service all matter. Choosing a stream processing tool is a fraught decision partially because switching between them isn't trivial: the frameworks are different, the APIs are different, and the interfaces are different. In addition to the high-level discussion, Jeff and Matthias also share lots of details you can use to understand the options, covering deployment models, transactions, batching, and parallelism, as well as a few interesting tangential topics along the way such as the tyranny of state and the Turing completeness of SQL.EPISODE LINKSThe Future of SQL: Databases Meet Stream ProcessingBuilding Real-Time Event Streams in the Cloud, On PremisesKafka Streams 101 courseksqlDB 101 courseWatch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more on Confluent DeveloperUse PODCAST100 for additional $100 of Confluent Cloud usage (details)
5/26/2022 · 55 minutes, 55 seconds

Practical Data Pipeline: Build a Plant Monitoring System with ksqlDB

Apache Kafka® isn’t just for day jobs, according to Danica Fine (Senior Developer Advocate, Confluent). It can be used to make life easier at home, too! Building out a practical Apache Kafka® data pipeline is not always complicated—it can be simple and fun. For Danica, the idea of building a Kafka-based data pipeline sprouted with the need to monitor the water level of her plants at home. In this episode, she explains the architecture of her hardware-oriented project and discusses how she integrates, processes, and enriches data using ksqlDB and Kafka Connect, a Raspberry Pi running Confluent's Python client, and a Telegram bot. Apart from the script on the Raspberry Pi, the entire project was coded within Confluent Cloud. Danica's model Kafka pipeline begins with moisture sensors in her plants streaming data that is requested by an endless for-loop in a Python script on her Raspberry Pi. The Pi in turn connects to Kafka on Confluent Cloud, where the plant data is sent serialized as Avro. She carefully modeled her data, sending an ID along with a timestamp, a temperature reading, and a moisture reading. On Confluent Cloud, Danica enriches the streaming plant data, which enters as a ksqlDB stream, with metadata such as moisture threshold levels, which is stored in a ksqlDB table. She windows the streaming data into 12-hour segments in order to avoid constant alerts when a threshold has been crossed. Alerts are sent at the end of the 12-hour period if a threshold has been crossed for a consistent time period within it (one hour, for example). These are sent to the Telegram API using Confluent Cloud's HTTP Sink Connector, which pings her phone when a plant's moisture level is too low. Potential future project improvements include visualizations, adding another Telegram bot to register metadata for new plants, adding machine learning to anticipate watering needs, and potentially closing the loop by pushing data back to the Raspberry Pi, which could power a visual indicator on the plants themselves. EPISODE LINKSApache Kafka at Home: A Houseplant Alerting System with ksqlDBGitHub: raspberrypi-houseplantsData Pipelines 101Tips for Streaming Data Pipelines ft. Danica FineMotion in Motion: Building an End-to-End Motion Detection and Alerting System with Apache Kafka and ksqlDBWatch the video version of this podcastDanica Fine's TwitterKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more on Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
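The episode's pipeline is built with ksqlDB and a Python script, but the 12-hour windowing idea can be sketched in Kafka Streams terms as well. The following Java snippet is only an analogy under assumed topic names and a simplified value format (a plain "true"/"false" flag rather than the Avro records Danica uses):

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class HouseplantAlerts {
    // Counts, per plant and per 12-hour window, how often a reading dipped below its moisture threshold.
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder
            // Hypothetical topic of enriched readings: key = plant ID, value = "true" if moisture is below threshold.
            .stream("houseplant-readings-enriched", Consumed.with(Serdes.String(), Serdes.String()))
            .filter((plantId, belowThreshold) -> "true".equals(belowThreshold))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(12)))
            .count()
            .toStream()
            // A sink connector (the episode uses an HTTP sink to Telegram) could consume this alert topic.
            .map((windowedPlantId, count) -> KeyValue.pair(windowedPlantId.key(), count.toString()))
            .to("low-moisture-alerts", Produced.with(Serdes.String(), Serdes.String()));
        return builder;
    }
}
```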
5/19/2022 · 33 minutes, 56 seconds

Apache Kafka 3.2 - New Features & Improvements

Apache Kafka® 3.2 delivers new KIPs in three different areas of the Kafka ecosystem: Kafka Core, Kafka Streams, and Kafka Connect. On behalf of the Kafka community, Danica Fine (Senior Developer Advocate, Confluent) shares release highlights. More than half of the KIPs in the new release concern Kafka Core. KIP-704 addresses unclean leader elections by allowing for further communication between the controller and the brokers. KIP-764 takes on the problem of a large number of client connections in a short period of time during preferred leader election by adding the configuration `socket.listen.backlog.size`. KIP-784 adds an error code field to the response of the `DescribeLogDirs` API, and KIP-788 improves network traffic handling by allowing you to set the pool size of network threads individually per listener on Kafka brokers. Finally, in anticipation of the imminent KRaft protocol, KIP-801 introduces a built-in `StandardAuthorizer` that doesn't depend on ZooKeeper. There are five KIPs related to Kafka Streams in the AK 3.2 release. KIP-708 brings rack-aware standby assignment by tag, which improves fault tolerance. Then there are three projects related to Interactive Queries v2: KIP-796 specifies an improved interface for Interactive Queries; KIP-805 allows state to be queried over a specific range; and KIP-806 adds two implementations of the Query interface, `WindowKeyQuery` and `WindowRangeQuery`. The final Kafka Streams project, KIP-791, enhances `StateStoreContext` with `recordMetadata`, which may be accessed from state stores. Additionally, this Kafka release introduces Kafka Connect-related improvements, including KIP-769, which extends the `/connector-plugins` API, letting you list all available plugins, and not just connectors as before. KIP-779 lets `SourceTasks` handle producer exceptions according to `error.tolerance`, rather than instantly killing the entire connector by default. Finally, KIP-808 lets you specify precisions with respect to the TimestampConverter single message transform. Tune in to learn more about the Apache Kafka 3.2 release!EPISODE LINKSApache Kafka 3.2 release notes Read the blog to learn moreDownload Apache Kafka 3.2.0Watch the video version of this podcast
5/17/2022 · 6 minutes, 54 seconds

Scaling Apache Kafka Clusters on Confluent Cloud ft. Ajit Yagaty and Aashish Kohli

How much can Apache Kafka® scale horizontally, and how can you automatically balance or rebalance data to ensure optimal performance? You may require the flexibility to scale or shrink your Kafka clusters based on demand. With experience engineering cluster elasticity and capacity management features for cloud-native Kafka, Ajit Yagaty (Confluent Cloud Control Plane Engineering) and Aashish Kohli (Confluent Cloud Product Management) join Kris Jenkins in this episode to explain how the architecture of Confluent Cloud supports elasticity. Kris suggests that optimal elasticity is like water from a faucet—you should be able to quickly obtain as many resources as you need, but at the same time you don't want the slightest amount to go to waste. But how do you specify the amount of capacity by which to adjust, and how do you know when it's necessary? Aashish begins by explaining how elasticity on Confluent Cloud has come a long way since the early days of scaling via support tickets. It's now self-serve and can be accomplished by dialing up or down a desired number of CKUs, or Confluent Units of Kafka. A CKU corresponds to a specific amount of Kafka resources and has been made to be consistent across all three major clouds. You can specify the number of CKUs you need via API, CLI or Confluent Cloud UI. Ajit explains in detail how, once your request has been made, cluster resizing is a two-step process. First, capacity is added, and then your data is rebalanced. Rebalancing data on the cluster is critical to ensuring that optimal performance is derived from the available capacity. The amount of time it takes to resize a Kafka cluster depends on the number of CKUs being added or removed, as well as the amount of data to be rebalanced. Of course, to request more or fewer CKUs in the first place, you have to know when it's necessary for your Kafka cluster(s). This can be challenging as clusters emit a large variety of metrics. Fortunately, there is a single composite metric that you can monitor to help you decide, as Ajit explains in the episode. Other topics covered by the trio include an in-depth explanation of how Confluent Cloud achieves elasticity under the hood (separate control and data planes, along with some Kafka dogfooding), future plans for autoscaling elasticity, scenarios where elasticity is critical, and much more.EPISODE LINKSHow to Elastically Scale Apache Kafka Clusters on Confluent CloudShrink a Dedicated Kafka Cluster in Confluent CloudElastic Apache Kafka Clusters in Confluent CloudWatch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
5/11/2022 · 49 minutes, 7 seconds

Streaming Analytics on 50M Events Per Day with Confluent Cloud at Picnic

What are useful practices for migrating a system to Apache Kafka® and Confluent Cloud, and why use Confluent to modernize your architecture? Dima Kalashnikov (Technical Lead, Picnic Technologies) is part of a small analytics platform team at Picnic, an online-only, European grocery store that processes around 45 million customer events and five million internal events daily. An underlying goal at Picnic is to try and make decisions as data-driven as possible, so Dima's team collects events on all aspects of the company—from new stock arriving at the warehouse, to customer behavior on their websites, to statistics related to delivery trucks. Data is sent to internal systems and to a data warehouse. Picnic recently migrated from their existing solution to Confluent Cloud for several reasons: Ecosystem and community: Picnic liked the tooling present in the Kafka ecosystem, since being a small team means they aren't able to devote extra time to building boilerplate-type code such as connectors for their data sources or functionality for extensive monitoring capabilities. Picnic also has analysts who use SQL, so they appreciated the processing capabilities of ksqlDB. Finally, they found that help isn't hard to locate if one gets stuck. Monitoring: They wanted better monitoring; specifically, they found it challenging to measure SLAs with their former system, as they couldn't easily detect the positions of consumers in their streams. Scaling and data retention times: Picnic is growing, so they needed to scale horizontally without having to worry about manual reassignment. They also hit a wall with their previous streaming solution with respect to the length of time they could save data, which is a serious issue for a company that makes data-first decisions. Cloud: Another factor of being a small team is that they don't have resources for extensive maintenance of their tooling. Dima's team was extremely careful and took their time with the migration. They ran a pilot system simultaneously with the old system, in order to make sure it could achieve their fundamental performance goals: complete stability, zero data loss, and no performance degradation. They also wanted to check its costs. The pilot was successful, and they actually have a second, IoT pilot in the works that uses Confluent Cloud and Debezium to track the robotics data emanating from their automatic fulfillment center. And it's a lot of data: Dima mentions that the robots in the center generate data sets as large as their customer event streams. EPISODE LINKSPicnic Analytics Platform: Migration from AWS Kinesis to Confluent CloudPicnic Modernizes Data Architecture with ConfluentData Engineer: Event Streaming PlatformWatch this podcast in videoKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka resources on Confluent DeveloperLive demo: Event-Driven Microservices with ConfluentUse PODCAST100 to get $100 of free Confluent Cloud usageBuilding Data Streaming App | Coding In Motion
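One of the monitoring pain points mentioned, not being able to easily detect the positions of consumers in their streams, is something Kafka's Admin API can help probe. Here is a minimal, hypothetical Java sketch that compares a group's committed offsets with the latest log-end offsets to estimate lag; the bootstrap address and group ID are placeholders, and this is not Picnic's actual tooling:

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagReport {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical

        try (Admin admin = Admin.create(props)) {
            // Where the consumer group currently is, per partition.
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("analytics-pipeline") // hypothetical group ID
                     .partitionsToOffsetAndMetadata().get();

            // Where the log currently ends, per partition.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> endOffsets =
                admin.listOffsets(latestSpec).all().get();

            // Lag = end of log minus committed position.
            committed.forEach((tp, offset) -> {
                long lag = endOffsets.get(tp).offset() - offset.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```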
5/5/2022 · 34 minutes, 41 seconds

Build a Data Streaming App with Apache Kafka and JS - Coding in Motion

Coding is inherently enjoyable and experimental. With the goal of bringing fun into programming, Kris Jenkins (Senior Developer Advocate, Confluent) hosts a new series of hands-on workshops, Coding in Motion, to teach you how to use Apache Kafka® and data streaming technologies for real-life use cases. In the first episode, Sound & Vision, Kris walks you through the end-to-end process of building a real-time, full-stack data streaming application from scratch using Kafka and JavaScript/TypeScript. During the workshop, you'll learn to stream musical MIDI data into fully-managed Kafka using Confluent Cloud, then process and transform the raw data stream using ksqlDB. Finally, the enriched data streams will be pushed to a web server to display data in a 3D graphical visualization. Listen as Kris previews the first episode of Coding in Motion: Sound & Vision, and join him in the workshop premiere to learn more. EPISODE LINKSCoding in Motion Workshop: Build a Streaming App for Sound & VisionWatch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
5/3/2022 · 2 minutes, 3 seconds

Optimizing Apache Kafka's Internals with Its Co-Creator Jun Rao

You already know Apache Kafka® is a distributed event streaming system for setting your data in motion, but how does its internal architecture work? No one can explain Kafka’s internal architecture better than Jun Rao, one of its original creators and Co-Founder of Confluent. Jun has an in-depth understanding of Kafka that few others can claim—and he shares that with us in this episode, and in his new Kafka Internals course on Confluent Developer. One of Jun's goals in publishing the Kafka Internals course was to cover the evolution of Kafka since its initial launch. In line with that goal, he discusses the history of Kafka development, including the original thinking behind some of its design decisions, as well as how its features have been improved to better meet its key goals of durability, scalability, and real-time data. With respect to its initial design, Jun relates how Kafka was conceived from the ground up as a distributed system, with compute and storage always maintained as separate entities, so that they could scale independently. Additionally, he shares that Kafka was deliberately made for high throughput, since many of the popular messaging systems at the time of its invention were single node, but his team needed to process large volumes of non-transactional data, such as application metrics, various logs, click streams, and IoT information. As regards the evolution of its features, Jun explains these two topics, among others, at great length: Consumer rebalancing protocol: The original "stop the world" approach to Kafka's consumer rebalancing, although revolutionary at the time of its launch, was eventually improved upon to take a more incremental approach. Cluster metadata: Moving from the external ZooKeeper to the built-in KRaft protocol allows for better scaling by a factor of ten, according to Jun, and it also means you only need to worry about running a single binary. The Kafka Internals course consists of eleven concise modules, each dense with detail—covering Kafka fundamentals in technical depth. The course also pairs with four hands-on exercise modules led by Senior Developer Advocate Danica Fine. EPISODE LINKSKafka Internals courseHow Apache Kafka Works: An Introduction to Kafka’s InternalsCoding in Motion Workshop: Build a Streaming AppWatch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
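To make the rebalancing discussion concrete, here is a small, hypothetical Java consumer that opts in to the incremental cooperative rebalancing protocol Jun describes, by configuring the `CooperativeStickyAssignor` instead of relying on the older eager, stop-the-world strategies. Topic, group ID, and broker address are placeholder values:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CooperativeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "metrics-readers");          // hypothetical
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Opt in to incremental cooperative rebalancing: only reassigned partitions are revoked,
        // rather than every partition in the group being revoked and reassigned at once.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("app-metrics")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```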
4/28/2022 · 48 minutes, 54 seconds

Using Event-Driven Design with Apache Kafka Streaming Applications ft. Bobby Calderwood

What is event modeling and how does it differ from standard data modeling? In this episode of Streaming Audio, Bobby Calderwood, founder of Evident Systems and creator of oNote, observes that at the dawn of the computer age, because memory and computing power were expensive, people began to move away from time-and-narrative-oriented record-keeping systems (in the manner of a ship's log or a financial ledger) to systems based on aggregation. Such data-model systems, still dominant today, only retain the current state generated from their inputs, with the inputs themselves being lost. A converse approach to the reductive data-model system is the event-model system, which is enabled by tools like Apache Kafka®, and which effectively saves every bit of activity that the system generates. The event model actually marks a return, in a sense, to the earlier, narrative-like recording methods. To further illustrate, Bobby uses a chess example to show the distinction between the data model and the event model. In a chess context, the event modeling system would retain each move in the game from beginning to end, such that any moment in the game could be derived by replaying the sequence of moves. Conversely, chess based on the data model would save only the current state of the game, destructively mutating the data structure to reflect it. The event model maintains an immutable log of all of a system's activity, which means that teams downstream from the transactions team have access to all of the system's data, not just the end transactions, and they can analyze the data as they wish in order to make their own conclusions. Thus there can be several read models over the same body of events. Bobby has found that non-programming stakeholding teams tend to intuitively comprehend the event model better than other data paradigms, given its natural narrative form. Transitioning from the data model to the event model, however, can be challenging. Bobby's event modeling platform, oNote, aims to help by providing a digital canvas that allows a system to be visually redesigned according to the event model. oNote generates Avro schemas based on its models, and also uses Avro to generate runtime code.EPISODE LINKSEvent Sourcing and Event Storage with Apache KafkaoNoteEvent ModelingToward a Functional Programming Analogy for MicroservicesEvent-Driven Architecture - Common Mistakes and Valuable Lessons ft. Simon AuburyWatch the video version of this podcastCoding in Motion Workshop: Build a Streaming AppKris Jenkins’ TwitterJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
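A toy Java sketch of Bobby's chess example may help: the game is stored as an append-only log of moves (the event model), and the current board is just one read model derived by replaying that log, rather than a single mutable record (the data model). All class and field names here are illustrative only:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChessEventModel {
    // An immutable fact: one move, recorded in the order it happened.
    record MovePlayed(String piece, String from, String to) {}

    // The append-only log of everything that happened in the game (what a Kafka topic would hold).
    private final List<MovePlayed> log = new ArrayList<>();

    public void play(String piece, String from, String to) {
        log.add(new MovePlayed(piece, from, to)); // append, never mutate or delete
    }

    // A read model: the current board is derived by replaying the log from the beginning.
    public Map<String, String> currentBoard() {
        Map<String, String> squares = new HashMap<>();
        for (MovePlayed move : log) {
            squares.remove(move.from());
            squares.put(move.to(), move.piece());
        }
        return squares;
    }

    public static void main(String[] args) {
        ChessEventModel game = new ChessEventModel();
        game.play("white-pawn", "e2", "e4");
        game.play("black-pawn", "e7", "e5");
        // Any number of other read models (move counts, opening analysis, ...) can replay the same log.
        System.out.println(game.currentBoard()); // prints the derived board, e.g. {e4=white-pawn, e5=black-pawn}
    }
}
```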
4/21/2022 · 51 minutes, 9 seconds

Monitoring Extreme-Scale Apache Kafka Using eBPF at New Relic

New Relic runs one of the larger Apache Kafka® installations in the world, ingesting circa 125 petabytes a month, or approximately three billion data points per minute. Anton Rodriguez is the architect of the system, responsible for hundreds of clusters and thousands of clients, some of them implemented in non-standard technologies. In addition to the large volume of servers, he works with many teams, which must all work together when issues arise. Monitoring New Relic's large Kafka installation is critical and of course challenging, even for a company that itself specializes in monitoring. Specific obstacles include determining when rebalances are happening, identifying particularly old consumers, measuring consumer lag, and finding a way to observe all producing and consuming applications. One way that New Relic has improved the monitoring of its architecture is by directly consuming metrics from the Linux kernel using its new eBPF technology, which lets programs run inside the kernel without changing source code or loading additional modules (the open-source tool Pixie enables access to eBPF in a Kafka context). eBPF is very low impact, so it doesn't affect services, and it allows New Relic to see what's happening at the network level—and to take action as necessary.EPISODE LINKSMonitoring Kafka Without Instrumentation Using eBPFWhat Is eBPF and Why Does It Matter for Observability?Kafka MonitoringKafka Summit: Monitoring Kafka Without Instrumentation Using eBPFWatch the video version of this podcastKris Jenkins’ TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
4/13/2022 · 38 minutes, 25 seconds

Confluent Platform 7.1: New Features + Updates

Confluent Platform 7.1 expands upon its already innovative features, adding improvements in key areas that benefit data consistency, allow for increased speed and scale, and enhance resilience and reliability. Previously, the Confluent Platform 7.0 release introduced Cluster Linking, which enables you to bridge on-premises and cloud clusters, among other configurations. Maintaining data quality standards across multiple environments can be challenging, though. To assist with this problem, CP 7.1 adds Schema Linking, which lets you share consistent schemas across your clusters—synced in real time. Confluent for Kubernetes lets you build your own private-cloud Apache Kafka® service. Now you can enhance the global resilience of your architecture by deploying to multiple regions. With the new release, you can also configure custom volumes attached to Confluent deployments, and you can declaratively define and manage the new Schema Links. As of this release, Confluent for Kubernetes now supports the full feature set of the Confluent Platform. Tiered Storage was released in Confluent Platform 6.0, and it offers immense benefits for a cluster by allowing the offloading of older topic data out of the broker and into slower, long-term object storage. The reduced amount of local data makes maintenance, scaling out, recovery from failure, and adding brokers all much quicker. CP 7.1 adds compatibility for object storage using Nutanix, NetApp, MinIO, and Dell, integrations that have been put through rigorous performance and quality testing. Health+, introduced in CP 6.2, offers intelligent cloud-based alerting and monitoring tools in a dashboard. New as of CP 7.1, you can choose to be alerted when anomalies in broker latency are detected, when there is an issue with your connectors linking Kafka and external systems, as well as when a ksqlDB query will interfere with a continuous, real-time processing stream. Shipping with CP 7.1 is ksqlDB 0.23, which adds support for pull queries against streams as opposed to only against tables—a milestone development that greatly helps when debugging since a subset of messages within a topic can now be inspected. ksqlDB 0.23 also supports custom schema selection, which lets you choose a specific schema ID when you create a new stream or table, rather than use the latest registered schema. A number of additional smaller enhancements are also included in the release.EPISODE LINKSDownload Confluent Platform 7.1Check out the release notesRead the Confluent Platform 7.1 blog postWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get $100 of free Confluent Cloud usage (details)
4/12/2022 · 10 minutes, 1 second

Scaling an Apache Kafka Based Architecture at Therapie Clinic

Scaling Apache Kafka® can be tricky, let alone scaling a team. When he was first hired, Domenico Fioravanti of Therapie Clinic was given the challenging task of assembling a sizable tech team from scratch, while simultaneously building a scalable and decoupled architecture from the ground up. In addition, he wanted to deliver value to the company from day one. One way that Domenico ultimately accomplished these goals was by focusing on managed solutions in order to avoid large investments in engineering know-how. Another way was to deliver quickly to production by using the existing knowledge of his team.Domenico's biggest initial priority was to make a real-time reporting dashboard that collated data generated by third-party systems, such as call centers and front-of-house software solutions that managed bookings and transactions. (Before Domenico's arrival, all reporting had been done by aggregating data from different sources through an expensive, manual, error-prone, and slow process—which tended to result in late and incomplete insights.)Establishing an initial stack with AWS and a BI/analytics tool only took a month and required minimal DevOps resources, but Domenico's team ended up wanting to leverage their efforts to free up third-party data for more than just the reporting/data insights use case.So they began considering Apache Kafka® as a central repository for their data. For Kafka itself, they investigated Amazon MSK vs. Confluent, carefully weighing setup and time costs, maintenance costs, limitations, security, availability, risks, migration costs, Kafka updates frequency, observability, and errors and troubleshooting needs.Domenico's team settled on Confluent Cloud and built the following stack:AWS AppSync, a managed GraphQL layer to interact with and abstract third-party APIs (data sources)AWS Lambdas for extracting data and producing to Kafka topicsKafka topics for the raw as well as transformed dataKafka Streams for data transformationKafka Redshift sink connector for loading data​​AWS Redshift as the destination cloud data warehouse Looker for business intelligence and big data analytics This stack allowed the company's data to be consumed by multiple teams in a scalable way. Eventually, DynamoDB was added and by the end of a year, along with a scalable architecture, Domenico had successfully grown his staff to 45 members on six teams.EPISODE LINKSConfluent’s Data Streaming Platform Can Save Over $2.5M vs. Self-Managing Apache KafkaAccelerate Your Cloud Data Warehouse Migration and Modernization with ConfluentWatch the video version of this podcastKris Jenkins' TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)  
4/7/2022 · 1 hour, 10 minutes, 56 seconds

Bridging Frontend and Backend with GraphQL and Apache Kafka ft. Gerard Klijs

What is GraphQL? And how can you combine GraphQL with Apache Kafka® to query data in real time? With over 10 years of experience as a backend engineer, Gerard Klijs is a Confluent Community Catalyst, a contributor to several GraphQL libraries, and also a creator and maintainer of a Rust library to use Confluent Schema Registry with Java client. In this episode, he explains why you want to use Kafka with GraphQL and how they work together to bridge the gap between backend and frontend to make data more easily accessible in the frontend. As an alternative to REST, GraphQL is an open source query language developed by Meta, which lets you pull data from multiple data sources via a single API call. GraphQL lets you migrate and deprecate data easily. For example, if you have a `name` field, which you later decide to replace with `firstName` and `lastName`, you can group the field names together and monitor the server for query requests. If there are no additional query requests for the deprecated field, then it can be removed from the server. Usually, GraphQL is used in the frontend with a server implemented in Node.js, while Kafka is often used as an integration layer between backend components. When it comes to connecting Kafka with GraphQL, the use cases might not seem as vast at first glance, but Gerard thinks that this is due to unfamiliarity and misconceptions about how the two can work together. For example, some may think Kafka is merely a message bus and GraphQL is for graph databases. Gerard also talks about the backend for frontend (BFF) pattern as well as tips on working with GraphQL. EPISODE LINKSGetting Started with GraphQL and Apache KafkaKafka and GraphQL: Misconceptions and ConnectionsGerard Klijs GithubWatch the video version of this podcastKris Jenkins TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
3/29/2022 · 23 minutes, 13 seconds

Building Real-Time Data Governance at Scale with Apache Kafka ft. Tushar Thole

Data availability, usability, integrity, and security are words that we hear a lot. But what do they actually look like when put into practice? That’s where data governance comes in. This becomes especially tricky when working with real-time data architectures. Tushar Thole (Senior Manager, Engineering, Trust & Security, Confluent) focuses on delivering features for software-defined storage, software-defined networking (SD-WAN), security, and cloud-native domains. In this episode, he shares the importance of real-time data governance and introduces Stream Governance, the product portfolio his team has been building to foster the collaboration and knowledge sharing necessary to become an event-centric business while remaining compliant within an ever-evolving landscape of data regulations. With the increase of data volume, variety, and velocity, data governance is mandatory for trustworthy, usable, accurate, and accessible data across organizations, especially with distributed data in motion. When it comes to choosing a tool to govern real-time distributed data, there is often a paradox of choice. Some tools are built for handling data at rest, while open source alternatives lack features and are not managed services that can be integrated with the Apache Kafka® ecosystem natively. To solve governance use cases by delivering high-quality data assets, Tushar and his team have been taking Confluent Schema Registry, considered the de facto metadata management standard for the ecosystem, to the next level. This approach to governance allows organizations to scale Kafka operations for real-time observability with security and quality. The fully managed, cloud-native Stream Governance framework is based on three key workflows: Stream catalog: Search and discover data in a self-service fashionStream lineage: Understand the complex data relationships with interactive, end-to-end maps of event streams Stream quality: Deliver trusted, high-quality event streams to the organization Tushar also shares use cases around data governance and sheds light on the Stream Governance roadmap. EPISODE LINKSStream Governance – How it WorksData Mess to Data Mesh | Jay KrepsDemo: Stream GovernanceData Governance for Real Time DataWatch the video version of this podcastKris Jenkins TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
3/22/2022 · 42 minutes, 58 seconds

Handling 2 Million Apache Kafka Messages Per Second at Honeycomb

How many messages can Apache Kafka® process per second? At Honeycomb, it's easily over one million messages. In this episode, get a taste of how Honeycomb uses Kafka at massive scale. Liz Fong-Jones (Principal Developer Advocate, Honeycomb) explains how Honeycomb manages Kafka-based telemetry ingestion pipelines and scales Kafka clusters. And what is Honeycomb? Honeycomb is an observability platform that helps you visualize, analyze, and improve cloud application quality and performance. Their data volume has grown by a factor of 10 throughout the pandemic, while the total cost of ownership has only gone up by 20%. But how, you ask? As a developer advocate for site reliability engineering (SRE) and observability, Liz works alongside the platform engineering team on optimizing infrastructure for reliability and cost. Two years ago, the team was facing the prospect of growing from 20 Kafka brokers to 200 Kafka brokers as data volume increased. The challenge was to scale and shuffle data across a growing number of brokers while maintaining cost efficiency. The Honeycomb engineering team has experimented with using sc1 or st1 EBS hard disks to store the majority of longer-term archives and keep only the latest hours of data on NVMe instance storage. However, this approach to cost reduction was not ideal, which resulted in needing to keep data that is older than 24 hours on SSD. The team began to explore and adopt Zstandard compression to decrease bandwidth and disk size; however, the clusters were still struggling to keep up. When Confluent Platform 6.0 rolled out Tiered Storage, the team saw it as a feature to help them break away from being storage bound. Before bringing the feature into production, the team did a proof of concept, which helped them gain confidence as they watched Kafka tolerate broker death and reduce latencies in fetching historical data. Tiered Storage now shrinks their clusters significantly so that they can hold on to local NVMe SSD, and the tiered data is only stored once on Amazon S3, rather than consuming SSD on all replicas. In combination with the AWS Im4gn instance, Tiered Storage allows the team to scale for long-term growth. Honeycomb also saved 87% on the cost per megabyte of Kafka throughput by optimizing their Kafka clusters.EPISODE LINKSTiered StorageIntroducing Confluent Platform 6.0Scaling Kafka at HoneycombWatch the video version of this podcastKris Jenkins TwitterStreaming Audio Playlist Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
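As a point of reference for the compression discussion, enabling Zstandard on a Kafka producer is essentially a one-line configuration change. The following Java sketch uses placeholder broker, topic, and batch settings and is not Honeycomb's actual setup:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class TelemetryProducer {
    public static KafkaProducer<byte[], byte[]> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        // Zstandard compression (supported since Kafka 2.1) shrinks batches on the wire and on disk.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");
        // Larger batches and a small linger give the compressor more data to work with (illustrative values).
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "262144");
        props.put(ProducerConfig.LINGER_MS_CONFIG, "50");
        return new KafkaProducer<>(props);
    }

    public static void main(String[] args) {
        try (KafkaProducer<byte[], byte[]> producer = create()) {
            producer.send(new ProducerRecord<>("telemetry-events", "sample payload".getBytes())); // hypothetical topic
        }
    }
}
```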
3/15/2022 · 41 minutes, 36 seconds

Why Data Mesh? ft. Ben Stopford

With experience in data infrastructure and distributed data technologies, Ben Stopford (Lead Technologist, Office of the CTO, Confluent), author of the book “Designing Event-Driven Systems,” explains the data mesh paradigm, differences between traditional data warehouses and microservices, as well as how you can get started with data mesh. Unlike standard data architecture, data mesh is about moving data away from a monolithic data warehouse into distributed data systems. Doing so will allow data to be available as a product—this is also one of the four principles of data mesh: Data ownership by domainData as a productData available everywhere for self-serviceData governed wherever it isThese four principles are technology agnostic, which means that they don't restrict you to a particular programming language, to Apache Kafka®, or to a particular database. Data mesh is all about building point-to-point architecture that lets you evolve and accommodate real-time data needs with governance tools. Fundamentally, data mesh is more than a technological shift. It's a mindset shift that requires cultural adaptation of product thinking—treating data as a product instead of data as an asset or resource. Data mesh vests ownership of data in the people who create it, with requirements that ensure quality and governance. Because data mesh consists of a map of interconnections, it's important to have governance tools in place to identify data sources and provide data discovery capabilities. There are many ways to implement data mesh, event streaming being one of them. You can ingest data sets from across organizations and sources into your own data system. Then you can use stream processing to trigger an application response to the data set. By representing each data product as a data stream, you can tag it with sub-elements and secondary dimensions to enable data searchability. If you are using a managed service like Confluent Cloud for data mesh, you can visualize how data flows inside the mesh through a stream lineage graph. Ben also discusses the importance of keeping data architecture as simple as you can to avoid derivatives of data products.EPISODE LINKSData Mesh 101 courseData Mesh 101 with Live Walkthrough ExerciseIntroduction and Guide to Data MeshThe Definitive Guide to Building a Data Mesh with Event StreamsWhat is Data Mesh, and How Does it Work? ft. Zhamak DehghaniDesigning Event-Driven SystemsWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of Confluent Cloud usage (details)
3/10/2022 · 44 minutes, 42 seconds

Serverless Stream Processing with Apache Kafka ft. Bill Bejeck

What is serverless? Having worked as a software engineer for over 15 years and as a regular contributor to Kafka Streams, Bill Bejeck (Integration Architect, Confluent) is an Apache Kafka® committer and author of “Kafka Streams in Action.” In today’s episode, he explains what serverless is and the architectural concepts behind it. To clarify, serverless doesn’t mean you can run an application without a server—there are still servers in the architecture, but they are abstracted away from your application development. In other words, you can focus on building and running applications and services without any concerns over infrastructure management. Using a cloud provider such as Amazon Web Services (AWS) enables you to allocate machine resources on demand while the provider handles provisioning, maintenance, and scaling of the server infrastructure. There are a few important terms to know when implementing serverless functions with event stream processors: Functions as a service (FaaS)Stateless stream processingStateful stream processingServerless commonly falls into the FaaS cloud computing service category—for example, AWS Lambda is the classic definition of a FaaS offering. You have a greater degree of control to run a discrete chunk of code in response to certain events, and it lets you write code to solve a specific issue or use case. Stateless processing is simpler in comparison to stateful processing, which is more complex as it involves keeping the state of an event stream and needs a key-value store. ksqlDB allows you to perform both stateless and stateful processing, but its strength lies in stateful processing to answer complex questions, while AWS Lambda is better suited for stateless processing tasks. Integrated together, ksqlDB and AWS Lambda deliver serverless event streaming and analytics at scale.EPISODE LINKSWhat is Serverless?Serverless Stream Processing with Apache Kafka, AWS Lambda, and ksqlDBStateful Serverless Architectures with ksqlDB and AWS Lambda Serverless GitHub repositoryKafka Streams in ActionWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
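To illustrate the stateless/stateful distinction in code, here is a short, hypothetical Kafka Streams topology in Java: the `filter` step is stateless and needs no store, while the per-key `count` is stateful and is backed by a key-value state store. The episode itself pairs ksqlDB with AWS Lambda; this snippet only demonstrates the two processing styles, and the topic names are placeholders:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class StatelessVsStateful {
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders =
            builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String())); // hypothetical topic

        // Stateless: each record is handled on its own; no state store is needed.
        KStream<String, String> highPriority =
            orders.filter((customerId, order) -> order.contains("\"priority\":\"high\""));
        highPriority.to("high-priority-orders", Produced.with(Serdes.String(), Serdes.String()));

        // Stateful: counting per customer requires a key-value state store behind the scenes.
        orders.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
              .count()
              .toStream()
              .mapValues(Object::toString)
              .to("orders-per-customer", Produced.with(Serdes.String(), Serdes.String()));

        return builder;
    }
}
```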
3/3/202242 minutes, 23 seconds
Episode Artwork

The Evolution of Apache Kafka: From In-House Infrastructure to Managed Cloud Service ft. Jay Kreps

When it comes to Apache Kafka®, there’s no one better to tell the story than Jay Kreps (Co-Founder and CEO, Confluent), one of the original creators of Kafka. In this episode, he talks about the evolution of Kafka from in-house infrastructure to a managed cloud service and discusses what’s next for infrastructure engineers who used to self-manage the workload.

Kafka started out at LinkedIn as a distributed stream processing framework and was core to their central data pipeline. At the time, the challenge was to address scalability for real-time data feeds. The social media platform’s initial data system was built on Apache™ Hadoop®, but the team later realized that operationalizing and scaling the system required a considerable amount of work. When they started re-engineering the infrastructure, Jay observed a big gap in data streaming—on one end, data was being looked at constantly for analytics, while on the other end, data was being looked at once a day—missing real-time data interconnection. This ushered in efforts to build a distributed system that connects applications, data systems, and organizations for real-time data. That goal led to the birth of Kafka and eventually a company around it—Confluent.

Over time, Confluent progressed from focusing solely on Kafka as a software product to a more holistic view—Kafka as a complete central nervous system for data, integrating connectors and stream processing with a fully managed cloud service.

Now as organizations make a similar shift from in-house infrastructure to fully managed services, Jay outlines five guiding points to keep in mind:
Cloud-native systems abstract away operational efforts for you without infrastructure concerns
It’s important to have a complete ecosystem for Kafka, including connectors, a SQL layer, and data governance
A distributed system should allow data to be accessible everywhere and across organizations
Identifying a reliable, dependable storage infrastructure layer, such as Amazon S3, is critical
Cost-effective models mean sustainability and systems that are easy to build around

EPISODE LINKS
Building Real-Time Data Systems the Hard Way
Kris Jenkins Twitter
The Hitchhiker’s Guide to the Galaxy
Hedonic treadmill
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
2/24/202246 minutes, 32 seconds
Episode Artwork

What’s Next for the Streaming Audio Podcast ft. Kris Jenkins

Meet your new host of the Streaming Audio podcast: Kris Jenkins (Senior Developer Advocate, Confluent)! In this preview, Kris shares a few highlights from forthcoming episodes to look forward to, spanning topics from data mesh, cloud-native technologies, and serverless Apache Kafka®, to data modeling. As a developer advocate, Kris is endlessly fascinated about software design, functional programming, real-time systems, and electronic music. He is a veteran software developer and engineer, with a broad background from roles such as CTO of a Java/Oracle gold exchange and contract developer of several Haskell/PureScript-based event systems.There is still a raft of data streaming narratives to tell and many community experts to feature. We’ll cover what’s new and emerging, real-life Kafka use cases, and how people are currently using managed Kafka as a service, as well as the latest in the data streaming spaceIf there’s a subject you’d like to see covered on the show or if you know someone who should be featured, let us know via the Confluent Community engagement form. EPISODE LINKSGet involved in the Confluent CommunitySubscribe on Apple PodcastSubscribe on SpotifySubscribe on AndroidListen and Subscribe on PodLinkWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
2/16/20222 minutes, 39 seconds
Episode Artwork

On to the Next Chapter ft. Tim Berglund

After nearly 200 podcast episodes of Streaming Audio, Tim Berglund bids farewell in his last episode as host of the show. Tim reflects on the many great memories with guests who have appeared on the segment—and each for its own reasons. He has covered a wide variety of topics, ranging from Apache Kafka® fundamentals, microservices, event stream processing, use cases, to cloud-native Kafka, data mesh, and more. As Tim mentions, the Streaming Audio podcast will continue on to explore all things about Kafka and the cloud while featuring new voices and topics. You can subscribe to the Streaming Audio podcast on your podcast platform of choice to get the latest updates and news. Thank you for listening and stay tuned. EPISODE LINKSI Interviewed Nearly 200 Apache Kafka Experts and I learned These 10 ThingsWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
2/3/20226 minutes, 45 seconds
Episode Artwork

Intro to Event Sourcing with Apache Kafka ft. Anna McDonald

What is event sourcing and how does it work?

Event sourcing is often used interchangeably with event-driven architecture and event stream processing. However, Anna McDonald (Principal Customer Success Technical Architect, Confluent) explains it’s a specific category of its own—an event streaming pattern.

Anna is passionate about event-driven architectures and event patterns. She’s a tour de force in the Apache Kafka® community and is the presenter of the Event Sourcing and Event Storage with Apache Kafka course on Confluent Developer. In this episode, she previews the course by providing an overview of what event sourcing is and what you need to know in order to build event-driven systems.

Event sourcing is an architectural design pattern that defines the approach to handling data operations driven by a sequence of events. The pattern ensures that all changes to an application state are captured and stored as an immutable sequence of events, known as a log of events. The events are persisted in an event store, which acts as the system of record. Unlike traditional databases where only the latest status is saved, an event-based system saves all events into a database in sequential order. If you find a past event is incorrect, you can replay each event from a certain timestamp up to the present to recreate the latest status of the data.

Event sourcing is commonly implemented with a command query responsibility segregation (CQRS) system to perform data computation tasks in response to events. To implement CQRS with Kafka, you can use Kafka Connect along with a database, or alternatively use Kafka with the streaming database ksqlDB.

In addition, Anna also covers:
Data at rest and data in motion techniques for event modeling
The differences between event streaming and event sourcing
How CQRS, change data capture (CDC), and event streaming help you leverage event-driven systems
The primary qualities and advantages of an event-based storage system
Use cases for event sourcing and how it integrates with your systems

EPISODE LINKS
Event Sourcing course
Event Streaming in 3 Minutes
Introducing Derivative Event Sourcing
Meetup: Event Sourcing and Apache Kafka
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
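To illustrate the replay idea, here is a minimal sketch (not from the episode) of a plain Kafka consumer rebuilding current state from an event log; the topic name, the event format, and the account-balance domain are all assumptions made for the example.

```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.*;

public class AccountBalanceProjection {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    Map<String, Long> balances = new HashMap<>();
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      List<TopicPartition> partitions = new ArrayList<>();
      for (PartitionInfo p : consumer.partitionsFor("account-events")) {
        partitions.add(new TopicPartition(p.topic(), p.partition()));
      }
      consumer.assign(partitions);
      consumer.seekToBeginning(partitions);   // replay the full event log

      while (true) {   // keep folding new events into the projection forever
        for (ConsumerRecord<String, String> event : consumer.poll(Duration.ofMillis(500))) {
          // Events are immutable facts, e.g. "DEPOSITED:25" or "WITHDRAWN:10";
          // current state is derived by folding them in order, per account key.
          String[] parts = event.value().split(":");
          long delta = "DEPOSITED".equals(parts[0]) ? Long.parseLong(parts[1])
                                                    : -Long.parseLong(parts[1]);
          balances.merge(event.key(), delta, Long::sum);
        }
      }
    }
  }
}
```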
2/1/202230 minutes, 14 seconds
Episode Artwork

Expanding Apache Kafka Multi-Tenancy for Cloud-Native Systems ft. Anna Povzner and Anastasia Vela

In an effort to make Apache Kafka® cloud native, Anna Povzner (Principal Engineer, Confluent) and Anastasia Vela (Software Engineer I, Confluent) have been working to expand multi-tenancy to cloud-native systems with automated capacity planning and scaling in Confluent Cloud. They explain how cloud-native data systems are different from legacy databases and share the technical requirements needed to create multi-tenancy for managed Kafka as a service.

As a distributed system, Kafka is designed to support multi-tenant systems by:
Isolating data with authentication, authorization, and encryption
Isolating user namespaces
Isolating performance with quotas

Traditionally, Kafka’s multi-tenant capabilities are used in on-premises data centers to make data available and accessible across the company—a single company would run a multi-tenant Kafka cluster with all its workloads to stream data across organizations. Some processes behind setting up multi-tenant Kafka clusters are manual, with the requirement to over-provision resources and capacity in order to protect the cluster from unplanned demand increases. When Kafka runs on cloud instances, you have the ability to scale cloud resources on the fly for any unplanned workloads to meet expectations instantaneously.

To shift multi-tenancy to the cloud, Anna and Anastasia identify the following as essential for the architectural design:
Abstractions: requires minimal operational complexity of a cloud service
Pay-per-use model: requires the system to use only the minimum required resources until more are necessary
Uptime and performance SLA/SLO: requires support for unknown and unpredictable workloads with minimal operational workload while protecting the cluster from distributed denial-of-service (DDoS) attacks
Cost-efficiency: requires a lower cost of ownership

You can also read more about the shift from on-premises to cloud-native, multi-tenant services in Anna and Anastasia’s publication on the Confluent blog.

EPISODE LINKS
From On-Prem to Cloud-Native: Multi-Tenancy in Confluent Cloud
Cloud-Native Apache Kafka
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Watch the video version of this podcast
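As a concrete taste of the quota-based performance isolation mentioned above, here is a minimal sketch (not from the episode) using the Kafka Admin API to cap one tenant’s produce and fetch throughput; the client ID and byte rates are illustrative.

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class TenantQuotas {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    try (Admin admin = Admin.create(props)) {
      // Scope the quota to one tenant's client ID (the entity name is illustrative).
      ClientQuotaEntity tenant =
          new ClientQuotaEntity(Map.of(ClientQuotaEntity.CLIENT_ID, "tenant-a"));

      // Cap produce and fetch throughput for that tenant, in bytes per second.
      ClientQuotaAlteration alteration = new ClientQuotaAlteration(tenant, List.of(
          new ClientQuotaAlteration.Op("producer_byte_rate", 1_048_576.0),
          new ClientQuotaAlteration.Op("consumer_byte_rate", 2_097_152.0)));

      admin.alterClientQuotas(List.of(alteration)).all().get();
    }
  }
}
```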
1/27/202231 minutes, 1 second
Episode Artwork

Apache Kafka 3.1 - Overview of Latest Features, Updates, and KIPs

Apache Kafka® 3.1 is here with exciting new features and improvements! On behalf of the Kafka community, Danica Fine (Senior Developer Advocate, Confluent) shares release highlights that you won’t want to miss, including foreign-key joins in Kafka Streams and improvements that will provide consistency for Kafka latency metrics.

KAFKA-13439 deprecates the eager rebalance protocol; the cooperative protocol has been the default since Kafka 2.4, and it’s advised to upgrade your applications to it, as the eager protocol will no longer be supported in future releases.

Previously, foreign-key joins in Kafka Streams only worked if both the primary and foreign-key tables used the default partitioner. This release adds support for foreign-key joins on tables with custom partitioners, which will be passed in as part of a new `TableJoined` object, comparable to the existing `Joined` and `StreamJoined` objects.

With the goal of making Kafka more intuitive, KIP-773 enhances naming consistency for three new client metrics with millis and nanos. For example, `io-waittime-total` is reintroduced as `io-wait-time-ns-total`. The previously introduced metrics without `ns` will be deprecated but remain available for backward compatibility.

KIP-768 continues the work started in KIP-255 to implement the necessary interfaces for a production-grade way to connect to an OpenID identity provider for authentication and token retrieval. This update provides an out-of-the-box implementation of an `AuthenticateCallbackHandler` that can be used to communicate with OAuth/OIDC.

Additionally, this Kafka release introduces two new broker-count metrics, `ActiveBrokerCount` and `FencedBrokerCount`. These two metrics expose the number of active brokers in the cluster known by the controller and the number of fenced brokers known by the controller.

Tune in to learn more about the Apache Kafka 3.1 release!

EPISODE LINKS
Apache Kafka 3.1 release notes
Read the blog to learn more
Download Apache Kafka 3.1
Watch the video version of this podcast
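To show what a foreign-key join looks like in the Kafka Streams DSL, here is a minimal sketch (not from the release notes); the topic names and value formats are illustrative, and the custom-partitioner variant added in 3.1 would go through the `TableJoined` overload noted in the comment.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

public class ForeignKeyJoinExample {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    // Illustrative tables: orders keyed by orderId (value "customerId|amount"),
    // customers keyed by customerId (value is the customer name).
    KTable<String, String> orders =
        builder.table("orders", Consumed.with(Serdes.String(), Serdes.String()));
    KTable<String, String> customers =
        builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

    // Foreign-key join: extract the customerId from each order value and join it
    // against the customers table; the result stays keyed by orderId.
    KTable<String, String> enrichedOrders = orders.join(
        customers,
        orderValue -> orderValue.split("\\|")[0],             // foreign-key extractor
        (orderValue, customerName) -> customerName + " -> " + orderValue);

    // With Kafka 3.1, an overload accepting a TableJoined object lets you supply
    // custom partitioners for both sides when the topics aren't default-partitioned.
    System.out.println(builder.build().describe());
  }
}
```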
1/24/20224 minutes, 43 seconds
Episode Artwork

Optimizing Cloud-Native Apache Kafka Performance ft. Alok Nikhil and Adithya Chandra

Maximizing cloud Apache Kafka® performance isn’t just about running data processes on cloud instances. There is a lot of engineering work required to set and maintain a high-performance standard for speed and availability.

Alok Nikhil (Senior Software Engineer, Confluent) and Adithya Chandra (Staff Software Engineer II, Confluent) share how they optimize Kafka on Confluent Cloud and the three guiding principles that they follow, whether you are self-managing Kafka or working on a cloud-native system:
Know your users and plan for their workloads
Infrastructure matters for performance as well as cost efficiency
Effective observability—you can’t improve what you don’t see

A large part of setting and achieving performance standards is about understanding that workloads vary and come with unique requirements. There are different dimensions for performance, such as the number of partitions and the number of connections. Alok and Adithya suggest starting by identifying the workload patterns that are the most important to your business objectives for simulation and reproduction, then using the results to optimize the software.

When identifying workloads, it’s essential to determine the infrastructure that you’ll need to support the given workload economically. Infrastructure optimization is as important as performance optimization. It’s best practice to know the infrastructure that you have available to you and choose the appropriate hardware, operating system, and JVM to allocate the processes so that workloads run efficiently.

With the necessary infrastructure patterns in place, it’s crucial to monitor metrics to ensure that your application is running as expected consistently with every release. Having the right observability metrics and logs allows you to identify and troubleshoot issues relatively quickly. Profiling and request sampling also help you dive deeper into performance issues, particularly during incidents. Alok and Adithya’s team uses tooling such as the async-profiler for profiling CPU cycles, heap allocations, and lock contention.

Alok and Adithya summarize their learnings and the processes they use for optimizing managed Kafka as a service, which can be applicable to your own cloud-native applications. You can also read more about their journey on the Confluent blog.

EPISODE LINKS
Speed, Scale, Storage: Our Journey from Apache Kafka to Performance in Confluent Cloud
Cloud-Native Apache Kafka
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Watch the video version of this podcast
1/20/202230 minutes, 40 seconds
Episode Artwork

From Batch to Real-Time: Tips for Streaming Data Pipelines with Apache Kafka ft. Danica Fine

Implementing an event-driven data pipeline can be challenging, but doing so within the context of a legacy architecture is even more complex. Having spent three years building a streaming data infrastructure and being on the first team at a financial organization to implement Apache Kafka® event-driven data pipelines, Danica Fine (Senior Developer Advocate, Confluent) shares the development process and how ksqlDB and Kafka Connect became instrumental to the implementation.

By moving away from batch processing to streaming data pipelines with Kafka, data can be distributed with increased data scalability and resiliency. Kafka decouples the source from the target systems, so you can react to data as it changes while ensuring accurate data in the target system. In order to transition from monolithic micro-batching applications to real-time microservices that can integrate with a legacy system that has been around for decades, Danica and her team started developing Kafka connectors to connect to various sources and target systems:
Kafka connectors: Building two major connectors for the data pipeline, including a source connector to connect the legacy data source to stream data into Kafka, and another target connector to pipe data from Kafka back into the legacy architecture.
Algorithm: Implementing Kafka Streams applications to migrate data from a monolithic architecture to a stream processing architecture.
Data join: Leveraging Kafka Connect and the JDBC source connector to bring in all data streams to complete the pipeline.
Streams join: Using ksqlDB to join streams—the legacy data system continues to produce streams while the Kafka data pipeline is another stream of data.

As a final tip, Danica suggests breaking algorithms into process steps. She also describes how her experience relates to the data pipelines course on Confluent Developer and encourages anyone who is interested in learning more to check it out.

EPISODE LINKS
Data Pipelines course
Introduction to Streaming Data Pipelines with Apache Kafka and ksqlDB
Guided Exercise on Building Streaming Data Pipelines
Migrating from a Legacy System to Kafka Streams
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
1/13/202229 minutes, 50 seconds
Episode Artwork

Real-Time Change Data Capture and Data Integration with Apache Kafka and Qlik

Getting data from a database management system (DBMS) into Apache Kafka® in real time is a subject of ongoing innovation. John Neal (Principal Solution Architect, Qlik) and Adam Mayer (Senior Technical Producer Marketing Manager, Qlik) explain how leveraging change data capture (CDC) for data ingestion into Kafka enables real-time data-driven insights. It can be challenging to ingest data in real time. It is even more challenging when you have multiple data sources, including both traditional databases and mainframes, such as SAP and Oracle. Extracting data in batch for transfer and replication purposes is slow, and often incurs significant performance penalties. However, analytical queries are often even more resource intensive and are prohibitively expensive to run on production transactional databases. CDC enables the capture of source operations as a sequence of incrementing events, converting the data into events to be written to Kafka. Once this data is available in the Kafka topics, it can be used for both analytical and operational use cases. Data can be consumed and modeled for analytics by individual groups across your organization. Meanwhile, the same Kafka topics can be used to help power microservice applications and help ensure data governance without impacting your production data source. Kafka makes it easy to integrate your CDC data into your data warehouses, data lake, NoSQL database, microservices, and any other system. Adam and John highlight a few use cases where they see real-time Kafka data ingestion, processing, and analytics moving the needle—including real-time customer predictions, supply chain optimizations, and operational reporting. Finally, Adam and John cap it off with a discussion on how capturing and tracking data changes are critical for your machine learning model to enrich data quality. EPISODE LINKSFast Track Business Insights with Data in MotionWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
1/6/202234 minutes, 51 seconds
Episode Artwork

Modernizing Banking Architectures with Apache Kafka ft. Fotios Filacouris

It’s been said that financial services organizations have been early Apache Kafka® adopters due to the strong delivery guarantees and scalability that Kafka provides. With experience working and designing architectural solutions for financial services, Fotios Filacouris (Senior Solutions Engineer, Enterprise Solutions Engineering, Confluent) joins Tim to discuss how Kafka and Confluent help banks build modern architectures, highlighting key emerging use cases from the sector. Previously, Kafka was often viewed as a simple pipe that connected databases together, which allows for easy and scalable data migration. As the Kafka ecosystem evolves with added components like ksqlDB, Kafka Streams, and Kafka Connect, the implementation of Kafka goes beyond being just a pipe—it’s an intelligent pipe that enables real-time, actionable data insights.Fotios shares a couple of use cases showcasing how Kafka solves the problems that many banks are facing today. One of his customers transformed retail banking by using Kafka as the architectural base for storing all data permanently and indefinitely. This approach enables data in motion and a better user experience for frontend users while scrolling through their transaction history by eliminating the need to download old statements that have been offloaded in the cloud or a data lake. Kafka also provides the best of both worlds with increased scalability and strong message delivery guarantees that are comparable to queuing middleware like IBM MQ and TIBCO. In addition to use cases, Tim and Fotios talk about deploying Kafka for banks within the cloud and drill into the profession of being a solutions engineer. EPISODE LINKSWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
12/28/202134 minutes, 59 seconds
Episode Artwork

Running Hundreds of Stream Processing Applications with Apache Kafka at Wise

What’s it like building a stream processing platform with around 300 stateful stream processing applications based on Kafka Streams? Levani Kokhreidze (Principal Engineer, Wise) shares his experience building such a platform that the business depends on for multi-currency movements across the globe. He explains how his team uses Kafka Streams for real-time money transfers at Wise, a fintech organization that facilitates international currency transfers for 11 million customers.

Getting to this point and expanding the stream processing platform is not, however, without its challenges. One of the major challenges at Wise is to aggregate, join, and process real-time event streams to transfer currency instantly. To accomplish this, Wise relies on Apache Kafka® as an event broker, as well as Kafka Streams, the accompanying Java stream processing library. Kafka Streams lets you build event-driven microservices for processing streams, which can then be deployed alongside the Kafka cluster of your choice. Wise also uses the Interactive Queries feature in Kafka Streams to query internal application state at runtime (sketched in the code below).

Wise’s stream processing platform has gradually moved the company away from a monolithic architecture to an event-driven microservices model with around 400 total microservices working together. This has given Wise the ability to independently shape and scale each service to better serve evolving business needs. Their stream processing platform includes a domain-specific language (DSL) that provides libraries and tooling, such as Docker images, for building your own stream processing applications with governance. With this approach, Wise is able to store 50 TB of stateful data based on Kafka Streams running in Kubernetes.

Levani shares his own experiences in this journey and provides guidance that may help you follow in Wise’s footsteps. He covers how to properly delegate ownership and responsibilities for sourcing events from existing data stores, and outlines some of the pitfalls they encountered along the way. To cap it all off, Levani also shares some important lessons in organization and technology, with some best practices to keep in mind.

EPISODE LINKS
Kafka Streams 101 course
Real-Time Stream Processing with Kafka Streams ft. Bill Bejeck
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
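The Interactive Queries lookup mentioned above boils down to asking a running Kafka Streams instance for one of its materialized stores. Here is a minimal sketch, assuming a topology that materializes a key-value store named "account-balances" (the store name and types are illustrative, not Wise’s actual code):

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class BalanceLookup {
  // Assumes a running KafkaStreams instance whose topology materializes a
  // key-value store named "account-balances" (both names are illustrative).
  public static Long lookupBalance(KafkaStreams streams, String accountId) {
    ReadOnlyKeyValueStore<String, Long> store = streams.store(
        StoreQueryParameters.fromNameAndType(
            "account-balances", QueryableStoreTypes.keyValueStore()));
    // Reads local state directly; in a multi-instance deployment you would use
    // streams.queryMetadataForKey(...) to find which instance hosts the key.
    return store.get(accountId);
  }
}
```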
12/21/202131 minutes, 8 seconds
Episode Artwork

Lessons Learned From Designing Serverless Apache Kafka ft. Prachetaa Raghavan

You might call building and operating Apache Kafka® as a cloud-native data service synonymous with a serverless experience. Prachetaa Raghavan (Staff Software Developer I, Confluent) spends his days focused on this very thing. In this podcast, he shares his learnings from implementing a serverless architecture on Confluent Cloud using the Kubernetes Operator.

Serverless is a cloud execution model that abstracts away server management, letting you run code on a pay-per-use basis without infrastructure concerns. Confluent Cloud’s major design goal was to create a serverless Kafka solution, including handling its distributed state, its performance requirements, and seamlessly operating and scaling the Kafka brokers and ZooKeeper. The serverless offering is built on top of an event-driven microservices architecture that allows you to deploy services independently with your own release cadence, maintained at the team level.

There are four subjects that help create the serverless event streaming experience with Kafka:
Confluent Cloud control plane: This Kafka-based control plane provisions the resources required to run the application. It automatically scales resources for services, such as managed Kafka, managed ksqlDB, and managed connectors. The control plane and data plane are decoupled—if a single data plane has issues, it doesn’t affect the control plane or other data planes.
Kubernetes Operator: The operator is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of Kubernetes users. The operator looks at Kafka metrics before upgrading one broker at a time. It also updates the status on cluster rebalancing and on shrink to rebalance data onto the remaining brokers.
Self-Balancing Clusters: Cluster balance is measured on several dimensions, including replica counts, leader counts, disk usage, and network usage. In addition to storage rebalancing, Self-Balancing Clusters are essential to making sure that the amount of available disk and network capacity is satisfied during any balancing decisions.
Infinite Storage: Enabled by Tiered Storage, Infinite Storage rebalances data fast and efficiently—the most recently written data is stored directly on Kafka brokers, while older segments are moved off into a separate storage tier. This has the added bonus of reducing the shuffling of data due to regular broker operations, like partition rebalancing.

EPISODE LINKS
Making Apache Kafka Serverless: Lessons From Confluent Cloud
Cloud-Native Apache Kafka
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Watch the video version of this podcast
12/14/202128 minutes, 20 seconds
Episode Artwork

Using Apache Kafka as Cloud-Native Data System ft. Gwen Shapira

What does cloud native mean, and what are some design considerations when implementing cloud-native data services? Gwen Shapira (Apache Kafka® Committer and Principal Engineer II, Confluent) addresses these questions in today’s episode. She shares her learnings by discussing a series of technical papers published by her team, which explain what they’ve done to expand Kafka’s cloud-native capabilities on Confluent Cloud.

Gwen leads the Cloud-Native Kafka team, which focuses on developing new features to evolve Kafka to its next stage as a fully managed cloud data platform. Turning Kafka into a self-service platform is not entirely straightforward; however, Kafka’s early investment in elasticity, scalability, and multi-tenancy to run at a company-wide scale served as the North Star in taking Kafka to its next stage—a fully managed cloud service where users just send in their workloads and everything else magically works.

Through examining modern cloud-native data services, such as Aurora, Amazon S3, Snowflake, Amazon DynamoDB, and BigQuery, there are seven capabilities that you can expect to see in modern cloud data systems, including:
Elasticity: Adapt to workload changes to scale up and down with a click or APIs—cloud-native Kafka omits the requirement to install REST Proxy for using Kafka APIs
Infinite scale: Kafka has the ability to scale elastically with a behind-the-scenes process for capacity planning
Resiliency: Ensures high availability to minimize downtime and support disaster recovery
Multi-tenancy: Cloud-native infrastructure needs to have isolation—data, namespaces, and performance—which Kafka is designed to support
Pay per use: Pay for resources based on usage
Cost-effectiveness: Cloud deployment has notably lower costs than self-managed services, which also decreases adoption time
Global: Connect to Kafka from around the globe and consume data locally

Building around these key requirements, a fully managed Kafka as a service provides an enhanced user experience that is scalable and flexible with reduced infrastructure management costs. Based on their experience building cloud-native Kafka, Gwen and her team published a four-part thesis that shares insights on user expectations for modern cloud data services as well as technical implementation considerations to help you develop your own cloud-native data system.

EPISODE LINKS
Cloud-Native Apache Kafka
Design Considerations for Cloud-Native Data Systems
Software Engineer, Cloud Native Kafka
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Watch the video version of this podcast
12/7/202133 minutes, 57 seconds
Episode Artwork

ksqlDB Fundamentals: How Apache Kafka, SQL, and ksqlDB Work Together ft. Simon Aubury

What is ksqlDB and how does Simon Aubury (Principal Data Engineer, Thoughtworks) use it to track down the plane that wakes his cat Snowy in the morning? Experienced in building real-time applications with ksqlDB since its genesis, Simon provides an introduction to ksqlDB by sharing some of his projects and use cases. ksqlDB is a database purpose-built for stream processing applications and lets you build real-time data streaming applications with SQL syntax. ksqlDB reduces the complexity of having to code with Java, making it easier to achieve outcomes through declarative programming, as opposed to procedural programming. Before ksqlDB, you could use the producer and consumer APIs to get data in and out of Apache Kafka®; however, when it comes to data enrichment, such as joining, filtering, mapping, and aggregating data, you would have to use the Kafka Streams API—a robust and scalable programming interface influenced by the JVM ecosystem that requires Java programming knowledge. This presented scaling challenges for Simon, who was at a multinational insurance company that needed to stream loads of data from disparate systems with a small team to scale and enrich data for meaningful insights. Simon recalls discovering ksqlDB during a practice fire drill, and he considers it as a memorable moment for turning a challenge into an opportunity.Leveraging your familiarity with relational databases, ksqlDB abstracts away complex programming that is required for real-time operations both for stream processing and data integration, making it easy to read, write, and process streaming data in real time.Simon is passionate about ksqlDB and Kafka Streams as well as getting other people inspired by the technology. He’s been using ksqlDB for projects, such as taking a stream of information and enriching it with static data. One of Simon’s first ksqlDB projects was using Raspberry Pi and a software-defined radio to process aircraft movements in real time to determine which plane wakes his cat Snowy up every morning. Simon highlights additional ksqlDB use cases, including e-commerce checkout interaction to identify where people are dropping out of a sales funnel. EPISODE LINKSksqlDB 101 courseA Guide to ksqlDB Fundamentals and Stream Processing ConceptsksqlDB 101 Training with Live Walkthrough ExerciseKSQL-ops! Running ksqlDB in the WildArticles from Simon AuburyWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get $100 of free Confluent Cloud usage (details)
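To give a flavor of the declarative style the episode describes, here is a minimal sketch, not from the episode, using the ksqlDB Java client to run a persistent query; the host, port, stream names, columns, and the flight-tracking SQL are all illustrative assumptions.

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

public class KsqlDbQuickstart {
  public static void main(String[] args) throws Exception {
    // Connect to a ksqlDB server (host and port are illustrative).
    ClientOptions options = ClientOptions.create()
        .setHost("localhost")
        .setPort(8088);
    Client client = Client.create(options);

    // Declarative stream processing: state *what* you want in SQL rather than
    // writing a Java stream processing application by hand.
    client.executeStatement(
        "CREATE STREAM low_altitude_flights AS "
      + "  SELECT callsign, altitude, latitude, longitude "
      + "  FROM flight_positions "
      + "  WHERE altitude < 3000 EMIT CHANGES;").get();

    client.close();
  }
}
```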
12/1/202130 minutes, 42 seconds
Episode Artwork

Explaining Stream Processing and Apache Kafka ft. Eugene Meidinger

Many of us find ourselves in the position of equipping others to use Apache Kafka® after we’ve gained an understanding of what Kafka is used for. But how do you communicate and teach others event streaming concepts effectively? As a Pluralsight instructor and business intelligence consultant, Eugene Meidinger shares tips for creating consumable training materials for conveying event streaming concepts to developers and IT administrators who are trying to get on board with Kafka and stream processing.

Eugene’s background as a database administrator (DBA) and immense knowledge of event streaming architecture and data processing show as he reveals his learnings from years of working with Microsoft Power BI, Azure Event Hubs, data processing, and event streaming with ksqlDB and Kafka Streams.

Eugene mentions the importance of understanding your audience, their pain points, and their questions, such as: Why was Kafka invented? Why does ksqlDB matter? It also helps to use metaphors where appropriate. For example, when explaining what a processor topology is in Kafka Streams, Eugene uses the analogy of a highway where people getting on a bus are the blocking operations: after the grace period, the bus leaves even without passengers—meaning that once the window closes, the processor continues even without events (sketched in the code example below). He also likes to inject a sense of humor into his training and keeps empathy in mind.

Here is the structure that Eugene uses when building courses:
The first module is usually fundamentals, which lays out the groundwork and the objectives of the course
It’s critical to repeat and summarize core concepts or major points; for example, a key capability of Kafka is the ability to decouple data in both network space and in time
Provide variety and different modalities that allow people to consume content through multiple avenues, such as screencasts, slides, and demos, wherever it makes sense

EPISODE LINKS
Building ETL Pipelines from Streaming Data with Kafka and ksqlDB
Don’t Make Me Think | Steve Krug
Design for How People Learn | Julie Dirksen
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
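Here is a minimal sketch, not from the episode, of the windowing behavior behind that bus analogy: a five-minute tumbling window with a thirty-second grace period, written with the Kafka Streams DSL (topic names and durations are illustrative).

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.WindowStore;

import java.time.Duration;

public class WindowedCountsWithGrace {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
        // Five-minute "bus ride" with a thirty-second grace period: records that
        // arrive up to 30s late still board; after that the window is closed and
        // the processor moves on, with or without further events.
        .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofSeconds(30)))
        .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("views-per-window"))
        .toStream()
        .map((windowedKey, count) -> KeyValue.pair(
            windowedKey.key() + "@" + windowedKey.window().startTime(), count))
        .to("view-counts", Produced.with(Serdes.String(), Serdes.Long()));

    System.out.println(builder.build().describe());
  }
}
```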
11/23/202129 minutes, 28 seconds
Episode Artwork

Handling Message Errors and Dead Letter Queues in Apache Kafka ft. Jason Bell

If you ever wondered what exactly dead letter queues (DLQs) are and how to use them, Jason Bell (Senior DataOps Engineer, Digitalis) has an answer for you. Dead letter queues are a feature of Kafka Connect that act as the destination for messages that fail due to errors like improper message deserialization and improper message formatting. Lots of Jason’s work is around Kafka Connect and the Kafka Streams API, and in this episode, he explains the fundamentals of dead letter queues, how to use them, and the parameters around them.

For example, when deserializing an Avro message, the deserialization could fail if the message passed through is not Avro or is in a value that doesn’t match the expected wire format, at which point the message will be rerouted into the dead letter queue for reprocessing. From that Apache Kafka® topic, the message can be reprocessed with the appropriate converter and sent back onto the sink. For a JSON error message, you’ll need another JSON connector to process the message out of the dead letter queue before it can be sent back to the sink.

The dead letter queue is configurable for handling a deserialization exception or a producer exception. When deciding if this topic is necessary, consider if the messages are important and if there’s a plan to read them and investigate why the errors occur. In some scenarios, it’s important to handle the messages manually or have a manual process in place to handle error messages if reprocessing continues to fail. For example, payment messages should be dealt with in parallel for a better customer experience.

Jason also shares some key takeaways on the dead letter queue:
If the message is important, such as a payment, you need to deal with the message if it goes into the dead letter queue
To minimize message routing into the dead letter queue, it’s important to ensure successful data serialization at the source
When implementing a dead letter queue, you need a process to consume the message and investigate the errors

EPISODE LINKS
Kafka Connect 101: Error Handling and Dead Letter Queues
Capacity Planning your Kafka Cluster
Tales from the Frontline of Apache Kafka DevOps ft. Jason Bell
Tweet: Morning morning (yes, I have tea)
Tweet: Kafka dead letter queues
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
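For reference, the error-handling knobs the episode describes are ordinary sink-connector properties. Here is a minimal sketch, shown as a Java map purely for illustration (the connector class, topic, and DLQ names are assumptions); in practice this configuration would be posted as JSON to the Kafka Connect REST API.

```java
import java.util.Map;

public class DlqConnectorConfig {
  // Error-handling settings for a hypothetical sink connector.
  public static Map<String, String> config() {
    return Map.of(
        "connector.class", "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics", "orders",
        // Keep the task running on bad records instead of failing...
        "errors.tolerance", "all",
        // ...and route them to a dead letter queue topic for later inspection.
        "errors.deadletterqueue.topic.name", "dlq-orders",
        "errors.deadletterqueue.topic.replication.factor", "3",
        // Add headers describing why each record failed (converter, exception, etc.).
        "errors.deadletterqueue.context.headers.enable", "true",
        "errors.log.enable", "true");
  }
}
```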
11/16/202137 minutes, 41 seconds
Episode Artwork

Confluent Platform 7.0: New Features + Updates

Confluent Platform 7.0 has launched and includes Apache Kafka® 3.0, plus new features introduced by KIP-630: Kafka Raft Snapshot, KIP-745: Connect API to restart connector and task, and KIP-695: Further improve Kafka Streams timestamp synchronization. Reporting from Dubai, Tim Berglund (Senior Director, Developer Advocacy, Confluent) provides a summary of new features, updates, and improvements to the 7.0 release, including the ability to create a real-time bridge from on-premises environments to the cloud with Cluster Linking. Cluster Linking allows you to create a single cluster link between multiple environments from Confluent Platform to Confluent Cloud, which is available on public clouds like AWS, Google Cloud, and Microsoft Azure, removing the need for numerous point-to-point connections. Consumers reading from a topic in one environment can read from the same topic in a different environment without risks of reprocessing or missing critical messages. This provides operators the flexibility to make changes to topic replication smoothly and byte for byte without data loss. Additionally, Cluster Linking eliminates any need to deploy MirrorMaker2 for replication management while ensuring offsets are preserved. Furthermore, the release of Confluent for Kubernetes 2.2 allows you to build your own private cloud in Kafka. It completes the declarative API by adding cloud-native management of connectors, schemas, and cluster links to reduce the operational burden and manual processes so that you can instead focus on high-level declarations. Confluent for Kubernetes 2.2 also enhances elastic scaling through the Shrink API.  Following ZooKeeper’s removal in Apache Kafka 3.0, Confluent Platform 7.0 introduces KRaft in preview to make it easier to monitor and scale Kafka clusters to millions of partitions. There are also several ksqlDB enhancements in this release, including foreign-key table joins and the support of new data types—DATE and TIME— to account for time values that aren’t TIMESTAMP. This results in consistent data ingestion from the source without having to convert data types.EPISODE LINKSDownload Confluent Platform 7.0Check out the release notesRead the Confluent Platform 7.0 blog postWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get $100 of free Confluent Cloud usage (details)
11/9/202112 minutes, 16 seconds
Episode Artwork

Real-Time Stream Processing with Kafka Streams ft. Bill Bejeck

Kafka Streams is a native streaming library for Apache Kafka® that consumes messages from Kafka to perform operations like filtering a topic’s messages and producing output back into Kafka. After working as a developer in stream processing, Bill Bejeck (Apache Kafka Committer and Integration Architect, Confluent) has found his calling in sharing knowledge and authoring his book, “Kafka Streams in Action.” As a Kafka Streams expert, Bill is also the author of the Kafka Streams 101 course on Confluent Developer, where he delves into what Kafka Streams is, how to use it, and how it works.

Kafka Streams provides an abstraction over Kafka consumers and producers, minimizing administrative details like the framework code you would otherwise have to write and manage when using plain Kafka consumers and producers to process streams. Kafka Streams is declarative—you can state what you want to do, rather than how to do it. Kafka Streams leverages the KafkaConsumer protocol internally; it inherits its dynamic scaling properties and the consumer group protocol to dynamically redistribute the workload. When Kafka Streams applications are deployed separately but have the same application.id, they are logically still one application.

Kafka Streams has two processing APIs. The declarative API, or domain-specific language (DSL), is a high-level API that enables you to build anything needed with a processor topology, whereas the Processor API lets you specify a processor topology node by node, providing the ultimate flexibility (see the sketch after these notes). To underline the differences between the two APIs, Bill says it’s almost like using an object-relational mapping (ORM) framework versus SQL.

The Kafka Streams 101 course is designed to get you started with Kafka Streams and to help you learn the fundamentals of:
How streams and tables work
How stateless and stateful operations work
How to handle time windows and out-of-order data
How to deploy Kafka Streams

EPISODE LINKS
Kafka Streams 101 course
A Guide to Kafka Streams and Its Uses
Your First Kafka Streams Application
Kafka Streams 101 meetup
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use podcon19 to get 40% off "Kafka Streams in Action"
Use podcon19 to get 40% off "Event Streaming with Kafka Streams and ksqlDB"
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
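To make the DSL-versus-Processor-API contrast concrete, here is a minimal sketch, not from the course, of the same filter expressed both ways; the topic names and the non-blank-value rule are illustrative.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class DslVersusProcessorApi {

  // DSL: declare *what* should happen—non-blank values flow from input to output.
  static Topology dslTopology() {
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
           .filter((key, value) -> value != null && !value.isBlank())
           .to("output", Produced.with(Serdes.String(), Serdes.String()));
    return builder.build();
  }

  // Processor API: wire the topology node by node and say *how* records move.
  static Topology processorApiTopology() {
    Topology topology = new Topology();
    topology.addSource("Source", new StringDeserializer(), new StringDeserializer(), "input");
    topology.addProcessor("Filter", FilterProcessor::new, "Source");
    topology.addSink("Sink", "output", new StringSerializer(), new StringSerializer(), "Filter");
    return topology;
  }

  static class FilterProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(ProcessorContext<String, String> context) {
      this.context = context;
    }

    @Override
    public void process(Record<String, String> record) {
      if (record.value() != null && !record.value().isBlank()) {
        context.forward(record);  // only non-blank values continue downstream
      }
    }
  }
}
```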
11/4/202135 minutes, 32 seconds
Episode Artwork

Automating Infrastructure as Code with Apache Kafka and Confluent ft. Rosemary Wang

Managing infrastructure as code (IaC) instead of using manual processes makes it easy to scale systems and minimize errors. Rosemary Wang (Developer Advocate, HashiCorp, and author of “Essential Infrastructure as Code: Patterns and Practices”) is an infrastructure engineer at heart and an aspiring software developer who is passionate about teaching patterns for infrastructure as code to simplify processes for system admins and software engineers familiar with Python, provisioning tools like Terraform, and cloud service providers. The definition of infrastructure has expanded to include anything that delivers or deploys applications. Infrastructure as software or infrastructure as configuration, according to Rosemary, are ideas grouped behind infrastructure as code—the process of automating infrastructure changes in a codified manner, which also applies to DevOps practices, including version controls, continuous integration, continuous delivery, and continuous deployment. Whether you’re using a domain-specific language or a programming language, the practices used to collaborate between you, your team, and your organization are the same—create one application and scale systems.The ultimate result and benefit of infrastructure as code is automation. Many developers take advantage of managed offerings like Confluent Cloud—fully managed Kafka as a service—to remove the operational burden and configuration layer. Still, as long as complex topologies like connecting to another server on a cloud provider to external databases exist, there is great value to standardizing infrastructure practices. Rosemary shares four characteristics that every infrastructure system should have: ResilienceSelf-serviceSecurityCost reductionIn addition, Rosemary and Tim discuss updating infrastructure with blue-green deployment techniques, immutable infrastructure, and developer advocacy. EPISODE LINKS: Use PODCAST100 to get $100 of free Confluent Cloud usage (details)Use podcon19 to get 40% off “Essential Infrastructure as Code: Patterns and Practices”Watch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with Confluent
10/26/202130 minutes, 8 seconds
Episode Artwork

Getting Started with Spring for Apache Kafka ft. Viktor Gamov

What’s the distinction between the Spring Framework and Spring Boot? If you are building a car, the Spring Framework is the engine while Spring Boot gives you the vehicle that you ride in. With experience teaching and answering questions on how to use Spring and Apache Kafka® together, Viktor Gamov (Principal Developer Advocate, Kong) designed a free course on Confluent Developer and previews it in this episode. Not only this, but he also explains why the opinionated Spring Framework would be a good hero in Marvel.

Spring is an ever-evolving framework that embraces modern, cloud-native technologies with cross-language options, such as Kotlin integration. Unlike its predecessors, the Spring Framework supports a modern version of Java and the requirements of the Twelve-Factor App manifesto, so you can move an application between environments without changing the code. With that engine in place, Spring Boot introduces a microservices architecture. Spring Boot ships with integrations for databases and messaging systems, reducing development time and increasing overall productivity.

Spring for Apache Kafka applies best practices of the Spring community to the Kafka ecosystem, including features that abstract away infrastructure code so you can focus on the programming logic that is important for your application. Spring for Apache Kafka provides a wrapper around the producer and consumer to ease Kafka configuration with APIs including KafkaTemplate, MessageListenerContainer, @KafkaListener, and TopicBuilder (see the sketch after these notes).

The Spring Framework and Apache Kafka course will equip you with the knowledge you need in order to build event-driven microservices using Spring and Kafka on Confluent Cloud. Tim and Viktor also discuss Spring Cloud Stream as well as Spring Boot integration with Kafka Streams and more.

EPISODE LINKS
Spring Framework and Apache Kafka course
Spring for Apache Kafka 101
Bootiful Stream Processing with Spring and Kafka
LiveStreams with Viktor Gamov
Use kafkaa35 to get 30% off "Kafka in Action"
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
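As a small taste of those APIs, here is a minimal Spring for Apache Kafka sketch, not from the course, assuming Spring Boot auto-configuration; the topic name, group ID, and partition counts are illustrative.

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class GreetingsMessaging {

  private final KafkaTemplate<String, String> template;

  public GreetingsMessaging(KafkaTemplate<String, String> template) {
    this.template = template;
  }

  // Declaratively create the topic on startup (name and sizing are illustrative).
  @Bean
  public NewTopic greetingsTopic() {
    return TopicBuilder.name("greetings").partitions(3).replicas(1).build();
  }

  // Producing: KafkaTemplate wraps the KafkaProducer that Spring Boot configures.
  public void sendGreeting(String name) {
    template.send("greetings", name, "Hello, " + name + "!");
  }

  // Consuming: Spring manages the MessageListenerContainer behind this annotation.
  @KafkaListener(topics = "greetings", groupId = "greetings-logger")
  public void onGreeting(String message) {
    System.out.println("Received: " + message);
  }
}
```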
10/19/202132 minutes, 44 seconds
Episode Artwork

Powering Event-Driven Architectures on Microsoft Azure with Confluent

When you order a pizza, what if you knew every step of the process from the moment it goes in the oven to being delivered to your doorstep? Event-Driven Architecture is a modern, data-driven approach that describes “events” (i.e., something that just happened). A real-time data infrastructure enables you to provide such event-driven data insights in real time. Israel Ekpo (Principal Cloud Solutions Architect, Microsoft Global Partner Solutions, Microsoft) and Alicia Moniz (Cloud Partner Solutions Architect, Confluent) discuss use cases on leveraging Confluent Cloud and Microsoft Azure to power real-time, event-driven architectures. As an Apache Kafka® community stalwart, Israel focuses on helping customers and independent software vendor (ISV) partners build solutions for the cloud and use open source databases and architecture solutions like Kafka, Kubernetes, Apache Flink, MySQL, and PostgreSQL on Microsoft Azure. He’s worked with retailers and those in the IoT space to help them adopt processes for inventory management with Confluent. Having a cloud-native, real-time architecture that can keep an accurate record of supply and demand is important in keeping up with the inventory and customer satisfaction. Israel has also worked with customers that use Confluent to integrate with Cosmos DB, Microsoft SQL Server, Azure Cognitive Search, and other integrations within the Azure ecosystem. Another important use case is enabling real-time data accessibility in the public sector and healthcare while ensuring data security and regulatory compliance like HIPAA. Alicia has a background in AI, and she expresses the importance of moving away from the monolithic, centralized data warehouse to a more flexible and scalable architecture like Kafka. Building a data pipeline leveraging Kafka helps ensure data security and consistency with minimized risk.The Confluent and Azure integration enables quick Kafka deployment with out-of-the-box solutions within the Kafka ecosystem. Confluent Schema Registry captures event streams with a consistent data structure, ksqlDB enables the development of real-time ETL pipelines, and Kafka Connect enables the streaming of data to multiple Azure services.EPISODE LINKSConfluent on Azure: Why You Should Add Confluent to Your Azure ToolkitIzzyAcademy Kafka on Azure Learning Series by Alicia MonizWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
10/14/202138 minutes, 42 seconds
Episode Artwork

Automating DevOps for Apache Kafka and Confluent ft. Pere Urbón-Bayes

Autonomy is key in building a sustainable and motivated team, and this core principle also applies to DevOps. Building self-serve Apache Kafka® and Confluent Platform deployments require a streamlined process with unrestricted tools—a centralized processing tool that allows teams in large or mid-sized organizations to automate infrastructure changes while ensuring shared standards are met. With more than 15 years of engineering and technology consulting experience, Pere Urbón-Bayes (Senior Solution Architect, Professional Services, Confluent) built an open source solution—JulieOps—to enable a self-serve Kafka platform as a service with data governance. JulieOps is one of the first solutions available to realize self-service for Kafka and Confluent with automation. Development, operations, security teams often face hurdles when deploying Kafka. How can a user request the topics that they need for their applications? How can the operations team ensure compliance and role-based access controls? How can schemas be standardized and structured across environments? Manual processes can be cumbersome with long cycle times. Automation reduces unnecessary interactions and shortens processing time, enabling teams to be more agile and autonomous in solving problems from a localized team level. Similar to Terraform, JulieOps is declarative. It's a centralized agent that uses the GitOps philosophy, focusing on a developer-centric experience with tools that developers are already familiar with, to provide abstractions to each product personas. All changes are documented and approved within the change management process to streamline deployments with timely and effective audits, as well as ensure security and compliance across environments.  The implementation of a central software agent, such as JulieOps, helps you automate the management of topics, configuration, access controls, Confluent Schema Registry, and more within Kafka. It’s multi tenant out of the box and supports on-premises clusters and the cloud with CI/CD practices. Tim and Pere also discuss the steps necessary to build a self-service Kafka with an automatic Jenkins process that will empower development teams to be autonomous.EPISODE LINKSJulieOps on GitHubJulieOps documentationBuilding a Self-Service Kafka Platform as a Service with GitOps with Pere Urbón-BayesOpen Service Broker APIDrive | Daniel H. Pink Watch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
10/7/202126 minutes, 8 seconds
Episode Artwork

Intro to Kafka Connect: Core Components and Architecture ft. Robin Moffatt

Kafka Connect is a streaming integration framework between Apache Kafka® and external systems, such as databases and cloud services. With expertise in ksqlDB and Kafka Connect, Robin Moffatt (Staff Developer Advocate, Confluent) helps and supports the developer community in understanding Kafka and its ecosystem. Recently, Robin authored a Kafka Connect 101 course that will help you understand the basic concepts of Kafka Connect, its key features, and how it works.

What’s Kafka Connect, and how does it work with Kafka and brokers? Robin explains that Kafka Connect is a Kafka API that runs separately from the Kafka brokers, running in its own Java virtual machine (JVM) process known as the Kafka Connect worker. Kafka Connect is essential for streaming data from different sources into Kafka and from Kafka to various targets. With Connect, you don’t have to write programs using Java; instead, you specify your pipeline using configuration.

As a pluggable framework, Kafka Connect has a broad set of more than 200 different connectors available on Confluent Hub, including but not limited to:
NoSQL and document stores (Elasticsearch, MongoDB, and Cassandra)
RDBMS (Oracle, SQL Server, DB2, PostgreSQL, and MySQL)
Cloud object stores (Amazon S3, Azure Blob Storage, and Google Cloud Storage)
Message queues (ActiveMQ, IBM MQ, and RabbitMQ)

Robin and Tim also discuss single message transforms (SMTs), as well as the distributed and standalone deployment modes of Kafka Connect. Tune in to learn more about Kafka Connect, and get a preview of the Kafka Connect 101 course.

EPISODE LINKS
Kafka Connect 101 course
Kafka Connect Fundamentals: What is Kafka Connect?
Meetup: From Zero to Hero with Kafka Connect
Confluent Hub: Discover Kafka connectors and more
12 Days of SMTs
Why Kafka Connect? ft. Robin Moffatt
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
9/28/202131 minutes, 18 seconds
Episode Artwork

Designing a Cluster Rollout Management System for Apache Kafka ft. Twesha Modi

As one of the top coders in her high school Java class, Twesha Modi is continuing to follow her passion for computer science as a senior at Cornell University, where she has proven to be one of the top programmers. During Twesha's summer internship at Confluent, she contributed to designing a new service to automate Apache Kafka® cluster rollout management—a process that releases the latest Kafka versions to customers' clusters in Confluent Cloud.During Twesha’s internship, she was part of the Platform team, which designed a cluster rollout management service capable of automating cluster rollouts and generating rollout plans that streamline Kafka operations in the cloud. The pre-existing manual process worked well in scenarios involving just a couple hundred clusters, but with growth and the need to upgrade a significantly larger cluster fleet to target versions in the cloud, the process needed to be automated in order to accelerate feature releases while ensuring security. Under the mentorship of Pablo Berton (Staff Software Engineer I, Product Infrastructure, Confluent), Nikhil Bhatia (Principal Engineer I, Product Infrastructure, Confluent), and Vaibhav Desai (Staff Software Engineer I, Confluent), Twesha supported the design of the rollout management process from scratch. Twesha’s 12-week internship helped her learn more about software architecture and the design process that goes into software as a service and beyond. Not only did she acquire new skills and knowledge, but she also met mentors who are willing to teach, share their experiences, and help her succeed along the way. Tim and Twesha also talk about the importance of asking questions during internships for the best learning experience, in addition to discussing the Vert.x, Java, Spring, and Kubernetes APIs. EPISODE LINKSMulti-Cluster Apache Kafka with Cluster Linking ft. Nikhil BhatiaWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
9/23/202130 minutes, 8 seconds
Episode Artwork

Apache Kafka 3.0 - Improving KRaft and an Overview of New Features

Apache Kafka® 3.0 is out! To spotlight major enhancements in this release, Tim Berglund (Apache Kafka Developer Advocate) provides a summary of what’s new in the Kafka 3.0 release from Krakow, Poland, including API changes and improvements to the early-access Kafka Raft (KRaft). KRaft is a built-in Kafka consensus mechanism that’s replacing Apache ZooKeeper going forward. It is recommended to try out new KRaft features in a development environment, as KRaft is not advised for production yet. One of the major features in Kafka 3.0 is the ability for KRaft controllers and brokers to efficiently store, load, and replicate snapshots of the cluster metadata topic partition. The Kafka controller is now responsible for generating producer IDs in both ZooKeeper and KRaft modes, easing the transition from ZooKeeper to KRaft on the Kafka 3.X version line. This update also moves us closer to the ZooKeeper-to-KRaft bridge release. Additionally, this release includes metadata improvements, exactly-once semantics, and KRaft reassignments. To enable a stronger record delivery guarantee, Kafka producers now turn on idempotence by default, together with acknowledgment of delivery by all replicas. This release also includes enhancements to Kafka Connect task restarts, timestamp-based synchronization in Kafka Streams, and more flexible configuration options for MirrorMaker2 (MM2). The first version of MirrorMaker has been deprecated, and MirrorMaker2 will be the focus for future developments. Besides that, this release drops support for older message formats, V0 and V1, and initiates the deprecation of Java 8 and Scala 2.12 across all components in Apache Kafka, with removal anticipated to complete in the future Apache Kafka 4.0 release.Apache Kafka 3.0 is a major release and step forward for the Apache Kafka project!EPISODE LINKSApache Kafka 3.0 release notes Read the blog to learn moreDownload Apache Kafka 3.0Watch the video version of this podcastJoin the Confluent Community Slack
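As a small, hedged illustration of the stronger delivery guarantee described above, the Java producer sketch below spells out the settings that 3.0 now applies by default (idempotence plus acknowledgment from all in-sync replicas). The topic name and bootstrap address are placeholders.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Written out explicitly for clarity; as of Apache Kafka 3.0 these are the defaults.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value")); // placeholder topic
        }
    }
}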
9/21/202115 minutes, 17 seconds
Episode Artwork

How to Build a Strong Developer Community with Global Engagement ft. Robin Moffatt and Ale Murray

A developer community brings people with shared interests and purpose together. The fundamental elements of a community are to gather, learn, support, and create opportunities for collaboration. A developer community is also an effective and efficient instrument for exploring and solving problems together. The power of a community is its endless advantages, from knowledge sharing to support, interesting discussions, and much more. Tim Berglund invites Ale Murray (Global Community Manager, Confluent) and Robin Moffatt (Staff Developer Advocate, Confluent) on the show to discuss the art of Q&A in a global community, share tips for building a vibrant developer community, and highlight the five strategic pillars for running a successful global community:MeetupsConferencesMVP program (e.g., Confluent Community Catalysts)Community hackathonsDigital platforms Digital platforms, such as a community Slack and forum, often consist of members who are well versed on topics of interest. As a leader in the Apache Kafka® and Confluent communities, Robin expresses the importance of being respectful when asking questions and providing details to the problem at hand. A well-formulated and focused question will more likely lead to a helpful answer. Oftentimes, the cognitive process of composing the question actually helps iron out the problem and draw out a solution. This process is also known as the rubber duck debugging theory. In a global community with diverse cultures and languages, being kind and having empathy is crucial. The tone and meaning of words can sometimes get lost in translation. Using emojis can help transcend language barriers by adding another layer of tone to plain text. Ale and Robin also discuss the pros and cons of a community forum vs. a Slack group. Tune in to find out more tips and best practices on building and engaging a developer community.EPISODE LINKSUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)How to Ask Good QuestionsWhy We Launched a ForumGrowing the Event Streaming Community During COVID-19 ft. Ale MurrayMeetup HubAnnouncing the Confluent Community ForumWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with Confluent
9/14/202135 minutes, 18 seconds
Episode Artwork

What Is Data Mesh, and How Does it Work? ft. Zhamak Dehghani

The data mesh architectural paradigm shift is all about moving analytical data away from a monolithic data warehouse or data lake into a distributed architecture—allowing data to be shared for analytical purposes in real time, right at the point of origin. The idea of data mesh was introduced by Zhamak Dehghani (Director of Emerging Technologies, Thoughtworks) in 2019. Here, she provides an introduction to data mesh and the fundamental problems that it’s trying to solve. Zhamak describes how the complexity of data and the ambition to use it have grown in today’s industry. But what is data mesh? For over half a century, we’ve been trying to democratize data to deliver value and provide better analytic insights. With the ever-growing number of distributed domain data sets, diverse information arrives in increasing volumes and with high velocity. To remove friction and meet the requirement that data be consumable for operational needs across various use cases, the best approach is to mesh the data. This means connecting data in a peer-to-peer fashion and liberating data for analytics, machine learning, serving up data-intensive applications across the organization, and more. Data mesh tackles the deficiencies of the traditional, centralized data lake and data warehouse platform architecture. The data mesh paradigm is founded on four principles: Domain-oriented ownershipData as a productData available everywhere in a self-serve data infrastructureData standardization governanceA decentralized, technology-agnostic data architecture enables you to synthesize data and innovate. The starting point is embracing the ideology that data can be anywhere. Source-aligned data should serve as a product available for people across the organization to combine, explore, and use to drive actionable insights. Zhamak and Tim also discuss the next steps we need to take in order to bring data mesh to life at the industry level.To learn more about the topic, you can visit the all-new Confluent Developer course: Data Mesh 101. Confluent Developer is a single destination with resources to begin your Kafka journey. EPISODE LINKSZhamak Dehghani: How to Build the Data Mesh FoundationData Mesh 101Practical Data Mesh: Building Decentralized Data Architectures with Event StreamsSaxo Bank’s Best Practices for a Distributed Domain-Driven Architecture Founded on the Data MeshPlacing Apache Kafka at the Heart of a Data Revolution at Saxo BankWhy Data Mesh?Watch video version of this podcastJoin the Confluent CommunityLearn Kafka on Confluent DeveloperUse PODCAST100 to get $100 of Confluent Cloud usage (details)
9/9/202134 minutes, 56 seconds
Episode Artwork

Multi-Cluster Apache Kafka with Cluster Linking ft. Nikhil Bhatia

Note: This episode was recorded when Cluster Linking was in preview mode. It’s now generally available as part of the Confluent Q3 ‘21 release on August 17, 2021. Infrastructure needs to react in real time to support globally distributed events, such as cloud migration, IoT, edge data collection, and disaster recovery. To provide a seamless yet cloud-native, cross-cluster topic replication experience, Nikhil Bhatia (Principal Engineer I, Product Infrastructure, Confluent) and the team engineered a solution called Cluster Linking. Available on Confluent Cloud, Cluster Linking is an API that enables Apache Kafka® to work across multiple datacenters, making it possible to design globally available distributed systems. As industries adopt multi-cloud usage and depart from on-premises and single-cluster operations, we need to rethink how clusters operate across regions in the cloud. Cluster Linking is built into Confluent Server as an inter-cluster replication layer, allowing you to connect clusters together and replicate topics asynchronously without the need for Connect. Cluster Linking requires zero external components when moving messages from one cluster to another. It replicates data to its destination partition by partition and byte for byte, preserving offsets from the source cluster. Different from Confluent Replicator and MirrorMaker2, Cluster Linking simplifies failover in high availability and disaster recovery scenarios, improving overall efficiency by avoiding recompression. As a cost-effective alternative to Multi-Region Clusters, Cluster Linking reduces traffic between datacenters and enables inter-cluster replication without the need to deploy and manage a separate Connect cluster. With low recovery point objective (RPO) and recovery time objective (RTO), Cluster Linking enables scenarios such as: Migration to cloud: Remove the complexity layer of self-run datacenters with fully managed cloud services. Global reads: Enable users to connect to Kafka from around the globe and consume data locally, improving performance and cost effectiveness. Disaster recovery: Prepare your system to tolerate datacenter, regional, or cloud-level disasters, ensuring zero data loss and high availability. Find out more about Cluster Linking architecture and set your data in motion with global Kafka.EPISODE LINKSAnnouncing the Confluent Q3 '21 ReleaseIntroducing Cluster Linking in Confluent Platform 6.0What is Cluster Linking? Resurrecting In-Sync Replicas with Automatic Observer Promotion ft. Anna McDonaldWatch video version of this podcastJoin the Confluent CommunityLearn Kafka at Confluent DeveloperDemo: Event-Driven Microservices with ConfluentUse PODCAST100 to get $100 of Confluent Cloud usage (details)
8/31/202131 minutes, 4 seconds
Episode Artwork

Using Apache Kafka and ksqlDB for Data Replication at Bolt

What does a ride-hailing app that offers micromobility and food delivery services have to do with data in motion? In this episode, Ruslan Gibaiev (Data Architect, Bolt) shares Bolt’s road to adopting Apache Kafka® and ksqlDB for stream processing to replicate data from transactional databases to analytical warehouses. Rome wasn't built in a day, and neither was the adoption of Kafka and ksqlDB at Bolt. Initially, Bolt noticed the need for system standardization and for replacing its unreliable query-based change data capture (CDC) process. As an experienced Kafka developer, Ruslan believed that Kafka was the right foundation for adopting change data capture as a company-wide event streaming solution. Persuading the team at Bolt to adopt and buy in was hard at first, but Ruslan made it possible. Eventually, the team replaced query-based CDC with log-based CDC from Debezium, built on top of Kafka. Shortly after the implementation, developers at Bolt began to see precise, correct, and real-time data. As Bolt continues to grow, they see the need to implement a data lake or a data warehouse for OLTP system data replication and stream processing. After carefully considering several different solutions and frameworks such as ksqlDB, Apache Flink®, Apache Spark™, and Kafka Streams, ksqlDB shone brightest for their business requirements. Bolt adopted ksqlDB because it is native to the Kafka ecosystem, and it is a perfect fit for their use case. They found ksqlDB to be a particularly good fit for replicating all their data to a data warehouse for a number of reasons, including: Easy to deploy and manageLinearly scalableNatively integrates with Confluent Schema Registry Tune in to find out more about Bolt’s adoption journey with Kafka and ksqlDB. EPISODE LINKSInside ksqlDB Course ksqlDB 101 CourseHow Bolt Has Adopted Change Data Capture with Confluent PlatformAnalysing Changes with Debezium and Kafka StreamsNo More Silos: How to Integrate Your Databases with Apache Kafka and CDCChange Data Capture with Debezium ft. Gunnar MorlingAnnouncing ksqlDB 0.17.0Real-Time Data Replication with ksqlDBWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
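As a hedged sketch of what processing log-based CDC data with ksqlDB can look like (not Bolt's actual pipeline), the Java snippet below uses the ksqlDB Java client to declare a stream over a hypothetical Debezium change topic and derive a filtered stream from it. The host, topic, and column names are invented for illustration, and a running ksqlDB server is assumed.

import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

public class CdcToKsqlDbSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a ksqlDB server is reachable at this (illustrative) address.
        ClientOptions options = ClientOptions.create().setHost("localhost").setPort(8088);
        Client client = Client.create(options);

        // Declare a stream over a hypothetical Debezium change topic...
        client.executeStatement(
            "CREATE STREAM rides_cdc (id BIGINT, status VARCHAR, updated_at BIGINT) "
          + "WITH (KAFKA_TOPIC='db.public.rides', VALUE_FORMAT='JSON');").get();

        // ...and continuously derive only completed rides for the analytical side.
        client.executeStatement(
            "CREATE STREAM completed_rides AS "
          + "SELECT id, updated_at FROM rides_cdc WHERE status = 'completed' EMIT CHANGES;").get();

        client.close();
    }
}

From there, a sink connector (or the warehouse's own Kafka ingestion) would load the derived topic into the analytics store.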
8/26/202129 minutes, 15 seconds
Episode Artwork

Placing Apache Kafka at the Heart of a Data Revolution at Saxo Bank

Monolithic applications present challenges for organizations like Saxo Bank, including difficulties with transitioning to the cloud, data efficiency, and data management in a regulated environment. Graham Stirling, the head of data platforms at Saxo Bank and also a self-proclaimed recovering architect on the pathway to delivery, shares his experience over the last 2.5 years as Saxo Bank placed Apache Kafka® at the heart of their company—something they call a data revolution. Before adopting Kafka, Saxo Bank encountered scalability problems. They previously relied on a centralized data engineering team, using the database as an integration point and looking to their data warehouse as the center of the analytical universe. However, this needed to evolve. For a better data strategy, Graham turned his attention towards embracing a data mesh architecture: Create a self-serve platform that enables domain teams to publish and consume data assetsFederate ownership of domain data models and centralize oversight to allow a standard language to emerge while ensuring information efficiency Believe in the principle of data as a product to improve business decisions and processes Data mesh was first defined by Zhamak Dehghani in 2019 as a data platform architecture paradigm, and it has since become an integral part of Saxo Bank’s approach to data in motion. Using a combination of Kafka GitOps, pipelines, and metadata, Graham intended to free domain teams from having to think about the mechanics, such as connector deployment, language bindings, style guide adherence, and the handling of personally identifiable information (PII). To reduce operational complexity, Graham recognized the importance of using Confluent Schema Registry as a serving layer for metadata. Saxo Bank authored schemas with Avro IDL for composability and standardization and later switched over to Buf for strongly typed metadata. A further layer of metadata allows Saxo Bank to define FpML-like coding schemes to specify information classification, reference external standards, and link semantically related concepts. By embarking on the data mesh operating model, Saxo Bank scales data processing in a way that was previously unimaginable, allowing them to generate value sustainably and to be more efficient with data usage. Tune in to this episode to learn more about the following:Data meshTopic/schema as an APIData as a productKafka as a fundamental building block of data strategyEPISODE LINKSZhamak Dehghani Kafka Summit 2021 KeynoteData Mesh 101 CourseData Mesh Principles and Logical ArchitectureSaxo Bank’s Best Practices for a Distributed Domain-Driven ArchitectureWatch video version of this podcastJoin the Confluent CommunityLearn Kafka on Confluent DeveloperDemo: Event-Driven Microservices with ConfluentUse PODCAST100 to get $100 of free Confluent Cloud (details)
8/19/202128 minutes, 37 seconds
Episode Artwork

Advanced Stream Processing with ksqlDB ft. Michael Drogalis

ksqlDB makes it easy to read, write, process, and transform data on Apache Kafka®, the de facto event streaming platform. With simple SQL syntax, pre-built connectors, and materialized views, ksqlDB’s powerful stream processing capabilities enable you to quickly start processing real-time data at scale. But how does ksqlDB work? In this episode, Michael Drogalis (Principal Product Manager, Product Management, Confluent) previews an all-new Confluent Developer course: Inside ksqlDB, where he provides a full overview of ksqlDB’s internal architecture and delves into advanced ksqlDB features. When it comes to ksqlDB or Kafka Streams, there’s one principle to keep in mind: ksqlDB and Kafka Streams share a runtime. ksqlDB runs its SQL queries by dynamically writing Kafka Streams topologies. Leveraging Confluent Cloud makes it even easier to use ksqlDB.Once you are familiar with ksqlDB’s basic design, you’ll be able to troubleshoot problems and build real-time applications more effectively. The Inside ksqlDB course is designed to help you advance in ksqlDB and Kafka. Paired with hands-on exercises and ready-to-use code, the course covers topics including: ksqlDB architectureHow stateless and stateful operations workStreaming joins Table-table joinsElastic scaling High availabilityMichael also sheds light on ksqlDB’s roadmap: Building out the query layer so that it is highly scalable and able to execute thousands of concurrent subscriptionsMaking Confluent Cloud the best place to run ksqlDB and process streamsTune in to this episode to find out more about the Inside ksqlDB course on Confluent Developer. The all-new website provides diverse and comprehensive resources for developers looking to learn about Kafka and Confluent. You’ll find free courses, tutorials, getting started guides, quick starts for 60+ event streaming patterns, and more—all in a single destination. EPISODE LINKSInside ksqlDB Course ksqlDB 101 CourseHow ksqlDB Works: Internal Architecture and Advanced FeaturesHow Real-Time Stream Processing Safely Scales with ksqlDB, AnimatedHow Real-Time Materialized Views Work with ksqlDB, AnimatedHow Real-Time Stream Processing Works with ksqlDB, AnimatedWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Intro to Event-Driven Microservices with ConfluentUse PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
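To make the shared-runtime point concrete, here is a minimal, hypothetical Kafka Streams topology of the kind ksqlDB generates under the covers for a simple filtering statement. The topic names, predicate, and configuration values are invented, and the comment shows only a rough ksqlDB equivalent rather than an exact translation.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class KsqlLikeTopologySketch {
    public static void main(String[] args) {
        // Roughly what a statement like
        //   CREATE STREAM big_orders AS SELECT * FROM orders WHERE amount > 100;
        // compiles down to: a Kafka Streams topology that filters one topic into another.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Long> orders = builder.stream("orders");
        orders.filter((orderId, amount) -> amount != null && amount > 100)
              .to("big_orders");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ksql-like-filter");  // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}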
8/11/202128 minutes, 26 seconds
Episode Artwork

Minimizing Software Speciation with ksqlDB and Kafka Streams ft. Mitch Seymour

Building a large, stateful Kafka Streams application that tracks the state of each outgoing email is crucial to marketing automation tools like Mailchimp. Joining us today in this episode, Mitch Seymour, staff engineer at Mailchimp, shares how ksqlDB and Kafka Streams handle the company’s largest source of streaming data. Mailchimp is almost like a post office, except that instead of sending physical parcels, it sends billions of emails per day. Monitoring the state of each email can provide visibility into the core business function, and it also returns information about the health of both internal and remote message transfer agents (MTAs). Finding a way to track those MTA systems in real time is pivotal to the success of the business. Mailchimp is an early Apache Kafka® adopter that started using the technology in 2014, a time before ksqlDB, Kafka Connect, and Kafka Streams came into the picture. The stream processing applications that they were building faced many complexities and rough edges. As their use cases evolved and scaled over time at Mailchimp, a large number of applications deviated from the initial implementation and design, and divergent applications emerged that they had to maintain. To reduce cost and complexity and to standardize stream processing applications, adopting ksqlDB and Kafka Streams became the solution to their problems. This is what Mitch calls “minimizing software speciation in our software.” It's what happens when applications evolve into multiple distinct systems in response to failure-handling strategies, increased load, and the like. Using different scaling strategies and communication protocols creates system silos that are challenging to maintain.Replacing the existing architecture that supported point-to-point communication, the new Mailchimp architecture uses Kafka as its foundation with scalable custom functions, such as a reusable and highly functional user-defined function (UDF). The reporting capabilities have also evolved from Kafka Streams’ interactive queries into enhanced queries with Elasticsearch. Turning experiences into books, Mitch is also the author of O’Reilly’s Mastering Kafka Streams and ksqlDB and the author and illustrator of Gently Down the Stream: A Gentle Introduction to Apache Kafka. EPISODE LINKSThe Exciting Frontier of Custom ksql Functions Kafka Streams 101 CourseMastering Kafka Streams and ksqlDB EbookksqlDB UDFs and UDADs Made EasyUsing Apache Kafka as a Scalable, Event-Driven Backbone for Service ArchitecturesThe Haiku Approach to Writing SoftwareWatch the video version of this podcastJoin the Confluent CommunityLearn Kafka on Confluent DeveloperLive demo: Kafka streaming on Confluent CloudUse PODCAST100 to get $100 of free Confluent Cloud usage (details)
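As a hedged illustration of the kind of reusable custom function mentioned above (not Mailchimp's actual code), here is a minimal ksqlDB UDF written in Java with the standard @UdfDescription/@Udf annotations; the function name and logic are invented.

import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;
import io.confluent.ksql.function.udf.UdfParameter;

@UdfDescription(name = "normalize_email", description = "Lower-cases and trims an email address.")
public class NormalizeEmailUdf {

    @Udf(description = "Returns a normalized form of the given email, or null if the input is null.")
    public String normalizeEmail(@UdfParameter(value = "email") final String email) {
        if (email == null) {
            return null;
        }
        return email.trim().toLowerCase();
    }
}

Packaged as a JAR and dropped into the ksqlDB extensions directory, a function like this becomes callable from any SQL statement, which is what makes it reusable across otherwise divergent applications.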
8/5/202131 minutes, 32 seconds
Episode Artwork

Collecting Data with a Custom SIEM System Built on Apache Kafka and Kafka Connect ft. Vitalii Rudenskyi

The best-informed business insights that support better decision-making begin with data collection, ahead of data processing and analytics. Enterprises nowadays are engulfed by data floods, with data sources ranging from cloud services and applications to thousands of internal servers. The massive volume of data that organizations must process presents data ingestion challenges for many large companies. In this episode, data security engineer Vitalii Rudenskyi discusses the decision to replace a vendor security information and event management (SIEM) system by developing a custom solution with Apache Kafka® and Kafka Connect for a better data collection strategy.Having a data collection infrastructure layer is mission critical for Vitalii and the team in helping enterprises protect data and detect security events. Building on the base of Kafka, their custom SIEM infrastructure is configurable and designed to be able to ingest and analyze huge amounts of data, including personally identifiable information (PII) and healthcare data. When it comes to collecting data, there are two fundamental choices: push or pull. But how about both? Vitalii shares that Kafka Connect API extensions are integral to data ingestion in Kafka. Three key components allow their SIEM system to collect and record data daily by pushing and pulling: NettySource Connector: A connector developed to receive data from different network devices into Apache Kafka. It receives data using both the TCP and UDP transport protocols and can be adapted to receive any data, from Syslog to SNMP and NetFlow.PollableAPI Connector: A connector made to receive data from remote systems, pulling data from different remote APIs and services.Transformations Library: Useful extensions to the existing out-of-the-box transformations, following a “tag and apply” approach that puts collected data in the right place, in the right format.Listen to learn more as Vitalii shares the importance of data collection and the building of a custom solution to address multi-source data management requirements. EPISODE LINKSFeed Your SIEM Smart with Kafka ConnectTo Pull or to Push Your Data with Kafka Connect? That Is the Question.Free Kafka Connect 101 CourseSyslog Source Connector for Confluent PlatformJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
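As a rough sketch of the "tag and apply" idea (not the team's actual library), the following is a minimal custom single message transform in Java built on Kafka Connect's Transformation API. It tags each record with a header naming its origin; the configuration key and header name are invented for illustration.

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

import java.util.Map;

// Tags every record passing through Kafka Connect with a header naming its origin.
public class TagSourceTransform<R extends ConnectRecord<R>> implements Transformation<R> {

    private String sourceName = "unknown";

    @Override
    public void configure(Map<String, ?> configs) {
        Object value = configs.get("source.name"); // illustrative config key
        if (value != null) {
            sourceName = value.toString();
        }
    }

    @Override
    public R apply(R record) {
        // Add a header so downstream consumers know which device or API produced the event.
        record.headers().addString("siem.source", sourceName); // illustrative header name
        return record;
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("source.name", ConfigDef.Type.STRING, "unknown",
                ConfigDef.Importance.MEDIUM, "Label describing the data source");
    }

    @Override
    public void close() {}
}

A transform like this is then referenced from connector configuration, so the tagging policy travels with the pipeline definition rather than being baked into each application.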
7/27/202125 minutes, 14 seconds
Episode Artwork

Consistent, Complete Distributed Stream Processing ft. Guozhang Wang

Stream processing has become an important part of the big data landscape as a new programming paradigm to implement real-time data-driven applications. One of the biggest challenges for streaming systems is to provide correctness guarantees for data processing in a distributed environment. Guozhang Wang (Distributed Systems Engineer, Confluent) co-authored a paper, along with other leaders in the Apache Kafka® community, on consistency and completeness in stream processing in Apache Kafka, in order to shed light on what a reimagined, modern infrastructure looks like. In his white paper titled Consistency and Completeness: Rethinking Distributed Stream Processing in Apache Kafka, Guozhang covers the following topics: Streaming correctness challengesStream processing with KafkaExactly-once in Kafka StreamsFor context, accurate, real-time stream processing is a natural fit for modern organizations that are composed of vertically separated engineering teams. In the past, stream processing was considered an auxiliary system to normal batch-oriented processing systems, often bearing issues around consistency and completeness. Modern streaming engines such as ksqlDB and Kafka Streams, by contrast, provide strong correctness guarantees and are designed to be authoritative, serving as the source of truth rather than an approximation. There are two major umbrellas of correctness guarantees: Consistency: Ensuring unique and extant records, also referred to as exactly-once semanticsCompleteness: Ensuring the correct order of records Guozhang also answers the question of why he wrote this academic paper, as he believes in the importance of knowledge sharing across the community and bringing industry experience back to academia (the paper is also published in SIGMOD 2021, one of the most important conference proceedings in the data management research area). This will help foster the next generation of industry innovation and push one step forward in the data streaming and data management industry. In Guozhang's own words, "Academic papers provide you this proof of concept design, which gets groomed into a big system."EPISODE LINKSWhite Paper: Rethinking Distributed Stream Processing in Apache KafkaBlog: Rethinking Distributed Stream Processing in Apache KafkaEnabling Exactly-Once in Kafka StreamsWhy Kafka Streams Does Not Use Watermarks ft. Matthias SaxStreams and Tables: Two Sides of the Same CoinWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get $60 of free Confluent Cloud usage (details)
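For readers who want to see where these guarantees surface in code, here is a minimal, hedged example of turning on exactly-once processing in a Kafka Streams application. The application ID, topics, and bootstrap address are placeholders; in current Kafka versions the relevant setting is exactly_once_v2.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class ExactlyOnceStreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-demo");          // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Ask the runtime for exactly-once processing (transactions plus idempotent producers).
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topics

        new KafkaStreams(builder.build(), props).start();
    }
}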
7/22/202129 minutes
Episode Artwork

Powering Real-Time Analytics with Apache Kafka and Rockset

Using large amounts of streaming data increasingly requires interactive, real-time analytics and dashboards—and this applies to any industry, including tech. CTO and Co-Founder of Rockset Dhruba Borthakur shares how his company uses Apache Kafka® to perform complex joins, search, and aggregations on streaming data with low latencies. The Kafka database integrations allow his team to build a cloud-native analytics database that is a fundamental piece of enterprise infrastructure. In e-commerce, logistics, and manufacturing especially, apps typically receive over 20 million events a day. As those events roll in, it becomes even more critical that real-time indexes can be queried with low latencies. This way, you can build high-performing and scalable dashboards that allow your organization to use clickstream and behavioral data to inform decisions and responses to consumer behavior. Typically, the data follows these steps:Events come in from mobile or web apps, such as clickstream or IoT dataThe app data is sent to the cloudData is fed into the database in real timeThis information is shared live on a dashboard or via SaaS application embedsWhen working with real-time analytics on a real-time database, the two need to be continuously synced for optimal performance. If the latency is too significant, there can be a missed opportunity to interact with customers on their platform. You may want to write queries that join streaming data with transactional data or historical data lakes, even for complex analytics. You always want to make sure that the database performs at a speed and scale appropriate for customers to have a seamless experience. Using Rockset, you can write ANSI SQL on semi-structured and schemaless data. This way, you can achieve those complex joins with low latencies. Additional data is often required to supplement streaming data, and it can easily be brought in through the available integrations. With a database solution that integrates easily and provides the correct data, you can make better decisions and maximize the results. EPISODE LINKSReal-Time Analytics and Monitoring Dashboards with Apache Kafka and RocksetWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
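To ground the "data is fed into the database in real time" step, here is a small, hypothetical Java consumer of the kind an indexing service might run to pull clickstream events out of Kafka as they arrive. The topic, group ID, and bootstrap address are invented; an actual analytics store such as Rockset would ingest through its own managed integration rather than this sketch.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ClickstreamIndexerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "clickstream-indexer");     // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("clickstream")); // illustrative topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // A real indexer would write the event into its analytics store here.
                    System.out.printf("indexing %s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}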
7/15/202125 minutes, 44 seconds
Episode Artwork

Automated Event-Driven Architectures and Microservices with Apache Kafka and SmartBear

Is it possible to have automated adoption of your event-driven architectures and microservices? The answer is yes! Alianna Inzana, product leader for API testing and virtualization at SmartBear, uses this evolutionary model to make event services reusable, functional, and strategic for both in-house needs and clients. SmartBear relies on Apache Kafka® to drive its automated microservices solutions forward through scaled architecture and adaptive workflows. Although the path to adoption may be different across use case and client requirements, it is all about maturity and API lifecycle management. As your services mature and grow, so should your event streaming architecture. The data your organization collects is no longer in a silo—rather, it has to be accessible across several events. The best architecture can handle these fluctuations. Alianna explains that although the result of these requirements is an architectural pattern, it doesn’t start that way. Instead, these automation processes begin as coding patterns on isolated platforms. You cannot rush code development at the coding stage because you never truly know how it will work for the end system. Testing must be done at each step of the implementation to ensure that event-driven architectures work for each step and variation of the service. The code will be altered as needed throughout the trial phase. Next, the product and development teams compare what architecture you currently have to where you’d like it to be. It is all about the product and how you’d like to scale it. The tricky part comes in the trial and error of bringing on each product and service one by one. However, once your offerings and architecture are synced, you’re saving time and effort not building something new for every microservice. As a result of event-driven architectures, you can minimize duplicate efforts and adapt your business offerings to them as the need arises. This is a strategic step for any organization looking to have a cohesive product offering. Architecture automation allows for flexible features that scale with your event services. Once these are in place, a company can use and grow them as needed. With innovative and adaptable event-driven architectures, organizations can grow with clients and scale the backend system as required. EPISODE LINKSExploring Event-Driven Architectures: Why Quality MattersApache Kafka + Event-Driven Architecture Support in ReadyAPIWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
7/8/202129 minutes, 53 seconds
Episode Artwork

Data-Driven Digitalization with Apache Kafka in the Food Industry at BAADER

Coming out of university, Patrick Neff (Data Scientist, BAADER) was used to “perfect” examples of datasets. However, he soon realized that in the real world, data is often either unavailable or unstructured. This compelled him to learn more about collecting data, analyzing it in a smart and automatic way, and exploring Apache Kafka® as a core ecosystem while at BAADER, a global provider of food processing machines. After Patrick began working with Apache Kafka in 2019, he developed several microservices with Kafka Streams and used Kafka Connect for various data analytics projects. Focused on the food value chain, Patrick’s mission is to optimize processes specifically around transportation and processing. While consulting for one customer, Patrick detected areas for improvement related to animal welfare, lost revenue, unnecessary costs, and carbon dioxide emissions. He also noticed that machines are often ready to send data into the cloud, but the correct presentation and/or analysis of that data is missing, and with it the possibility of optimization. As a result:Data is difficult to understand because of missing unitsData has not been analyzed so farComparison of machine/process performance for the same machine but different customers is missing In response to this problem, he helped develop the Transport Manager. Based on data analytics results, the Transport Manager presents information like a truck’s expected arrival time and its current poultry load. This leads to better planning, reduced transportation costs, and improved animal welfare. The Asset Manager is another solution that Patrick has been working on, and it presents IoT data in real time and in an understandable way to the customer. Both of these are data analytics projects that use machine learning.Kafka topics store data, provide insight, and help detect dependencies, such as why trucks are stopping along the route. Kafka is also a real-time platform, meaning that alerts can be sent the moment a certain event occurs, using ksqlDB or Kafka Streams.As a result of running Kafka on Confluent Cloud and creating a scalable data pipeline, the BAADER team is able to break data silos and produce live data from trucks via MQTT. They’ve even created an Android app for truck drivers, along with a desktop version that monitors the data entered by a truck driver in the app, in addition to other information such as expected time of arrival and weather—and the best part: All of it is done in real time.EPISODE LINKSLearn more about BAADER’s data-in-motion use casesRead about how BAADER uses Confluent CloudWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
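As a hedged sketch of the kind of real-time alerting described here (not BAADER's actual code), the Kafka Streams snippet below watches a hypothetical truck-telemetry topic and emits an alert whenever a truck reports zero speed. The topic names, value types, and condition are invented for illustration.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class TruckStopAlertSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Key: truck ID, value: reported speed in km/h (both invented for illustration).
        builder.stream("truck-speed", Consumed.with(Serdes.String(), Serdes.Double()))
               .filter((truckId, speed) -> speed != null && speed == 0.0)
               .mapValues(speed -> "Truck stopped along the route")
               .to("truck-alerts", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "truck-stop-alerts"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        new KafkaStreams(builder.build(), props).start();
    }
}

A notification service (or the driver-facing app) could then consume the alerts topic and react the moment an event occurs.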
6/29/202127 minutes, 53 seconds
Episode Artwork

Chaos Engineering with Apache Kafka and Gremlin

The most secure clusters aren’t built on the hopes that they’ll never break. They are the clusters that are broken on purpose and with a specific goal. When organizations want to avoid systematic weaknesses, chaos engineering with Apache Kafka® is the route to go. Your system is only as reliable as its highest point of vulnerability. Patrick Brennan (Principal Architect) and Tammy Butow (Principal SRE) from Gremlin discuss how they do their own chaos engineering to manage and resolve high-severity incidents across the company. But why would an engineer break things when they would have to fix them? Brennan explains that finding weaknesses in the cloud environment helps Gremlin to:Avoid lengthy downtime when there is an issue (not if, but when)Halt lost revenue that results from service interruptionsMaintain customer satisfaction with their stream processing servicesSteer clear of burnout for the SRE team Chaos engineering is all about experimenting with injecting failure directly into the clusters on the cloud. The key is to start with a small blast radius and then scale as needed. It is critical that SREs have a plan for failure and then practice an intense communication methodology with the development team. This plan has to be detailed and includes precise diagramming so that nothing in the chaos engineering process is an anomaly. Once the process is confirmed, SREs can automate it, and nothing about it is random. When something breaks or you find a vulnerability, it only helps the overall network become stronger. This becomes a way to problem-solve across engineering teams collaboratively. Chaos engineering makes it easier for SRE and development teams to do their job, and it helps the organization promote security and reliability to their customers. With Kafka, companies don’t have to wait for an issue to happen. They can make their disorder within microservices on the cloud and fix vulnerabilities before anything catastrophic happens.EPISODE LINKSTry Gremlin’s free tierJoin Gremlin’s Slack channelLearn more about Girl Geek AcademyLearn more about gardeningWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
6/22/202135 minutes, 32 seconds
Episode Artwork

Boosting Security for Apache Kafka with Confluent Cloud Private Link ft. Dan LaMotte

Confluent Cloud isn’t just for public access anymore. As the requirement for security across sectors increases, so does the need for virtual private cloud (VPC) connections. It is becoming more common today to come across Apache Kafka® implementations with the latest private link connectivity option. In the past, most Confluent Cloud users were satisfied with public connectivity paths and VPC peering. However, enabling private links on the cloud is increasingly important for security across networks and even the reliability of stream processing. Dan LaMotte, who since this recording became a staff software engineer II, and his team are focused on making secure connections for customers to utilize Confluent Cloud. This is done by allowing two VPCs to connect without sharing their own private IP address space. There’s no crossover between them, and it lends itself to entirely secure, unidirectional connectivity from customer to service provider without sharing IPs. But why do clients still want to peer if they have the option to interface privately? Dan explains that peering has been known as the base architecture for this type of connection. Peering, at its core, is just a point-to-point cloud connection between two VPCs. With global connectivity becoming more commonplace and the rise of globally distributed working teams, networks are often not even based in the same region. Regardless of region, however, organizations must take the level of security into account. Peering and transit gateways have been the baseline for these use cases, and this is where Kafka’s private links come in handy. Private links now allow team members to connect to Confluent Cloud instantaneously without depending on the public internet. You can directly connect all of your multi-cloud options and microservices within your own secure space that is private to the company and to specific IP addresses. Also, the connection must be initiated on the client side, which serves as an added security measure. With the option of private links, you can now also build microservices that use new functionality that wasn’t available in the past, such as:Multi-cloud clustersProduct enhancements with IP rangesUnlimited IP space Scalability Load balancingYou no longer need to segment your workflow, thanks to completely secure connections between teams that are otherwise disconnected from one another. EPISODE LINKSSecuring the Cloud with VPC Peering ft. Daniel LaMotteSoftware Engineer, Cloud Networking [Remote – AMER]Software Engineer, Cloud Networking [Remote – USA]eBPF documentationWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
6/15/202125 minutes, 55 seconds
Episode Artwork

Confluent Platform 6.2 | What’s New in This Release + Updates

Based on Apache Kafka® 2.8, Confluent Platform 6.2 introduces Health+, which offers intelligent alerting, cloud-based monitoring tools, and accelerated support so that you can get notified of potential issues before they manifest as critical problems that lead to downtime and business disruption.Health+ provides ongoing, real-time analysis of performance and cluster metadata for your Confluent Platform deployment, collecting only metadata so that you can continue managing your deployment as you see fit, with complete control.With cluster metadata continuously analyzed through an extensive library of expert-tested rules and algorithms, Health+ gives you quick insights into cluster performance and helps you spot potential problems before they occur. To ensure complete visibility, organizations can customize the types of notifications that they receive and choose to receive them via Slack, email, or webhook. Each notification that you receive is aimed at avoiding larger downtime or data loss by helping identify smaller issues before they become bigger problems.In today’s episode, Tim Berglund (Senior Director of Developer Experience, Confluent) highlights everything that’s new in Confluent Platform 6.2 and all the latest updates.EPISODE LINKSCheck out the release notesRead the blog post: Introducing Health+ with Confluent Platform 6.2Download Confluent Platform 6.2Watch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
6/10/20219 minutes, 20 seconds
Episode Artwork

Adopting OpenTelemetry in Confluent and Beyond ft. Xavier Léauté

Collecting internal, operational telemetry from Confluent Cloud services and thousands of clusters is no small feat. Stakeholders need to rely on the same data to make operational decisions. Whether it be metrics from clusters in Confluent Cloud or traces from our internal service, they all provide valuable insights not only to engineering teams but also to customers for their own operations and for business reporting needs. Traditionally, this data needs to be collected in multiple ways to satisfy all the different requirements. We leverage third-party vendors for our operational needs, which usually means deploying vendor agents or libraries in addition to our own, as we also need to collect some of the same data to expose to customers.However, this sometimes leads to discrepancies between various systems, which are often hard to reconcile and make it harder to troubleshoot issues across engineering, data science, and other teams.One of the earliest software engineers at Confluent, Xavier Léauté is no stranger to this. At Confluent, he leads our observability engineering efforts in Confluent Cloud.With OpenTelemetry, we can collect data in a vendor-agnostic way. It defines a standard format that all our services can use to expose telemetry, and it provides Go and Java libraries that we can use to instrument our services. Many vendors already integrate with OpenTelemetry, which gives us the flexibility to try out different observability solutions with minimal effort, without the need to rewrite applications or deploy new agents. This means that the same data we send to third parties can also be collected internally (in our own clusters).The same source of data can then be leveraged in many different ways:Using Kafka Connect, we can send this data to our data warehouse and data science teams in real time to derive many of the metrics that we use to track the health of our cloud businessThat very same data also powers our Cloud Metrics API to provide our customers visibility into their infrastructureEngineers and support teams can collect more fine-grained data to troubleshoot incidents or understand low-level application behaviorWe’ve also adopted the same approach for on-prem customers, which enables us to collect telemetry into our cloud and help them troubleshoot issues, leveraging the same infrastructure that we already built for Cloud. Regarding OpenTelemetry efforts in Apache Kafka®, we’re working on KIP-714 which will allow us to collect Kafka client metrics to help better understand client-side problems without the need to instrument client applications. Our ultimate goal has always been to migrate to OpenTelemetry, which is now underway. We’d like to make a way for direct integration with OpenTelemetry in Kafka, based on the work that we’ve done at Confluent.EPISODE LINKSOpenTelemetry Twitch channelConfluent Cloud Metrics APIConfluent Platform Proactive SupportKIP-714: Client Metrics and ObservabilityWatch the video version of this podcastJoin the Confluent CommunityLearn more at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
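As a minimal, hedged illustration of vendor-agnostic instrumentation with the OpenTelemetry Java API (not Confluent's internal code), the snippet below wraps a unit of work in a span. The tracer name, span name, and attribute are placeholders, and it assumes an OpenTelemetry SDK or agent has been configured elsewhere to export the data.

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class OtelSpanSketch {
    public static void main(String[] args) {
        // The API call is the same regardless of which backend ultimately receives the telemetry.
        Tracer tracer = GlobalOpenTelemetry.getTracer("example-service"); // placeholder name

        Span span = tracer.spanBuilder("process-record").startSpan();     // placeholder name
        try (Scope ignored = span.makeCurrent()) {
            // Do the actual work here; attributes add queryable context to the trace.
            span.setAttribute("records.count", 42);
        } finally {
            span.end();
        }
    }
}

Because the instrumentation is decoupled from any vendor, the same spans can be routed to an internal cluster, a data warehouse pipeline, or a third-party observability tool without touching the application code.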
6/8/202132 minutes, 52 seconds
Episode Artwork

Running Apache Kafka Efficiently on the Cloud ft. Adithya Chandra

Focused on optimizing Apache Kafka® performance with maximized efficiency, Confluent’s Product Infrastructure team has been actively exploring opportunities for scaling out Kafka clusters. They are able to run Kafka workloads with half the typical memory usage while saving infrastructure costs, an approach that they have tested and safely rolled out across Confluent Cloud. After spending seven years at Amazon Web Services (AWS) working on search services and Amazon Aurora as a software engineer, Adithya Chandra decided to apply his expertise in cluster management, load balancing, elasticity, and performance of search and storage clusters to the Confluent team.Last year, Confluent shipped Tiered Storage, which moves eligible data from a Kafka broker to remote storage. As most of the data moves to remote storage, we can upgrade to better storage volumes backed by solid-state drives (SSDs). SSDs are capable of higher throughput than hard disk drives (HDDs) and of fast, random IO, yet they are more expensive per provisioned gigabyte. Given that SSDs are good at random IO and can support higher throughput, Confluent started investigating whether it was possible to run Kafka with less RAM, which is much more expensive per gigabyte than SSD. Cloud instance types with the same CPU but half the memory were 20% cheaper.In this episode, Adithya covers how to run Kafka more efficiently on Confluent Cloud and dives into the following:Memory allocation on an instance running KafkaWhat is a JVM heap? Why should it be sized? How much is enough? What are the downsides of a small heap?Memory usage of Datadog, Kubernetes, and other processes, and allocating memory correctlyWhat is the ideal page cache size? What is a page cache used for? Are there any parameters that can be tuned? How does Kafka use the page cache?Testing via the simulation of a variety of workloads using TrogdorHigh-throughput, high-connection, and high-partition tests and their resultsAvailable cloud hardware and finding the best fit, including choosing the number of instance types, migrating from one instance to another, and using nodepools to migrate brokers safely, one by oneWhat do you do when your preferred hardware is not available? Can you run hybrid Kafka clusters if the preferred instance is not widely available?Building infrastructure that allows you to perform testing easily and that can support newer hardware faster (ARM processors, SSDs, etc.)EPISODE LINKSWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
5/25/202138 minutes, 35 seconds
Episode Artwork

Engaging Database Partials with Apache Kafka for Distributed System Consistency ft. Pat Helland

When compiling database reports using a variety of data from different systems, obtaining the right data when you need it in real time can be difficult. With cloud connectivity and distributed data pipelines, Pat Helland (Principal Architect, Salesforce) explains how to make educated partial answers when you need to use the Apache Kafka® platform. After all, you can’t get guarantees across a distance, making it critical to consider partial results.Despite best efforts, managing systems from a distance can result in lag time. The secret, according to Helland, is to anticipate these situations and have a plan for when (not if) they happen. Your outputs may be incomplete from time to time, but that doesn’t mean that there isn’t valuable information and data to be shared. Although you cannot guarantee that stream data will be available when you need it, you can gather replicas within a batch to obtain a consistent result, also known as convergence. Distributed systems of all sizes and across large distances rely on reference architecture for database reporting. Plan and anticipate that there will be incomplete inputs at times. Regardless of the types of data that you’re using within a distributed database, there are many inferences that can be made from repetitive monitoring over time. There would be no reason to throw out data from 19 machines when you’re only waiting on one while approaching a deadline. You can make the sources that you have work by making the most out of what is available in the presence of a partition for the overall distributed database.Confluent Cloud and convergence capabilities have allowed Salesforce to make decisions very quickly even when only partial data is available using replicated systems across multiple databases. This analytical approach is vital for consistency for large enterprises, especially those that depend on multi-cloud functionality. EPISODE LINKSWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
5/20/202142 minutes, 9 seconds
Episode Artwork

The Truth About ZooKeeper Removal and the KIP-500 Release in Apache Kafka ft. Jason Gustafson and Colin McCabe

Jason Gustafson and Colin McCabe, Apache Kafka® developers, discuss the project to remove ZooKeeper—now known as the KRaft (Kafka on Raft) project. A previous episode of Streaming Audio featured both developers on the podcast before the release of Apache Kafka 2.8. Now they’re back to share their progress.The KRaft code has been merged (and continues to be merged) in phases. Both developers talk about the foundational Kafka Improvement Proposals (KIPs), such as KIP-595: a Raft protocol for Kafka, and KIP-631: the quorum-based Kafka controller. The idea going into this new release was to give users a chance to try out no-ZooKeeper mode for themselves. There are a lot of exciting milestones on the way for KRaft. The next release will feature Raft snapshot support, as well as support for running with security authorizers enabled. EPISODE LINKSKIP-500: Apache Kafka Without ZooKeeper ft. Colin McCabe and Jason GustafsonWhat’s New in Apache Kafka 2.8Watch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
5/13/202131 minutes, 50 seconds
Episode Artwork

Resilient Edge Infrastructure for IoT Using Apache Kafka ft. Kai Waehner

What is the internet of things (IoT), and how does it relate to event streaming and Apache Kafka®? The deployment of Kafka outside the datacenter creates many new possibilities for processing data in motion and building new business cases.In this episode, Kai Waehner, field CTO and global technology advisor at Confluent, discusses the intersection of edge data infrastructure, IoT, and cloud services for Kafka. He also details how businesses get into the sticky situation of not accounting for solutions when data is running dangerously close to the edge. Air-gapped environments and strong security requirements are the norm in many edge deployments.Defining the edge for your industry depends on what sector you’re in plus the amount of data and interaction involved with your customers. The edge could lie on various points of the spectrum and carry various meanings to various people. Before you can deploy Kafka to the edge, you must first define where that edge is as it relates to your connectivity needs. Edge resiliency enables your enterprise to not only control your datacenter with ease but also preserve the data without privacy risks or data leaks. If a business does not have the personnel to handle these big IT jobs on their own or an organization simply does not have an IT department at all, this is where Kafka solutions can come in to fill the gap. This podcast explores use cases and architectures at the edge (i.e., outside the datacenter) across industries, including manufacturing, energy, retail, restaurants, and banks. The trade-offs of edge deployments are compared to a hybrid integration with Confluent Cloud. EPISODE LINKSProcessing IoT Data End to End with MQTT & KafkaEnd-to-End Integration: IoT Edge to ConfluentInfrastructure Checklist for Kafka at the EdgeUse Cases & Architectures for Kafka at the EdgeArchitecture Patterns for Distributed, Hybrid, Edge & Global Kafka DeploymentsBuilding a Smart Factory with Kafka & 5G Campus NetworksKafka Is the New Black at the Edge in Industrial IoT, Logistics & Retailing Kafka, KSQL & Apache PLC4X for IIoT Data Integration & Processing Streaming Machine Learning at Scale from 100K IoT Devices with HiveMQ, Kafka & TensorFlow Watch this podcast on YouTubeJoin the Confluent CommunityUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
5/4/202127 minutes, 19 seconds
Episode Artwork

Data Management and Digital Transformation with Apache Kafka at Van Oord

Imagine if you could create a better world for future generations simply by delivering marine ingenuity. Van Oord is a Dutch family-owned company that has served as an international marine contractor for over 150 years, focusing on dredging, land infrastructure in the Netherlands, and offshore wind and oil & gas infrastructure. Real-time insights into costs, the progress of projects, and the performance of vessels and equipment are essential for surviving as a business. Becoming a data-driven company requires that all data be connected, synchronized, and visualized—in fact, truly digitized. This requires a central nervous system that supports: legacy (monolith environment) as well as microservices; ELT/ETL/streaming ETL; all types of data, including transactional, streaming, geo, machine, and (sea) survey/bathymetry; and master data/an enterprise common data model. The need for agility and speed makes it necessary to have a fully integrated DevOps-infrastructure-as-code environment, where data lineage, data governance, and enterprise architecture are holistically embedded. Thousands of topics need to be developed, updated, tested, accepted, and deployed each day. This, together with different scripts for connectors, requires a holistic data management solution in which data lineage, data governance, and enterprise architecture are an integrated part. Thus, Marlon Hiralal (Enterprise/Data Management Architect, Van Oord) and Andreas Wombacher (Data Engineer, Van Oord) turned to Confluent for a three-month proof of concept and explored the pre-prep stage of using Apache Kafka® on Van Oord’s vessels. Since the environment in Van Oord is dynamic with regards to the application landscape and offered services, it is essential that a stable environment with controlled continuous integration and deployment is applied. Beyond the software components themselves, this also applies to configurations and infrastructure, as well as applying the concept of CI/CD with infrastructure as code. The result: using Terraform and Confluent together. Publishing information is treated as a product at Van Oord. An information product is a set of Kafka topics: topics to communicate change (via change data capture) and topics for sharing the state of a data source (Kafka tables). The set of all information products forms the enterprise data model. Apache Atlas is used as a data dictionary and governance tool to capture the meaning of different information products. All changes in the data dictionary are available as an information product in Confluent, allowing for consumers of information products to subscribe to the information and be notified about changes. Van Oord’s enterprise architecture model must remain up to date and aligned with the current implementation. This is achieved by automatically inspecting and analyzing Confluent data flows. Fortunately, Confluent embeds homogeneously in this holistic reference architecture. The basis of the holistic reference architecture is a change data capture (CDC) layer and a persistent layer, which makes Confluent the core component of the Van Oord future-proof digital data management solution. EPISODE LINKSWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
4/29/202128 minutes, 28 seconds
Episode Artwork

Powering Microservices Using Apache Kafka on Node.js with KafkaJS at Klarna ft. Tommy Brunn

At Klarna, Lead Engineer Tommy Brunn is building a runtime platform for developers. But outside of his professional role, he is also one of the authors of the JavaScript client for Apache Kafka® called KafkaJS, which has grown from being a niche open source project to the most downloaded Kafka client for Node.js since 2018. Using Kafka in Node.js has previously meant relying on community-contributed bindings to librdkafka, which required you to spend more of your time debugging failed builds than working on your application. With the original authors moving away from supporting the bindings, and the community only partially picking up the slack, using Kafka on Node.js was a painful proposition. Kafka is a core part of Klarna’s microservice architecture, with hundreds of services using it to communicate among themselves. In 2017, as their engineering team was building the ecosystem of Node.js services powering the Klarna app, it was clear that the experience of working with any of the available Kafka clients was not good enough, so they decided to do what they had already done with their Erlang client, Brod, and build their own. Rather than wrapping librdkafka, their client is a complete reimplementation in native JavaScript, allowing for a far superior user experience at the cost of being a lot more work to implement. Towards the end of 2017, KafkaJS 0.1.0 was released. Tommy has also used KafkaJS to build several Kafka-powered services at Klarna, as well as worked on supporting libraries such as integrations with Confluent Schema Registry and Zstandard compression. Since KafkaJS is written entirely in JavaScript, there is no build step required. It will work 100% of the time in any version of Node.js and evolve together with the platform with no effort required from the end user. It also unlocks some creative use cases. For example, Klarna once did an experiment where they got it to run in a browser. KafkaJS will also run on any platform that’s supported by Node.js, such as ARM. Klarna’s “no dependencies” policy also means that the deployment footprint is small, which makes it a perfect fit for serverless environments.EPISODE LINKSNode.js ❤️ Apache Kafka – Getting Started with KafkaJS Watch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
4/22/202131 minutes, 3 seconds
Episode Artwork

Apache Kafka 2.8 - ZooKeeper Removal Update (KIP-500) and Overview of Latest Features

Apache Kafka 2.8 is out! This release includes early access to the long-anticipated ZooKeeper removal encapsulated in KIP-500, as well as other key updates, including the addition of a Describe Cluster API, support for mutual TLS authentication on SASL_SSL listeners, exposed task configurations in the Kafka Connect REST API, the removal of a properties argument for the TopologyTestDriver, the introduction of a Kafka Streams specific uncaught exception handler, improved handling of window size in Streams, and more.EPISODE LINKSRead about what’s new in Apache Kafka 2.8Check out the Apache Kafka 2.8 release notesWatch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
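For readers who want to see what the new Streams-specific uncaught exception handler looks like in practice, here is a minimal sketch in Java. The application ID, bootstrap address, and topic names are placeholders rather than anything from the episode; only the handler API itself (KIP-671) comes from the release.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

public class HandlerExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "handler-example");        // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("input-topic").to("output-topic");                         // trivial placeholder topology

    KafkaStreams streams = new KafkaStreams(builder.build(), props);

    // Streams-specific handler: decide per exception whether to replace the failed
    // stream thread, shut down this client, or shut down the whole application.
    streams.setUncaughtExceptionHandler(exception -> {
      System.err.println("Stream thread died: " + exception);
      return StreamThreadExceptionResponse.REPLACE_THREAD;
    });

    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}
```

Returning REPLACE_THREAD keeps the application running by starting a new stream thread; SHUTDOWN_CLIENT and SHUTDOWN_APPLICATION are the other supported responses.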
4/19/202110 minutes, 48 seconds
Episode Artwork

Connecting Azure Cosmos DB with Apache Kafka - Better Together ft. Ryan CrawCour

When building solutions for customers in Microsoft Azure, it is not uncommon to come across customers who are deeply entrenched in the Apache Kafka® ecosystem and want to continue expanding within it. Thus, figuring out how to connect Azure first-party services to this ecosystem is of the utmost importance.Ryan CrawCour is a Microsoft engineer who has been working on all things data and analytics for the past 10+ years, including building out services like Azure Cosmos DB, which is used by millions of people around the globe. More recently, Ryan has taken a customer-facing role where he gets to help customers build the best solutions possible using Microsoft Azure’s cloud platform and development tools. In one case, Ryan helped a customer leverage their existing Kafka investments and persist event messages in a durable managed database system in Azure. They chose Azure Cosmos DB, a fully managed, distributed, modern NoSQL database service as their preferred database, but the question remained as to how they would feed events from their Kafka infrastructure into Azure Cosmos DB, as well as how they could get changes from their database system back into their Kafka topics. Although integration is in his blood, Ryan confesses that he is relatively new to the world of Kafka and has learned to adjust to what he finds in his customers’ environments. Oftentimes this is Kafka, and for many good reasons, customers don’t want to change this core part of their solution infrastructure. This has led him to embrace Kafka and the ecosystem around it, enabling him to better serve customers. He’s been closely tracking the development and progress of Kafka Connect. To him, it is the natural step from Kafka as a messaging infrastructure to Kafka as a key pillar in an integration scenario. Kafka Connect can be thought of as a piece of middleware that can be used to connect a variety of systems to Kafka in a bidirectional manner. This means getting data from Kafka into your downstream systems, often databases, and also taking changes that occur in these systems and publishing them back to Kafka where other systems can then react. One day, a customer asked him how to connect Azure Cosmos DB to Kafka. There wasn’t a connector at the time, so he helped build two with the Confluent team: a sink connector, where data flows from Kafka topics into Azure Cosmos DB, as well as a source connector, where Azure Cosmos DB is the source of data pushing changes that occur in the database into Kafka topics.EPISODE LINKSIntegrating Azure and Confluent: Ingesting Data to Azure Cosmos DB through Apache Kafka Download the Azure Cosmos DB Connector (Source and Sink) Join the Confluent CommunityGitHub: Kafka Connect for Azure Cosmos DBWatch the video version of this podcastLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
4/14/202131 minutes, 59 seconds
Episode Artwork

Automated Cluster Operations in the Cloud ft. Rashmi Prabhu

If you’ve heard the term “clusters,” then you might know it refers to Confluent components and features that we run in all three major cloud providers today, including an event streaming platform based on Apache Kafka®, ksqlDB, Kafka Connect, the Kafka API, data balancers, and Kafka API services. Rashmi Prabhu, a software engineer on the Control Plane team at Confluent, has the opportunity to help govern the data plane that comprises all these clusters and enables API-driven operations on these clusters. But running operations on the cloud in a scaling organization can be time-consuming, error-prone, and tedious. This episode addresses manual upgrades and rolling restarts of Confluent Cloud clusters during releases, fixes, experiments, and the like, and more importantly, the progress that’s been made to switch from manual operations to an almost fully automated process. You’ll get a sneak peek into upcoming plans to make cluster operations a fully automated process using the Cluster Upgrader, a new Java microservice built with Vert.x. This service runs as part of the control plane and exposes an API to the user to submit their workflows and target a set of clusters. It performs state management on the workflow in the backend using Postgres. So what’s next? Looking forward, the selection phase will be improved to support policy-based deployment strategies that enable you to plan ahead and choose how you want to phase your deployments (e.g., first Azure followed by part of Amazon Web Services and then Google Cloud, or maybe Confluent internal clusters on all cloud providers followed by customer clusters on Google Cloud, Azure, and finally AWS)—the possibilities are endless! The process will become more flexible, more configurable, and more error tolerant so that you can take measured risks and experience a standardized way of operating Confluent Cloud. In addition, expanding operation automation to internal application deployments and other kinds of fleet management operations that fit the “Select/Apply/Monitor” paradigm is in the works.EPISODE LINKSWatch Project Metamorphosis videos Learn about elastic scaling with Apache KafkaNick Carr: The Many Ways Cloud Computing Will Disrupt IT Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
4/12/202124 minutes, 41 seconds
Episode Artwork

Resurrecting In-Sync Replicas with Automatic Observer Promotion ft. Anna McDonald

As most developers and architects know, data always needs to be accessible no matter what happens outside of the system. This week, Tim Berglund virtually sits down with Anna McDonald (Principal Customer Success Technical Architect, Confluent) to discuss how Automatic Observer Promotion (AOP) can help solve the Apache Kafka® 2.5 datacenter dilemma as a feature now available in Confluent Platform 6.1 and above. Many industries must have a backup plan not only to do the right thing by the data that they collect but because they are regulated by law to do so. Anna has a knack for preparing operations that make replication of data possible both synchronously and asynchronously. To avoid roadblocks in stretch clusters, she’s found that you need to set both a replication factor and a minimum in-sync replica (min ISR) count. You need to consider keeping not just one but multiple copies of your data to meet your data protection criteria. Not having the correct number of replicas in a datacenter can mean that your application goes down, with no way to retrieve vital information during the outage. The presence of observers enables asynchronous replicas that don’t count towards that minimum ISR. This works because observers can help recover data without invalidating any of the guarantees you’ve configured. Architects should try to maintain topic availability during an event in a two-zone configuration. This ensures that writes go to both zones during normal operation without compromise. With the newest version of Confluent, you can get data in sync and within the minimum ISR. AOP is an excellent solution for developers who want to prepare for the unexpected and maintain accessibility across zones. When you can avoid manual intervention, you’re more likely to avoid errors and tedious operations, which would otherwise lead to a higher probability of data loss. In other exciting news, Anna shares how she’s discovering patterns in order to make the entire Confluent ecosystem more automated. EPISODE LINKSAutomatic Observer Promotion Brings Fast and Safe Multi-Datacenter Failover with Confluent Platform 6.1Amusing Ourselves to DeathJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
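The replication-factor-plus-minimum-ISR pairing Anna describes is set per topic. Below is a minimal, hypothetical sketch using the Java AdminClient; the topic name, partition count, and bootstrap address are illustrative, and observer placement itself is a separate Confluent Platform cluster-side configuration not shown here.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class MinIsrTopic {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder

    try (Admin admin = Admin.create(props)) {
      // 3 replicas, and at least 2 of them must be in sync for a fully acknowledged write.
      NewTopic topic = new NewTopic("payments", 6, (short) 3)                 // hypothetical topic
          .configs(Map.of("min.insync.replicas", "2"));
      admin.createTopics(List.of(topic)).all().get();
    }
  }
}
```

Combined with producers configured with acks=all, that min.insync.replicas setting is what actually enforces that a write has landed in enough replicas before it is acknowledged.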
4/7/202124 minutes, 33 seconds
Episode Artwork

Building Real-Time Data Pipelines with Microsoft Azure, Databricks, and Confluent

Processing data in real time is a process, as some might say. Angela Chu (Solution Architect, Databricks) and Caio Moreno (Senior Cloud Solution Architect, Microsoft) explain how to integrate Azure, Databricks, and Confluent to build real-time data pipelines that enable you to ingest data, perform analytics, and extract insights from data at hand. They share about where to start within the Apache Kafka® ecosystem and how to maximize the tools and components that it offers using fully managed services like Confluent Cloud for data in motion.EPISODE LINKSConsuming Avro Data from Apache Kafka Topics and Schema Registry with Databricks and Confluent Cloud on Azure Azure Data Lake Storage Gen2 introductionBest practices for using Azure Data Lake Storage Gen2Join the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
3/31/202130 minutes, 32 seconds
Episode Artwork

Smooth Scaling and Uninterrupted Processing with Apache Kafka ft. Sophie Blee-Goldman

Availability in Kafka Streams is hard, especially in the face of any changes. Any change to topic metadata or group membership triggers a rebalance. But Kafka Streams struggles even after this stop-the-world rebalance has finished. According to Apache Kafka® Committer and Confluent Software Engineer Sophie Blee-Goldman, this is because a Streams app will generally have some state associated with a given partition, and to move this state from one consumer instance to another requires rebuilding this state from a special backing topic called a changelog, the source of truth for a partition’s state. Restoring the changelog can take hours, and until the state is ready, Streams can’t do any further processing on that partition. Furthermore, it can’t serve any requests for local state until the local state is “caught up” with the changelog. So scaling out your Streams application results in pretty significant downtime—which is a bummer, especially if the reason for scaling out in the first place was to handle a particularly heavy workload.To solve the stop-the-world rebalance, we have to find a way to safely assign partitions so we can be confident that they’ve been revoked from their previous owner before being given to a new consumer. To solve the scaling out problem in Kafka Streams, we go a step further. When you add a new instance to your Streams application, we won’t immediately assign any stateful partitions to it. Instead, we’ll leave them assigned to their current owner to continue processing and serving queries as usual. During this time, the new instance will start to “warm up” the local state in the background; it starts consuming from the changelog and building up the local state. We then follow a similar pattern as in cooperative rebalancing, and issue a follow-up rebalance. In KIP-441, we call these probing rebalances. Every so often (i.e., 10 minutes by default), we trigger a rebalance. In the member’s subscription metadata that it sends to the group leader, each member encodes the current status of its local state. We use the changelog lag as a measure of how “caught up” a partition is. During a rebalance, only instances that are completely caught up are allowed to own stateful tasks; everything else must first warm up the state. So long as there is some task still warming up on a node, we will “probe” with rebalances until it’s ready.EPISODE LINKSFrom Eager to Smarter in Apache Kafka Consumer RebalancesKIP-429: Kafka Consumer Incremental Rebalance Protocol KIP-441: Smooth Scaling Out for Kafka StreamsJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
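The warm-up and probing-rebalance behavior described above is controlled by a handful of Kafka Streams configs. A small sketch follows; the values shown are the documented defaults, and the application ID and bootstrap address are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class WarmupConfig {
  public static Properties streamsProps() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "scaling-demo");          // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder

    // How far behind the changelog a task may be and still count as "caught up."
    props.put(StreamsConfig.ACCEPTABLE_RECOVERY_LAG_CONFIG, 10_000L);
    // How many extra tasks may be warming up state on new instances at the same time.
    props.put(StreamsConfig.MAX_WARMUP_REPLICAS_CONFIG, 2);
    // How often the group leader triggers a probing rebalance to check warm-up progress
    // (the "every 10 minutes by default" mentioned above).
    props.put(StreamsConfig.PROBING_REBALANCE_INTERVAL_MS_CONFIG, 10 * 60 * 1000L);
    return props;
  }
}
```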
3/24/202150 minutes, 33 seconds
Episode Artwork

Event-Driven Architecture - Common Mistakes and Valuable Lessons ft. Simon Aubury

Event-driven architecture has taken on numerous meanings over the years—from event notification to event-carried state transfer, to event sourcing, and CQRS. Why has event-driven programming become so popular, and why is it such a topic of interest? For the first time, Simon Aubury (Principal Data Engineer, ThoughtWorks) joins Tim Berglund on the Streaming Audio podcast to tell all, including his own experiences adopting event-driven technologies and common blunders when working in this area.Simon admits that he’s made some mistakes and learned some valuable lessons that can benefit others. Among these are accidentally building a message bus, the idea that messages are not events, realizing that getting too fixated on the size of a microservice is the wrong problem, the importance of understanding events and boundaries, defining choreography vs. orchestration, and dealing with passive-aggressive events.This brings Simon to where he is today, as he advocates for Apache Kafka® as a foundation for building a scalable, event-driven architecture and data-intensive applications. EPISODE LINKSShould You Put Several Event Types in the Same Kafka Topic? Meetup Recording: Event-Driven Architecture Mistakes – I’ve Made a FewJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
3/17/202142 minutes, 32 seconds
Episode Artwork

The Human Side of Apache Kafka and Microservices ft. SPOUD

Many industries depend on real-time data, requiring a range of solutions that Apache Kafka® can help solve. Samuel Benz (CTO) and Patrick Bönzli (Product Owner) explain how their company, SPOUD, has fully embraced Kafka for data delivery, which has proven to be successful for SPOUD since 2016 across various industries and use cases. The four Kafka use cases that Sam and Patrick see most often are microservices, event processing, event sourcing/the data lake, and integration architecture. But implementing streaming software for each of these areas is not without its challenges. It’s easy to become frustrated by trivial problems that arise when integrating Kafka into the enterprise, because it’s not just about technology but also people and how they react to a new technology that they are not yet familiar with. Should enterprises be scared of Kafka? Why can it be hard to adopt Kafka? How do you drive Kafka adoption internally? All good questions.When adopting Kafka into a new data service, there will be challenges from a data sharing perspective, but with the right architecture, the possibilities are endless. Kafka enables collaboration on previously siloed data in a controlled and layered way. Sam and Patrick’s goal today is to educate others on Kafka and show what success looks like from a data-driven point of view. It’s not always easy, but in the end, event streaming is more than worth it. EPISODE LINKSRead blog posts from SPOUDImprove the Quality of Breaks with KafkaApache Kafka PyramidReady, Steady, Connect. Help Your Organization to Appreciate KafkaAGOORA by SPOUDJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
3/8/202145 minutes, 11 seconds
Episode Artwork

Gamified Fitness at Synthesis Software Technologies Using Apache Kafka and IoT

Synthesis Software Technologies, a Confluent partner, is migrating an existing behavioral IoT framework into Kafka to streamline and normalize vendor information. The legacy messaging technology that they currently use has altered the behavioral IoT data space, and now Apache Kafka® will allow them to take that to the next level. New ways of normalizing the data will allow for increased efficiency for vendors, users, and manufacturers. It will also enable scaling the IoT technology going forward. Nick Walker (Principal of Streaming) and Yoni Lew (DevOps Developer) of Synthesis discuss how they utilize Confluent Platform in a personal behavior data pipeline provided by Vitality Group. Vitality Group promotes a shared-value insurance model, which sources behavioral change information and transforms it into personal incentives and rewards for members associated with their global partners. Yoni shares about the motivators of moving their data from an existing product over to Kafka. The decision was made for two reasons: taking different forms and features of existing data from vendors and streamlining it, and addressing how quickly users of the system want the processed data from the system. Kafka is the best choice for Synthesis because it can stream messages through various topics and workflows while storing them appropriately. It is especially important for Synthesis to be able to replay data as needed without losing its integrity. Yoni explains how Kafka gives them the opportunity to—even if something goes wrong downstream and someone doesn’t process something correctly—process the data on their own timeline and at their rate, because they have the data. The implementation of Kafka into Synthesis’ current workflow has allowed them to create new functionality for assisting various groups that use the data in different ways. This has furthermore opened up new options for the company to build up its framework using Kafka features that lead to creative reactive applications. With Kafka, Synthesis sees endless opportunities to integrate the data that they collect into usable, historical pipelines for long-term models. EPISODE LINKSJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
3/3/202133 minutes, 32 seconds
Episode Artwork

Becoming Data Driven with Apache Kafka and Stream Processing ft. Daniel Jagielski

When it comes to adopting event-driven architectures, a couple of key considerations often arise: the way that an asynchronous core interacts with external synchronous systems and the question of “how do I refactor my monolith into services?” Daniel Jagielski, a consultant working as a tech lead/dev manager at VirtusLab for Tesco, recounts how these very themes emerged in his work with European clients. Through observing organizations as they pivot toward becoming real time and event driven, Daniel identifies the benefits of using Apache Kafka® and stream processing for auditing, integration, pub/sub, and event streaming. He describes the differences between a provisioned cluster and a managed cluster and the importance of this within the Kafka ecosystem. Daniel also dives into the risk detection platform used by Tesco, which he helped build as a VirtusLab consultant and that marries the asynchronous and synchronous worlds. As Tesco migrated from a legacy platform to event streaming, determining risk and anomaly detection patterns has become more important than ever. They need the flexibility to adjust due to changing usage patterns with COVID-19. In this episode, Daniel talks integrations with third parties, push-based actions, and materialized views/projections for APIs. Daniel is a tech lead/dev manager, but he’s also an individual contributor for the Apollo project (an ICE organization) focused on online music usage processing. This means working with data in motion: breaking the monolith (starting with a proof of concept), migrating ETL to stream processing, and ingesting via multiple processes that run in parallel with record-level processing.EPISODE LINKSBuilding an Apache Kafka Center of Excellence Within Your Organization ft. Neil Buesing Risk Management in Retail with Stream ProcessingEvent Sourcing, Stream Processing and ServerlessIt’s Time for Streaming to Have a Maturity Model ft. Nick DeardenRead Daniel Jagielski's articles on the Confluent blogJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
2/22/202148 minutes, 10 seconds
Episode Artwork

Integrating Spring Boot with Apache Kafka ft. Viktor Gamov

Viktor Gamov (Developer Advocate, Confluent) joins Tim Berglund on this episode to talk all about Spring and Apache Kafka®. Viktor’s main focus lately has been helping developers build their apps with stream processing, and helping them do this effectively with different languages. Viktor recently hosted an online Spring Boot workshop that turned out to be a lot of fun. This means it was time to get him back on the show to talk about this all-important framework and how it integrates with Kafka and Kafka Streams. Spring Boot enables you to do more with less. Its features offer numerous benefits, making it easy to create standalone, production-grade Spring-based applications that you can just run. Such patterns have also existed inside the Spring framework for a long time: the Spring Integration framework implements many enterprise integration patterns and also has a pre-built Kafka connector. Spring Boot was highly inspired by the 12-factor app manifesto, which encourages you to write portable apps and externalize the configuration, providing different profiles for you to customize your deployment. This is a critical part of the Kafka client infrastructure. Even though it’s a Java client, Confluent Cloud offers a native Spring for Apache Kafka configuration snippet: when you connect your application, you can copy the snippet and place it directly into your Spring application, and it works with Confluent Cloud. Now, he’s working on bringing Spring Cloud Stream-like YAML-based configuration into Confluent Cloud too so folks can easily copy and paste to work out of the box. To close, Viktor shares about an interesting new project that the Confluent Developer Relations team is working on. Stick around to hear all about it and learn how Spring and Kafka work together.EPISODE LINKSSpring example in streaming-opsAvro, Protobuf, Spring Boot, Kafka Streams and Confluent Cloud | LivestreamsEvent-Driven Microservices with Spring Boot and Confluent Cloud | Livestreams Choosing Christmas Movies with Kubernetes, Spring Boot, and Apache Kafka | Livestreams 015Spring Cloud Stream and Confluent Cloud | Livestreams 018Joining Forces with Spring Boot, Apache Kafka, and Kotlin ft. Josh LongMastering DevOps with Apache Kafka, Kubernetes, and Confluent Cloud ft. Rick Spurgeon and Allison WaltherFollow Viktor Gamov on TwitterJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
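As a rough illustration of how little code Spring Boot needs to talk to Kafka, here is a hedged sketch assuming spring-boot-starter and spring-kafka are on the classpath; the topic and group names are hypothetical, and the broker connection details (including the Confluent Cloud snippet mentioned above) would live in application.properties under spring.kafka.*.

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class SpringKafkaSketch {

  public static void main(String[] args) {
    SpringApplication.run(SpringKafkaSketch.class, args);
  }

  // Spring Boot auto-configures KafkaTemplate from spring.kafka.* properties
  // (bootstrap servers, SASL credentials for Confluent Cloud, serializers, ...).
  @Bean
  CommandLineRunner producer(KafkaTemplate<String, String> template) {
    return args -> template.send("greetings", "hello, kafka");  // hypothetical topic
  }
}

@Component
class GreetingListener {
  // The listener container is also auto-configured; no @EnableKafka needed with Spring Boot.
  @KafkaListener(topics = "greetings", groupId = "greetings-app")
  void listen(String message) {
    System.out.println("Consumed: " + message);
  }
}
```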
2/17/202145 minutes, 8 seconds
Episode Artwork

Confluent Platform 6.1 | What’s New in This Release + Updates

Confluent Platform 6.1 further simplifies management tasks for Apache Kafka® operators. Based on Apache Kafka 2.7, this release provides even higher availability for enterprises who are using Kafka as the central backbone for their business-critical applications. Confluent Platform 6.1 delivers enhancements that reduce the risk of downtime, simplify operations and streamline the user experience, as well as improve visibility and control with centralized management.EPISODE LINKSCheck out the release notesRead the blog post: Introducing Confluent Platform 6.1Download Confluent Platform 6.1Watch the video version of this podcastJoin the Confluent CommunityLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
2/10/20219 minutes, 37 seconds
Episode Artwork

Building a Microservices Architecture with Apache Kafka at Nationwide Building Society ft. Rob Jackson

Nationwide Building Society, a financial institution in the United Kingdom with 137 years of history and over 18,000 employees, relies on Apache Kafka® for their event streaming needs. But how did this come to be? In this episode, Tim Berglund talks with Rob Jackson (Principal Architect, Nationwide) about their Kafka adoption journey as they celebrate two years in production. Nationwide chose to adopt Kafka as a central part of their information architecture in order to integrate microservices. You can’t have them share a database (that’s design-time coupling), and having them call each other synchronously introduces a little too much runtime coupling, leading to the rise of event-driven, reactive microservices as a stable and extensible architecture for the next generation. Nationwide also chose to use Kafka for the following reasons: to move their mortgage sales systems from a traditional orchestration style to event-driven designs and choreography-based solutions using microservices in Kafka, and as a cost-effective way to scale their mainframe systems with change data capture (CDC). Rob explains to Tim that now with the adoption of Kafka across other use cases at Nationwide, he no longer needs to ask his team to query their APIs. Kafka has also enabled more choreography-based use cases and the ability to design new applications to create events (pushed into a common/enterprise event hub). Kafka has helped Nationwide eliminate any bottlenecks in the process and speed up production. Furthermore, Rob delves into why his team migrated from orchestration to choreography, explaining their differences in depth. When you start building your applications in a choreography-based way, you will find as a byproduct that interesting events are going into Kafka that you didn’t foresee leveraging but that may be useful for the analytics community. In this way, you can truly get the most out of your data. EPISODE LINKSCase Study: Event Streaming & Real-Time Data in BankingIntroducing Events and Stream Processing into Nationwide Building Society (Kafka Summit talk)Learn more about NationwideJoin the Confluent CommunityCheck out Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
2/8/202148 minutes, 54 seconds
Episode Artwork

Examining Apache Kafka Performance Metrics ft. Alok Nikhil

Coming up with an honest test built on open source tools in an easily documented, replicable environment for a distributed system like Apache Kafka® is not simple. Alok Nikhil (Cloud Native Engineer, Confluent) shares about getting Kafka in the cloud and how best to leverage Confluent Cloud for high performance and scalability. His blog post “Benchmarking Apache Kafka, Apache Pulsar, and RabbitMQ: Which is the Fastest?” discusses how Confluent tested Kafka’s performance on the latest cloud hardware using research-based methods to answer this question. Alok and Tim talk through the vendor-neutral framework OpenMessaging Benchmark used for the tests, which is Pulsar’s standardized benchmarking framework for event streaming workloads. Alok and his co-author Vinoth Chandar helped improve that framework, evaluated messaging systems in the event streaming space like RabbitMQ, and talked about improvements to those existing platforms. Later in this episode, Alok shares what he believes would help move Kafka forward and what he predicts to come soon, like KIP-500, the removal of ZooKeeper dependency in Kafka. EPISODE LINKSBenchmarking Apache Kafka, Apache Pulsar, and RabbitMQ: Which is the Fastest?Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
2/1/202150 minutes, 30 seconds
Episode Artwork

Distributed Systems Engineering with Apache Kafka ft. Guozhang Wang

Tim Berglund picks the brain of a distributed systems engineer, Guozhang Wang, tech lead in the Streaming department of Confluent. Guozhang explains what compelled him to join the Stream Processing team at Confluent coming from the Apache Kafka® core infrastructure. He reveals what makes the best distributed systems infrastructure engineers tick and how to prepare to take on this kind of role—solving failure scenarios, a satisfying challenge. One challenge in distributed systems is achieving agreement among multiple nodes connected in a Kafka cluster, where the connections are in practice asynchronous. Guozhang also shares the newest updates in the Kafka community, including the coming ZooKeeper-free architecture where metadata will be maintained by Kafka logs. Prior to joining Confluent, Guozhang worked for LinkedIn, where he used Kafka for a few years before he started asking himself, “How fast can I get value from the data that I’ve collected?” This question eventually led him to begin building Kafka Streams and ksqlDB. Ever since, he’s been working to advance stream processing, and in this episode, provides an exciting preview of what’s to come. EPISODE LINKSJoin the Confluent teamDiving into Exactly-Once Semantics with Guozhang WangIn Search of an Understandable Consensus AlgorithmThe Curious Incident of the State Store in Recovery in ksqlDBFrom Eager to Smarter in Apache Kafka Consumer RebalancesKIP-595: A Raft Protocol for the Metadata QuorumJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
1/25/202144 minutes, 52 seconds
Episode Artwork

Scaling Developer Productivity with Apache Kafka ft. Mohinish Shaikh

Confluent Cloud and Confluent Platform run efficiently largely because of the dedication of the Developer Productivity (DevProd) team, formerly known as the Tools team. Mohinish Shaikh (Software Engineer, Confluent) talks to Tim Berglund about how his team builds the software tooling and automation for the entire event streaming platform and ensures seamless delivery of several engineering processes across engineering and the rest of the org. With the right tools and the right data, a developer productivity team can understand the overall effectiveness of a development team and its ability to produce results. The DevProd team helps engineering teams at Confluent ship code from commit to end customers actively using Apache Kafka®. This team proficiently understands a wide scope of polyglot applications and also the complexities of using a diverse technology stack on a regular basis to help solve business-critical problems for the engineering org. The team actively measures how the systems interact with one another and what programs are needed to properly run the code in various environments to help with the release of reliable artifacts for Confluent Cloud and Confluent Platform. An in-depth understanding of the entire framework and development workflow is essential for organizations to deliver software reliably, on time, and within their cost budget. The DevProd team provides that second line of defense and reliability before the code is released to end customers. As the need for compliance increases and the event streaming platform continues to evolve, the DevProd team is in place to make sure that all of the final touches are completed. EPISODE LINKSLeveraging Microservices and Apache Kafka to Scale Developer ProductivityJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
1/20/202134 minutes, 19 seconds
Episode Artwork

Change Data Capture and Kafka Connect on Microsoft Azure ft. Abhishek Gupta

What’s it like being a Microsoft Azure Cloud advocate working with Apache Kafka® and change data capture (CDC) solutions? Abhishek Gupta would know! At Microsoft, Abhishek focuses his time on Kafka, databases, Kubernetes, and open source projects. His experience in a wide variety of roles ranging from engineering, consulting, and product management for developer-focused products has positioned him well for developer advocacy, where he is now.Switching gears, Abhishek proceeds to break down the concept of CDC starting off with some of the core concepts such as "commit logs." Abhishek then explains how CDC can turn data around when you compare it to the traditional way of querying the database to access data—you don't call the database; it calls you. He then goes on to discuss Debezium, which is an open source change data capture solution for Kafka. He also covers Azure connectors, Azure Data Explorer, and use cases powered by the Azure Data Explorer Sink Connector for Kafka.EPISODE LINKSStreaming Data from Confluent Cloud into Azure Data ExplorerIntegrate Apache Kafka with Azure Data ExplorerChange Data Capture with Debezium ft. Gunnar MorlingTales from the Frontline of Apache Kafka DevOps ft. Jason BellMySQL CDC Source (Debezium) Connector for Confluent CloudMySQL, Cassandra, BigQuery, and Streaming Analytics with Joy GaoJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
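The “you don’t call the database; it calls you” idea boils down to consuming a change topic like any other Kafka topic. Here is a minimal sketch of a plain Java consumer tailing a Debezium-style topic; the topic name, group ID, and bootstrap address are hypothetical, and the exact change-event payload shape depends on the connector configuration.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CdcTailer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");       // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "cdc-tailer");                    // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // Hypothetical Debezium-style topic name: <server>.<database>.<table>
      consumer.subscribe(List.of("dbserver1.inventory.customers"));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
          // Each record describes one row-level change (insert/update/delete) pushed
          // by the connector; the application never queries the database directly.
          System.out.printf("key=%s change=%s%n", record.key(), record.value());
        }
      }
    }
  }
}
```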
1/11/202143 minutes, 4 seconds
Episode Artwork

Event Streaming Trends and Predictions for 2021 ft. Gwen Shapira, Ben Stopford, and Michael Noll

Coming out of a whirlwind year for the event streaming world, Tim Berglund sits down with Gwen Shapira (Engineering Leader, Confluent), Ben Stopford (Senior Director, Office of the CTO, Confluent), and Michael Noll (Principal Technologist, Office of the CTO, Confluent) to take a guess at what 2021 will bring. The experts share what they believe will happen for analytics, frameworks, multi-cloud services, stream processing, and other topics important to the event streaming space. These Apache Kafka®-related predictions include the future of Kafka cluster partitions and the removal of restrictions users have run into in the past, such as limits on the number of partitions and on concurrency. Ben also thinks that ZooKeeper, a highly reliable open source coordination service, will stick around for a while: Kafka clusters will keep growing in size at record speed on ZooKeeper, although it will no longer be required once KIP-500 removes the ZooKeeper dependency. That change allows Kafka to run without a separate ZooKeeper deployment while increasing cluster capacity. Michael expects a continued need for COVID-19 tracking as well as enhanced event streaming capabilities. Ben believes that scalable Tiered Storage for Kafka will increase productivity and benefit workloads. Gwen predicts that databases will become more conventional by the end of next year, leading to new data architecture designs with the support of Kafka.EPISODE LINKSKIP-500: Apache Kafka Without ZooKeeper ft. Colin McCabe and Jason GustafsonHow to set up podcasts on AlexaBetter to Be Wrong Than Vague: Apache Kafka and Data Architecture Predictions for 2021Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
1/6/202144 minutes, 34 seconds
Episode Artwork

How to Become a Certified Apache Kafka Expert ft. Niamh O’Byrne and Barry Ballard

It’s one thing to know how to use Apache Kafka® and another to prove to the world that you know. Niamh O’Byrne (Certification Manager, Confluent) and Barry Ballard (Senior Technical Trainer, Confluent) discuss Confluent’s Certification program, including sample test questions, bootcamp, exam details, Kafka training, and getting the necessary practical hands-on experience. It’s no secret that the entire world of work has changed, and now we expect to communicate across a vast number of digital platforms. In this new age, Barry predicts three primary skills that will become more important than ever to employers as they seek to hire a candidate: emotional intelligence, building your personal brand, and digital security knowledge. With emotional intelligence, we're really talking about effective communication and soft skills. This means understanding how to achieve consensus on utilizing digital technology, specifically Apache Kafka, which we test for in the Certification exam. This will help you stand out all around—on paper, in interviews, and in knowledge too. Especially as more and more businesses rely on Kafka, and as cybercriminals take their savviness to a new level, strong security expertise will truly set you apart.EPISODE LINKSConfluent Certification Get in touch about the Certification at [email protected] Event Modeling to Architect Event-Driven Information Systems ft. Bobby CalderwoodLearn Apache Kafka to build and scale modern applicationsProject MetamorphosisJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse the code DEV21CERT for 20% off certificationUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
12/28/202043 minutes, 36 seconds
Episode Artwork

Mastering DevOps with Apache Kafka, Kubernetes, and Confluent Cloud ft. Rick Spurgeon and Allison Walther

How do you use Apache Kafka®, Confluent Platform, and Confluent Cloud for DevOps? Integration Architects Rick Spurgeon and Allison Walther share how, including a custom tool they’ve developed for this very purpose. First, Rick and Allison share their perspective on what it means to be a DevOps engineer: mixing development and operations skills to deploy, manage, monitor, audit, and maintain distributed systems. DevOps is multifaceted and can be compared to glue: you’re stitching software, services, databases, Kafka, and more together to integrate end-to-end solutions. Using the Confluent Cloud Metrics API (actionable operational metrics), you can pull a wide range of metrics about your cluster, a topic, or a partition: bytes, records, and requests. The Metrics API is unique in that it is queryable. You can ask the API a question like “What's the max retained bytes per hour over 10 hours for my topic or my cluster?” and find out just like that. To make writing operators much easier, Rick and Allison also cover Crossplane, KUDO, and Shell-operator, and how to use these tools.EPISODE LINKSConfluent Cloud Metrics APIShell OperatorDevOps for Apache KafkaThe Kubernetes Universal Declarative OperatorIntroducing the AWS Controllers for Kubernetes (ACK)Manage any infrastructure your applications need directly from Kubernetes with CrossplaneDevOps for Apache Kafka with Kubernetes and GitOpsSpring Your Microservices into Production with Kubernetes and GitOpsJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
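To give a flavor of what “queryable” means, here is a rough sketch of posting a query to the Confluent Cloud Metrics API with Java’s built-in HTTP client. The endpoint version, metric name, payload shape, cluster ID, and credentials are assumptions drawn from the Metrics API documentation rather than from the episode; check the current API reference before relying on them.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class MetricsQuery {
  public static void main(String[] args) throws Exception {
    // All of these values are placeholders: endpoint version, metric name, interval,
    // and cluster ID depend on your account and the current Metrics API reference.
    String endpoint = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query";
    String apiKeyAndSecret = "API_KEY:API_SECRET";
    String body = """
        {
          "aggregations": [{ "metric": "io.confluent.kafka.server/retained_bytes" }],
          "filter": { "field": "resource.kafka.id", "op": "EQ", "value": "lkc-xxxxx" },
          "granularity": "PT1H",
          "intervals": ["2021-01-01T00:00:00Z/PT10H"]
        }
        """;

    HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
        .header("Authorization",
            "Basic " + Base64.getEncoder().encodeToString(apiKeyAndSecret.getBytes()))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());  // hourly retained-bytes datapoints as JSON
  }
}
```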
12/22/202046 minutes, 18 seconds
Episode Artwork

Apache Kafka 2.7 - Overview of Latest Features, Updates, and KIPs

Apache Kafka® 2.7 is here! Here are the key Kafka Improvement Proposals (KIPs) and updates in this release, presented by Tim Berglund. KIP-497 adds a new inter-broker API to alter the list of in-sync replicas (ISRs) that every partition leader maintains. KIP-497 is also related to the removal of ZooKeeper. KIP-599 has to do with throttling the rate of creating topics, deleting topics, and creating partitions. This KIP will add a new feature called the controller mutation rate. KIP-612 adds the ability to limit the connection creation rate on brokers, while KIP-651 supports the PEM format for SSL certificates and private keys. The release of Kafka 2.7 furthermore includes end-to-end latency metrics and sliding windows. Find out what’s new with the Kafka broker, producer, and consumer, and what’s new with Kafka Streams in today’s episode of Streaming Audio!EPISODE LINKSRead about what’s new in Apache Kafka 2.7Check out the Apache Kafka 2.7 release notesWatch the video version of this podcastJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
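One of the items above, sliding windows (KIP-450), is easiest to see in code. A minimal Kafka Streams sketch follows; the topic names, application ID, and bootstrap address are placeholders.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.SlidingWindows;

public class SlidingWindowExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sliding-window-demo");   // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    builder.<String, String>stream("clicks")                                 // hypothetical topic
        .groupByKey()
        // Count events per key over a 5-minute window that slides with each record,
        // allowing 30 seconds of grace for out-of-order data.
        .windowedBy(SlidingWindows.withTimeDifferenceAndGrace(
            Duration.ofMinutes(5), Duration.ofSeconds(30)))
        .count()
        // Flatten the windowed key and stringify the count so default String serdes apply.
        .toStream((windowedKey, count) -> windowedKey.key() + "@" + windowedKey.window().start())
        .mapValues(count -> Long.toString(count))
        .to("click-counts");                                                 // hypothetical topic

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}
```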
12/21/202010 minutes, 59 seconds
Episode Artwork

Choreographing the Saga Pattern in Microservices ft. Chris Richardson

Chris Richardson, creator of the original Cloud Foundry, maintainer of microservices.io and author of “Microservices Patterns,” discovered cloud computing in 2006 during an Amazon talk about APIs for provisioning servers. At this time, you could provision 20 servers and pay 10 cents per hour. This blew his mind and led him in 2008 to create the original Cloud Foundry, a PaaS for deploying Java applications on EC2.One of the original Cloud Foundry’s earliest success stories was a digital marketing agency for a beer company that ran a campaign around the Super Bowl. Cloud Foundry actually enabled them to deploy an application on AWS and then adjust its capacity based on the load. They were leveraging the elasticity of the cloud back in the ‘08–‘09 timeframe. SpringSource eventually acquired Cloud Foundry, followed by VMware. It's the origin of the name of today's Cloud Foundry.Later in the show, Chris explains what choreographed sagas are, reasons to leverage them, and how to measure their efficacy.EPISODE LINKSThe microservices pattern languageEventuate frameworkBook: The Art of ScalabilityUse podcon19 to get 40% off Microservices Patterns by Chris RichardsonJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
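A choreographed saga has no central coordinator: each service consumes the events it cares about and emits its own. The sketch below is a generic Kafka-based illustration of one participant, not Eventuate’s actual API; the topic names, group ID, and toy “authorization” rule are all hypothetical.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

/**
 * One participant in a choreographed saga: a hypothetical payment service.
 * It reacts to "order-created" events and emits "payment-authorized" or
 * "payment-failed" events, which the order service in turn reacts to
 * (for example, by cancelling the order as a compensating action).
 */
public class PaymentService {
  public static void main(String[] args) {
    Properties consumerProps = new Properties();
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "payment-service");
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    Properties producerProps = new Properties();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
         KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
      consumer.subscribe(List.of("order-created"));                              // hypothetical topic
      while (true) {
        for (ConsumerRecord<String, String> order : consumer.poll(Duration.ofSeconds(1))) {
          // Toy payment logic: authorize small orders, reject the rest.
          boolean authorized = order.value().length() < 100;
          String outcomeTopic = authorized ? "payment-authorized" : "payment-failed";
          producer.send(new ProducerRecord<>(outcomeTopic, order.key(), order.value()));
        }
      }
    }
  }
}
```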
12/16/202047 minutes, 49 seconds
Episode Artwork

Apache Kafka and Porsche: Fast Cars and Fast Data ft. Sridhar Mamella

We have all heard of Porsche, but did you know that Porsche utilizes event streaming with Apache Kafka®?  Today, Sridhar Mamella (Platform Manager, Data Streaming Platforms, Porsche) discusses how Kafka’s event streaming technology powers Porsche through Streamzilla.With the modern Porsche car having 150–200 sensors, Sridhar dives into what Streamzilla is and how it functions with Kafka on premises and in the cloud. He reveals how the first months of event streaming in production went, Porsche’s path to the cloud, Streamzilla's learnings from a developer and a business perspective, and plans for parts of Streamzilla to go open source.Stick around through the end as Sridhar talks through cloud migration, cloud-first strategy, and Porsche’s event streaming use cases. This Streaming Audio is all about speed—fast cars and fast data, an episode you won't want to miss!EPISODE LINKSWhy Software Is Eating the WorldEvery Company Is Becoming SoftwareTaycan Models at Porsche Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
12/7/202042 minutes, 59 seconds
Episode Artwork

Tales from the Frontline of Apache Kafka DevOps ft. Jason Bell

Jason Bell (Apache Kafka® DevOps Engineer, digitalis.io, and author of “Machine Learning: Hands-On for Developers and Technical Professionals”) delves into his 32-year journey as a DevOps engineer and how he discovered Apache Kafka. He began his voyage in hardware technology before switching over to software development. From there, he got involved in event streaming in the early 2000s, where his love for Kafka started. His first Kafka project involved monitoring Kafka clusters for flight search data, and he's been making magic ever since! Jason first learned about the power of event streaming during Michael Noll’s talk on the streaming API in 2015. It turned out that Michael had written off 80% of Jason’s streaming API jobs with a single talk. As a Kafka DevOps engineer today, Jason works with on-prem clusters and faces challenges like in-sync replicas going down and bringing other developers who are new to Kafka up to speed so that they can eventually adopt it and begin building out APIs for Kafka. He shares some tips that have helped him overcome these challenges and bring success to the team.EPISODE LINKSMachine Learning: Hands-On for Developers and Technical Professionals by Jason Bell Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
12/2/20201 hour, 25 seconds
Episode Artwork

Multi-Tenancy in Apache Kafka ft. Anna Povzner

Multi-tenancy has been quite the topic within the Apache Kafka® community. Anna Povzner, an engineer on the Confluent team, spends most of her time working on multi-tenancy in Kafka in Confluent Cloud. Anna kicks off the conversation with Tim Berglund (Senior Director of Developer Experience, Confluent) by explaining what multi-tenancy is, why it’s desirable, and its advantages over a single-tenant architecture. By putting more applications and use cases on the same Kafka cluster instead of having a separate Kafka cluster for each individual application and use case, multi-tenancy helps minimize the costs of physical machines and also maintenance. She then switches gears to discuss quotas in Kafka. Quotas are essentially limits—you must set quotas for every tenant (or set up defaults) in Kafka. Anna says it’s always best to start with bandwidth quotas because they’re better understood. Stick around until the end as Anna gives us a sneak peek at what’s ahead for multi-tenant Kafka, including KIP-612, the addition of the connection rate quota, which will help protect brokers.EPISODE LINKSSharing is Caring: Toward Creating Self-Tuning Multi-Tenant Kafka (Anna Povzner, Confluent)Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
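Anna’s advice to start with bandwidth quotas can be sketched with the Java AdminClient, which has supported altering client quotas since Kafka 2.6 (KIP-546). The principal name, byte rates, and bootstrap address below are placeholders.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class TenantQuotas {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder

    try (Admin admin = Admin.create(props)) {
      // Treat a user principal as the "tenant" (the principal name is hypothetical).
      ClientQuotaEntity tenant =
          new ClientQuotaEntity(Map.of(ClientQuotaEntity.USER, "analytics-team"));

      // Bandwidth quotas: cap produce and fetch throughput for this tenant, in bytes/sec.
      ClientQuotaAlteration alteration = new ClientQuotaAlteration(tenant, List.of(
          new ClientQuotaAlteration.Op("producer_byte_rate", 5_000_000.0),
          new ClientQuotaAlteration.Op("consumer_byte_rate", 10_000_000.0)));

      admin.alterClientQuotas(List.of(alteration)).all().get();
    }
  }
}
```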
11/23/202044 minutes, 19 seconds
Episode Artwork

Distributed Systems Engineering with Apache Kafka ft. Roger Hoover

Roger Hoover, one of the first engineers to work on Confluent Cloud, joins Tim Berglund (Senior Director of Developer Experience, Confluent) to chat about the evolution of Confluent Cloud, all the stages that it’s been through, and the lessons he’s learned on the way. He talks through the days before Confluent Platform was created, and how he contributed to Apache Kafka® to run it on OpenStack (the feature used to separate advertised hostnames from the internal hostnames). The Confluent Cloud control plane is now run in over 40 regions. Under the covers, Roger and his team are managing tens of thousands of resources at the cloud provider layer. This means creating VPCs, VMs, volumes, and DNS records, as well as managing software artifacts (like which version of Kafka is running) and user management. Confluent Cloud is a complex application and distributed system spread across the entire world, but Roger reveals how it's done.EPISODE LINKSBuilding Confluent Cloud – Here’s What We’ve Learned Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
11/18/202050 minutes, 24 seconds
Episode Artwork

Why Kafka Streams Does Not Use Watermarks ft. Matthias J. Sax

Do you ever feel like you’re short on time? Well, good news! Confluent Software Engineer Matthias J. Sax is back to discuss how event streaming has changed the game, making time management simpler and more efficient. Matthias explains what watermarks are, the reasons why Kafka Streams doesn’t use them, and an alternative approach to watermarking informally called the “slack time approach.” Later, Matthias discusses how “stream time,” the maximum timestamp observed so far, compares to the watermark approach, effectively serving as a high watermark. Stick around for the end of the episode, where Matthias reveals other new approaches in the pipeline. Learn how to get the most out of your time on today’s episode of Streaming Audio!EPISODE LINKSKafka Summit talk: The Flux Capacitor of Kafka Streams and ksqlDBWatermarks, Tables, Event Time, and the Dataflow ModelKafka Streams’ Take on Watermarks and TriggersJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
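The “slack time” idea shows up in the Streams DSL as a grace period on a window, with stream time (not a watermark) deciding when the window finally closes. A minimal sketch follows; the topic names, application ID, and bootstrap address are placeholders.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class StreamTimeExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-time-demo");       // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    builder.<String, String>stream("events")                                  // hypothetical topic
        .groupByKey()
        // Stream time (the max observed timestamp) advances as records arrive;
        // the grace period is the "slack" allowed for late, out-of-order records.
        .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(Duration.ofSeconds(30)))
        .count()
        // Emit a single final result per window once stream time passes
        // window end plus grace; no watermarks involved.
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
        .toStream((windowedKey, count) -> windowedKey.key())
        .mapValues(count -> Long.toString(count))
        .to("final-counts");                                                  // hypothetical topic

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}
```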
11/12/2020 · 52 minutes, 20 seconds

Distributed Systems Engineering with Apache Kafka ft. Apurva Mehta

What's it like being a distributed systems engineer? Apurva Mehta (Engineering Leader, Confluent) explains what attracted him to Apache Kafka®, the challenges and uniqueness of distributed systems, and how to excel in this industry. He dives into the complex math behind the temporal logic of actions (TLA) and shares his experiences working at Yahoo and LinkedIn, which have prepared him to be where he is today. Apurva also shares what he looks for when hiring someone to join his team. When you're working on a system like Kafka and Kafka Streams, really understanding what your machine is doing, where the bottlenecks are, and how to design improvements to address inefficiencies is critical. EPISODE LINKSJason Gustafson discusses TLA validation (and distributed systems engineering in general) MIT Courseware on Distributed Systems Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
11/2/2020 · 49 minutes, 15 seconds

Most Terrifying Apache Kafka JIRAs of 2020 ft. Anna McDonald

It’s Halloween again, which means Anna McDonald (Staff Technical Account Manager, Confluent) is back for another spooktacular episode of Streaming Audio. In this episode, Anna shares six of the most spine-chilling, hair-raising Apache Kafka® JIRAs from the past year. Her job is to help hunt down problems like these and dig up skeletons like: Early death causes epoch time travel; Attack of the clones; Missing snapshot file leads to madness; Shrink inWriteLock time to avoid maiming cluster performance; Older groups are forced to flatline; and Ghost segment haunts for eternity. If JIRAs are undead monsters, Anna is practically a zombie slayer. Get a haunting taste of the horrors that she's battled with as she walks through each of these Kafka updates. Keep calm and scream on in today’s special episode of Streaming Audio!EPISODE LINKSKafka: A Modern Distributed SystemFrom Eager to Smarter in Apache Kafka Consumer Rebalances by Sophie Blee-GoldmanThe Magical Rebalance Protocol of Apache Kafka (Strange Loop)Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
10/28/2020 · 51 minutes, 59 seconds

Ask Confluent #18: The Toughest Questions ft. Anna McDonald

It’s the first work-from-home episode of Ask Confluent, where Gwen Shapira (Core Kafka Engineering Leader, Confluent) virtually sits down with Apache Kafka® expert Anna McDonald (Staff Technical Account Manager, Confluent) to answer questions from Twitter. Find out Anna’s favorite Kafka Improvement Proposal (KIP), which will start to use racially neutral terms in the Kafka community and in our code base, as well as answers to the following questions: If you could pick any one KIP from the backlog that hasn't yet been implemented and have it immediately available, which one would you pick? Are we able to arrive at any formula for identifying the consumer/producer throughput rate in Kafka with the given hardware specifications (CPU, RAM, network, and disk)? Does incremental cooperative rebalancing also work for general Kafka consumers in addition to Kafka Connect rebalancing? They also answer how to determine throughput and achieve your desired SLA by using partitions. EPISODE LINKSWatch Ask Confluent #18: The Toughest Questions ft. Anna McDonaldFrom Eager to Smarter in Apache Kafka Consumer RebalancesStreaming Heterogeneous Databases with Kafka Connect – The Easy WayKeynote: Tim Berglund, Confluent | Closing Keynote Presentation | Kafka Summit 2020Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
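On the partition-sizing question, a commonly cited community rule of thumb (a general guideline, not a formula quoted from this episode) is:

```latex
\text{partitions} \;\geq\; \max\!\left(\frac{T}{p},\ \frac{T}{c}\right)
```

where T is the target throughput for the topic and p and c are the measured per-partition producer and consumer throughput on the given hardware; treat it as a starting point for benchmarking rather than an SLA guarantee.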
10/21/2020 · 33 minutes, 46 seconds

Joining Forces with Spring Boot, Apache Kafka, and Kotlin ft. Josh Long

Wouldn’t it be awesome if there were a language as elegant as Spring Boot is as a framework? In this episode of Streaming Audio, Tim Berglund talks with Josh Long, Spring developer advocate at VMware, about Kotlin, the productivity-focused language from our friends at JetBrains, and how it works with Spring Boot to make the experience leaner, cleaner, and easy to use. Josh shares how the Spring and Kotlin teams have worked hard to make sure that Kotlin and Spring Boot are a first-class experience for all developers trying to get to production faster and safer. They also talk about the issues that arise when wrapping one set of APIs with another, as often happens in the Spring Framework: when APIs should leak, when they should not, and how not to try to be a better Kafka Streams when the original is working well enough. EPISODE LINKSJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
10/21/2020 · 50 minutes, 41 seconds

Building an Apache Kafka Center of Excellence Within Your Organization ft. Neil Buesing

Neil Buesing, an Apache Kafka® community stalwart at Object Partners, spends his days building things out of Kafka and helping others do the same. Today, he discusses the concept of a CoE (center of excellence), and how a CoE is integral to attaining and sustaining world-class performance, business value, and success in a business. Neil talks us through how to make a CoE successful, the importance of event streaming, how to better understand streaming technologies, and how to best utilize a CoE for your needs. This includes evangelizing Kafka, building a Proof of Value (PoV) with team members, defining deliverables as part of that CoE, and understanding how to implement Kafka in your organization. EPISODE LINKSEoS in Kafka: Listen up, I will only say this once! by Jason Gustafson The Magical Rebalance Protocol of Apache Kafka by Gwen Shapira Chair-throwing meme that was discussed at end of episode Apache Kafka and Confluent Platform Reference ArchitectureBenchmark Your Dedicated Apache Kafka Cluster on Confluent CloudOptimizing Your Apache Kafka DeploymentCluster sizingJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
10/14/2020 · 46 minutes, 22 seconds

Creating Your Own Kafka Improvement Proposal (KIP) as a Confluent Intern ft. Leah Thomas

Ever wonder what it's like to intern at a place like Confluent? How about working with Kafka Streams and creating your own KIP? Well, that's exactly what we discuss on today's episode with Leah Thomas. Leah Thomas, who first interned as a recruiter for Confluent, quickly realized that she was enamored with the problem solving the engineering team was doing, especially with Kafka Streams. The next time she joined Confluent's intern program, she worked on the Streams team and helped bring KIP-450 to life. With KIP-450, Leah started learning Apache Kafka® from the inside out and how to better address the user experience. She discusses her experience with getting a KIP approved with the Apache Software Foundation and how she dove into solving the problem of hopping windows with sliding windows instead.EPISODE LINKSRange: How Generalists Triumph in a Specialized WorldConfluent CareersJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
10/7/2020 · 46 minutes, 15 seconds

Confluent Platform 6.0 | What's New in This Release + Updates

The feature-rich release of Confluent Platform 6.0, based on Apache Kafka® 2.6, introduces Tiered Storage, Self-Balancing Clusters, ksqlDB 0.10, Admin REST APIs, and Cluster Linking in preview. These features enhance the platform with greater elasticity, improved cost effectiveness, infinite data retention, and global availability so that you can simplify management operations, reduce the cost of adopting Kafka, and focus on building event streaming applications.EPISODE LINKSConfluent Platform 6.0 Release NotesIntroducing Confluent Platform 6.0Download Confluent Platform 6.0Watch the video version of this podcastJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
10/1/2020 · 14 minutes, 11 seconds

Using Event Modeling to Architect Event-Driven Information Systems ft. Bobby Calderwood

Bobby Calderwood (Founder, Evident Systems) discusses event streaming, event modeling, and event-driven architecture. He describes the emerging visual language and process, how to effectively understand and teach what events are, and some of Bobby's own use cases in the field with oNote, Evident System’s new SaaS platform for event modeling. Finally, Bobby emphasizes the power of empowering and informing the community on how best to integrate event streaming with the outside world.EPISODE LINKSBuilding Information Systems Using Event Modeling Real-Time Payments with Clojure and Apache Kafka ft. Bobby CalderwoodEvent modeling leaders Adam Dymitruk and Greg YoungGood Enough Software is by Definition Good Enough written by Greg YoungoNoteEvent modelingJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
9/30/2020 · 56 minutes, 41 seconds

Using Apache Kafka as the Event-Driven System for 1,500 Microservices at Wix ft. Natan Silnitsky

Did you know that a team of 900 developers at Wix is using Apache Kafka® to maintain 1,500 microservices? Tim Berglund sits down with Natan Silnitsky (Backend Infrastructure Engineer, Wix) to talk all about how Wix benefits from using an event streaming platform. Wix (the website that’s made for building websites) is designing a platform that gives people the freedom to create, manage, and develop their web presence exactly the way they want as they look to move from synchronous to asynchronous messaging. In this episode, Natan and Tim talk through some of the vital lessons learned at Wix through their use of Kafka, including common infrastructure, at-least-once processing, message queuing, and monitoring. Finally, Natan gives Tim a brief overview of the open source project Greyhound and how it's being used at Wix. EPISODE LINKSgithub.com/wix/greyhoundJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperLive demo: Kafka streaming in 10 minutes on Confluent CloudUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
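As a rough illustration of the at-least-once pattern mentioned above (a generic sketch, not Wix's or Greyhound's actual code), a consumer can disable auto-commit and commit offsets only after processing, so a crash replays records rather than dropping them; the broker address, group, and topic are hypothetical.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtLeastOnceConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "site-events-processor");   // hypothetical group
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    // Turn off auto-commit so offsets are committed only after successful processing.
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(List.of("site-events")); // hypothetical topic
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
          process(record); // if this throws, the batch is re-read after restart: at-least-once
        }
        consumer.commitSync(); // commit only after the whole batch has been processed
      }
    }
  }

  private static void process(ConsumerRecord<String, String> record) {
    System.out.printf("key=%s value=%s%n", record.key(), record.value());
  }
}
```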
9/21/2020 · 49 minutes, 12 seconds

Top 6 Things to Know About Apache Kafka ft. Gwen Shapira

This year, Confluent turns six! In honor of this milestone, we are taking a very special moment to celebrate with Gwen Shapira by highlighting the top six things everyone should know about Apache Kafka®: clients have metrics; bug fix releases and Kafka Improvement Proposals (KIPs); idempotent producers and how they work; Kafka Connect is part of Kafka, and Single Message Transforms (SMTs) are not to be missed; cooperative rebalancing; and generating sequence numbers and how Kafka changes the way you think. Listen as Tim and Gwen talk through the importance of Kafka Connect, cooperative rebalancing protocols, and the promise (and warning) that your data architecture will never be the same. As Gwen puts it, “Kafka gives you the options, but it's up to you how you use it.”EPISODE LINKSKIP-415: Incremental Cooperative Rebalancing in Kafka ConnectWhy Kafka Connect? ft. Robin Moffatt Confluent Hub Creativity IncFifth Discipline Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
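For the idempotent-producer item on that list, enabling it is a one-line configuration change; the following minimal sketch uses an illustrative broker address and a hypothetical topic, and the values are not from the episode.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Idempotence deduplicates broker-side retries per partition and producer session.
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
    // Idempotence requires acks=all and a bounded number of in-flight requests.
    props.put(ProducerConfig.ACKS_CONFIG, "all");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>("orders", "order-1", "created")); // hypothetical topic
      producer.flush();
    }
  }
}
```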
9/15/2020 · 47 minutes, 27 seconds

5 Years of Event Streaming and Counting ft. Gwen Shapira, Ben Stopford, and Michael Noll

With the explosion of real-time data, Apache Kafka and event stream processing (ESP) have proliferated, with event streaming technology becoming the de facto technology transforming businesses across numerous verticals. Gwen Shapira (Engineering Leader, Confluent), Ben Stopford (Senior Director, OCTO, Confluent), and Michael Noll (Principal Technologist, Confluent) meet up to talk all about their last five years at Confluent and the changes they’ve seen in event streaming. They discuss what they were doing with Apache Kafka® before they arrived at Confluent, challenges that have arisen in event streaming, and their favorite use cases. They then talk through what they think the Kafka community is undervaluing and where they think event streaming will be in the next five years. EPISODE LINKSTim’s Budapest Drone Footage Rolling Kafka Upgrades and Confluent Cloud ft. Gwen ShapiraDistributed Systems Engineering with Apache Kafka ft. Gwen ShapiraImproving Fairness Through Connection Throttling in the Cloud with KIP-402 ft. Gwen ShapiraAsk ConfluentApache Kafka Fundamentals: The Concept of Streams and Tables ft. Michael NollBen Stopford on Microservices and Event StreamingThe Portable Wonder Synthesizer Children's Hospital of Atlanta: Helping Healthcare with Apache Kafka and KSQL ft. Ramesh SringeriJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
8/31/2020 · 48 minutes, 18 seconds

Championing Serverless Eventing at Google Cloud ft. Jay Smith

Jay Smith helps Google Cloud users modernize their applications with serverless eventing. This helps them focus on their code instead of managing infrastructure, as well as ultra-fast deployments and reduced server costs. On today’s show, he discusses the definition of serverless, serverless eventing, data-driven vs. event-driven architecture, sources and sinks, and hybrid cloud with on-prem components. Finally, Jay shares how he sees application architecture changing in the future and where Apache Kafka® fits in.EPISODE LINKSQuine ProgramsGet Started with QwiklabsKubernetes PodcastsJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
8/26/2020 · 47 minutes, 26 seconds

Disaster Recovery with Multi-Region Clusters in Confluent Platform ft. Anna McDonald and Mitch Henderson

Multi-Region Clusters improve high availability in Apache Kafka®, ensure cluster replication across multiple zones, and help with disaster recovery. Making sure users are successful in every area of their Kafka deployment, be it operations or application development for specific use cases, is what Anna McDonald (Team Lead Customer Success Technical Architect) and Mitch Henderson (Principal Customer Success Technical Architect) are passionate about here at Confluent. In this episode, they share common challenges that users often run into with Multi-Region Clusters, use cases for them, and what to keep in mind when considering replication. Anna and Mitch also discuss consuming from followers, auto client failover, and offset issues to be aware of.EPISODE LINKSKafka Screams: The Scariest JIRAs and How to Survive Them ft. Anna McDonaldDeploying Confluent Platform, from Zero to Hero ft. Mitch HendersonJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
8/17/2020 · 43 minutes, 4 seconds

Developer Advocacy (and Kafka Summit) in the Pandemic Era

All Confluent developer advocates...assemble! COVID-19 has changed the face of meetings and events, halting all in-person gatherings and forcing companies to adapt on the fly. In today's episode of Streaming Audio, the developer advocates come together to discuss how their jobs have changed during the worldwide pandemic. Less than a year ago, this group was constantly on the road or in a plane on their way to present something new about Apache Kafka and event streaming, so how has the current climate affected their work? The group talks about Zoom fatigue, online presenting, online conferences/meetups, and of course, Kafka Summit 2020. EPISODE LINKSGrowing the Event Streaming Community During COVID-19 ft. Ale MurrayRegister for Kafka Summit 2020Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
8/12/2020 · 41 minutes, 44 seconds

Apache Kafka 2.6 - Overview of Latest Features, Updates, and KIPs

Apache Kafka® 2.6 is out! This release includes progress toward removing the ZooKeeper dependency, new client quota APIs in the admin client, exposed disk read and write metrics, and support for Java 14. In addition, there are improvements to Kafka Connect, such as allowing source connectors to set topic-specific settings for new topics and expanding Connect worker internal topic settings. Kafka 2.6 also augments Kafka Streams metrics and adds emit-on-change support, along with other updates. EPISODE LINKSWatch the video version of this podcastRead about what's new in Apache Kafka 2.6Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
8/6/2020 · 10 minutes, 37 seconds

Testing ksqlDB Applications ft. Viktor Gamov

Viktor Gamov (Developer Advocate, Confluent) returns to Streaming Audio to explain the magic of ksqlDB, ideal testing environments for ksqlDB, and the ksqlDB test runner. For those who are just starting to explore the interface, Viktor provides some tips and best practices for what to look out for too. He also talks about the future of ksqlDB, the future of integration testing, and his favorite new feature among recent upgrades.EPISODE LINKSStreaming Audio episodes on ksqlDBWatch #LiveStreams with Viktor Gamov I Don't Always Test My StreamsJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
8/3/2020 · 39 minutes, 36 seconds

How to Measure the Business Value of Confluent Cloud ft. Lyndon Hedderly

As developers, we are good at envisioning the future state of any given system we want to build, but are we as good at telling the business how those changes positively impact the bottom line? Lyndon Hedderly (Team Lead, Business Value Consulting, Confluent) describes his approach to business value, how to justify a new technology that you’re introducing to your company, and tips on adopting new technologies and processes effectively. As Lyndon walks through each part of the business value framework: (1) baseline, (2) target state, (3) quantified benefits, (4) unquantified benefits, and (5) proof points, you’ll learn about cost effectiveness with Confluent Cloud, how to measure ROI vs. TCO, and a retail example from a customer that details their implementation of an event streaming platform.EPISODE LINKSMeasuring the Cost Effectiveness of Confluent Cloud Measuring TCO: Apache Kafka vs. Confluent Cloud’s Managed Service Get a Free TCO AssessmentJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
7/27/2020 · 54 minutes, 29 seconds

Modernizing Inventory Management Technology ft. Sina Sojoodi and Rohit Kelapure

Inventory management systems are crucial for reducing real-time inventory data drift, improving customer experience, and minimizing out-of-stock events. Apache Kafka®’s real-time data technology provides seamless inventory tracking at scale, saving billions of dollars in the supply chain, making modernized data architectures more important to retailers now more than ever.  In this episode, we’ll discuss how Apache Kafka allows the implementation of stateful event streaming architectures on a cloud-native platform for application and architecture modernization. Sina Sojoodi (Global CTO, Data and Architecture, VMware) and Rohit Kelapure (Principal Advisor, VMware) will discuss data modeling, as well as the architecture design needed to achieve data consistency and correctness while handling the scale and resilience needs of a major retailer in near real time. The implemented solution utilizes Spring Boot, Kafka Streams, and Apache Cassandra, and they explain the process of using several services to write to Cassandra instead of trying to use Kafka as a distributed log for enforcing consistency.  EPISODE LINKSHow to Run Kafka Streams on Kubernetes ft. Viktor GamovMachine Learning with Kafka Streams, Kafka Connect, and ksqlDB ft. Kai WaehnerUnderstand What’s Flying Above You with Kafka Streams ft. Neil BuesingJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
7/20/2020 · 41 minutes, 32 seconds

Fault Tolerance and High Availability in Kafka Streams and ksqlDB ft. Matthias J. Sax

Apache Kafka® Committer and PMC member Matthias J. Sax explains fault tolerance, high-availability stream processing, and how it’s done in Kafka Streams. He discusses the differences between changelogging and checkpointing and the complexities checkpointing introduces. From there, Matthias explains what hot standbys are and how they are used in Kafka Streams, why Kafka Streams doesn’t do watermarking, and finally, why Kafka Streams is a library and not infrastructure. EPISODE LINKSAsk Confluent #7: Kafka Consumers and Streams Failover Explained ft. Matthias SaxAsk Confluent #8: Guozhang Wang on Kafka Streams Standby TasksHow to Run Kafka Streams on Kubernetes ft. Viktor GamovKafka Streams Interactive Queries Go Prime TimeHighly Available, Fault-Tolerant Pull Queries in ksqlDBKIP-535: Allow state stores to serve stale reads during rebalanceKIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instanceKIP-441: Smooth Scaling Out for Kafka StreamsJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
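To make the hot-standby idea concrete, here is a minimal sketch of the relevant Kafka Streams setting (num.standby.replicas); the application ID and broker address are placeholders, the topology is omitted, and this is an illustration rather than code from the episode.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class StandbyConfigExample {
  public static Properties standbyProps() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-aggregator");  // hypothetical app id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    // Keep one hot standby copy of each local state store on another instance,
    // so failover avoids a full restore from the changelog topic.
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
    return props; // build a topology and start KafkaStreams with these properties
  }
}
```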
7/15/2020 · 54 minutes, 3 seconds

Benchmarking Apache Kafka Latency at the 99th Percentile ft. Anna Povzner

Real-time stock trades, GPS location, and website click tracking are just a few of the use cases that rely heavily on Apache Kafka®'s real-time messaging and data delivery functions. As such, Kafka's latency is incredibly important. Anna Povzner (Software Engineer, Confluent) gives you the breakdown and everything you need to know when it comes to measuring latency. The five components of latency are produce time, publish time, commit time, catch-up time, and fetch time. When the consumer's pull model adds to latency, Anna shares some best practices for thinking about partitioning in conjunction with latency. She also discusses client configuration in the cloud, interesting problems she's helped solve for customers, and her top two tips for debugging latency. EPISODE LINKS99th Percentile Latency at Scale with Apache KafkaBenchmark Your Dedicated Apache Kafka Cluster on Confluent CloudDistributed Systems Engineering with Apache Kafka ft. Gwen ShapiraJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
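As a small illustration of how client configuration affects the produce-side components of latency, the sketch below sets two well-known producer knobs; the values are illustrative trade-offs, not recommendations from the episode.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class LowLatencyProducerConfig {
  public static Properties lowLatencyProps() {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // linger.ms=0 sends batches as soon as possible, minimizing produce time
    // at the cost of smaller batches and lower throughput.
    props.put(ProducerConfig.LINGER_MS_CONFIG, "0");
    // acks=all adds commit time (waiting on the in-sync replicas) but protects against
    // data loss; end-to-end latency measurements should include this replication cost.
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    return props;
  }
}
```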
7/8/2020 · 46 minutes, 30 seconds

Open Source Workflow Automation with Apache Kafka ft. Bernd Ruecker

Camunda started out as a consulting company and eventually turned into a developer-friendly, open source vendor that now focuses on workflow automation. Bernd Ruecker, a co-founder and the chief technologist at Camunda, talks through the company's journey, how he ended up in open source, and all things automation, including how it differs from business process management and the issue of diagrams. Bernd also dives into dead letter topics in Apache Kafka®, software interacting with software, orchestration tension, and best practices for approaching challenges that pop up along the way. This episode will take you through a thorough introduction to Camunda Cloud, a cloud-native workflow engine, as well as Camunda’s Kafka connector. EPISODE LINKSJay Kreps, Confluent | Kafka Summit SF 2019 Keynote ft. Dev Tagare, Lyftzeebe.ioJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
6/29/2020 · 43 minutes, 3 seconds

Growing the Event Streaming Community During COVID-19 ft. Ale Murray

We've all been affected by COVID-19 in one way or another, resulting in big changes in workplace functionality, productivity, and even our relationships within the Apache Kafka® and Confluent communities as meetings and events have needed to turn virtual. Ale Murray (Global Community Manager, Confluent) shares interesting trends, changes in community metrics, and what we’ve done to adapt as a response. Ale also explains what makes a comprehensive community program and the value of community meetups in light of the pandemic. Despite how much we miss in-person interactions, by digitizing events and focusing on the community, we saw great growth in attendance and engagement across our Slack community, online hackathons, MVP program, and online meetups over the last couple of months, proving that nothing can stop this amazing community from thriving.EPISODE LINKSGet involved with the Confluent CommunityJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage*
6/24/2020 · 40 minutes, 19 seconds

From Monolith to Microservices with Sam Newman

Author Sam Newman catches up with Tim Berglund (Senior Director of Developer Advocacy, Confluent) in the virtual studio on what microservices are, how they work, the drawbacks of microservices, what splitting the monolith looks like, and patterns to look for. The pair talk through Sam's book “Monolith to Microservices” chapter by chapter, looking at key components of microservices in more detail. Sam also walks through database decomposition, integrating with new technology, and performing joins in event streaming architecture. Lastly, Sam shares what he’s excited for in the future, which includes “Monolith to Microservices Volume II.”EPISODE LINKSMonolith to MicroservicesJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent DeveloperUse 60PDCAST to get an additional $60 of free Confluent Cloud usage* 
6/17/2020 · 40 minutes, 27 seconds

Exploring Event Streaming Use Cases with µKanren ft. Tim Baldridge

Tim Baldridge (Senior Software Engineer, Cisco) joins us on Streaming Audio to talk about event streaming, stream processing use cases, and µKanren. First, Tim shares about his work at Cisco related to intaking viruses, the backend, and finding new ways to process data. Later, Tim talks about interesting bank and airline use cases, as well as his time at Walmart, taking a closer look at specific retail use cases and the product that Walmart used to process data streams. If you’re curious about what µKanren is, how it relates to relational programming, the complex math that goes into the workflow of µKanren, and how Apache Kafka® holds up to all other event streaming platforms, Tim also dives into that too. EPISODE LINKSµKanren: A Minimal Functional Core for Relational ProgrammingIt's Actors All The Way Down  Den of Clojure Build Your Own Logic EngineJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent Developer
6/8/2020 · 51 minutes

Introducing JSON and Protobuf Support ft. David Araujo and Tushar Thole

Confluent Platform 5.5 introduces long-awaited JSON Schema and Protobuf support in Confluent Schema Registry and across other platform components. Support for Protobuf and JSON Schema in Schema Registry provides the same assurances of data compatibility and consistency we already had with Avro, while opening up Kafka to more businesses, applications, and use cases that are built upon those data serialization formats. Tushar Thole (Engineering Leader, Confluent) and David Araujo (Product Manager, Confluent) share these new improvements to Confluent Schema Registry, the differences between Apache Avro™, Protobuf, and JSON Schema, how to treat optional fields, some of the arguments between Avro and Protobuf, and why it took some time for Schema Registry to support JSON Schema and Protobuf. Later, they talk about custom plugins, adding another layer of safety in Confluent Platform 5.5, and their vision for data governance.EPISODE LINKSIntroducing Confluent Platform 5.5Confluent Platform Now Supports Protobuf, JSON Schema, and Custom FormatsDownload Confluent PlatformGetting Started with Protobuf in Confluent CloudRead articles by Robert Yokota Schema Validation with Confluent Platform 5.4 Playing Chess with Confluent Schema RegistryJSON Schema specsSend feedback to [email protected] managed Apache Kafka as a service! Try free.Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent Developer
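To show what the new serializer support looks like from a producer's point of view, here is a minimal configuration sketch using Confluent's Protobuf serializer; the broker and Schema Registry URLs are illustrative placeholders, and in a real application the record value type would be a Protobuf-generated class.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProtobufSerializerConfig {
  public static Properties protobufProducerProps() {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // illustrative
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Confluent's Protobuf serializer registers the message schema with Schema Registry
    // and enforces the configured compatibility rules, much as the Avro serializer does.
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "io.confluent.kafka.serializers.protobuf.KafkaProtobufSerializer");
    props.put("schema.registry.url", "http://localhost:8081");                 // illustrative
    return props;
  }
}
```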
6/1/2020 · 40 minutes

Scaling Apache Kafka in Retail with Microservices ft. Matt Simpson from Boden

Apache Kafka® is a powerful toolset for microservice architectures. In this podcast, we’ll cover how Boden, an online retail company that specializes in high-end fashion linked to the royal family, used streaming microservices to modernize their business. Matt Simpson (Solutions Architect, Boden) shares a real life use case showing how Kafka has helped Boden digitize their business, transitioning from catalogs to online sales, tracking stock, and identifying buying patterns. Matt also shares about what he's learned through using Kafka as well as the challenges of being a product master. And lastly, what is Matt excited for for the future of Boden? Find out in this episode!EPISODE LINKSDigital Transformation in Style: How Boden Innovates Retail Using Apache KafkaLearn about BodenETL and Event Streaming Explained ft. Stewart BrysonConnecting Snowflake and Apache Kafka ft. Isaac KunenInstagram for Kensington PalaceJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent Developer
5/27/2020 · 42 minutes, 1 second

Connecting Snowflake and Apache Kafka ft. Isaac Kunen

Isaac Kunen (Senior Product Manager, Snowflake) and Tim Berglund (Senior Director of Developer Advocacy, Confluent) practice social distancing by meeting up in the virtual studio to discuss all things Apache Kafka® and Kafka Connect at Snowflake. Isaac shares what Snowflake is, what it accomplishes, and his experience with developing connectors. The pair discuss the Snowflake Kafka Connector and some of the unique challenges and adaptations it has had to undergo, as well as the interesting history behind the connector. In addition, Isaac talks about how they’re taking on event streaming at Snowflake by implementing the Kafka connector and what he hopes to see in the future with Kafka releases. EPISODE LINKSDownload the Snowflake Kafka ConnectorPaving a Data Highway with Kafka Connect ft. Liz BennettMaking Apache Kafka Connectors for the Cloud ft. Magesh NandakumarMachine Learning with Kafka Streams, Kafka Connect, and ksqlDB ft. Kai WaehnerConnecting to Apache Kafka with Neo4jContributing to Open Source with the Kafka Connect MongoDB Sink ft. Hans-Peter GrahslConnecting Apache Cassandra to Apache Kafka with Jeff Carpenter from DataStaxWhy Kafka Connect? ft. Robin MoffattJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent Developer
5/20/2020 · 31 minutes, 46 seconds

AMA with Tim Berglund | Streaming Audio Special

Happy 100th episode of Streaming Audio! Thank you to everyone who has listened, subscribed, left a review, and mostly, for sharing our passion for event streaming. We can't wait for the next 100! To celebrate, Ben Stopford (Senior Director of the Office of the CTO, Confluent) hosts an AMA (ask me anything) with Tim, covering 62 questions in total—from his career, his time at Confluent, Marvel vs. DC, and what he looks for in a new hire, to how to nail your next conference talk. We hope you enjoy this special 100th episode of Streaming Audio: a podcast about Apache Kafka®, Confluent, and the cloud.EPISODE LINKSThe Song of the Strange AsceticAvoiding Lock-InBlogs by Ben Stopford Join the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent Developer
5/18/2020 · 47 minutes, 9 seconds

Kubernetes Meets Apache Kafka ft. Kelsey Hightower

Kelsey Hightower was already an advocate, just like all other developers, long before joining Google officially as a developer advocate and Kubernetes expert. Gaining trust in your product, process, and the way you develop code requires the ability to explain those things well. Kelsey reflects on the journey that brought him to where he is today and how Kubernetes has evolved over the years too, including what makes Kubernetes so successful. But Tim is not the only one with questions. Kelsey asks a few of his own: does Apache Kafka® want to be a database? Does Kafka want to be a system of record? Is there overlap between Kubernetes and Kafka? Can you run Kafka on Kubernetes?EPISODE LINKSKubernetes the Hard WayJoin the Confluent Community SlackLearn about Kafka at Confluent Developer
5/13/2020 · 42 minutes, 2 seconds

Apache Kafka Fundamentals: The Concept of Streams and Tables ft. Michael Noll

If you’ve ever wondered what Apache Kafka® is, what it’s used for, or wanted to learn about Kafka architecture and all its components, buckle up! In today’s episode, Michael Noll (Principal Technologist, Confluent) and Tim Berglund (Senior Director of Developer Advocacy, Confluent) discuss a series of fundamental questions: What is Kafka? What is an event? How do we organize and store events? And what is Kafka Streams? Over the course of this episode, Michael takes an in-depth look at Kafka technology and core concepts: the process of reading from a topic, differences between tables and streams, mutability, and what ksqlDB is and what its event streaming database features accomplish. If you've ever wanted to get a better grasp on how Kafka works, this episode is for you!EPISODE LINKSStreams and Tables in Apache Kafka: A PrimerStreams and Tables in Apache Kafka: Topics, Partitions, and Storage FundamentalsStreams and Tables in Apache Kafka: Processing Fundamentals with Kafka Streams and ksqlDBStreams and Tables in Apache Kafka: Elasticity, Fault Tolerance, and other Advanced ConceptsBrowse the Confluent HubJoin the Confluent Community SlackLearn more with Kafka tutorials, resources, and guides at Confluent Developer
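As a compact illustration of the stream/table duality discussed here, the sketch below aggregates an event stream into a continuously updated table and then turns the table's changelog back into a stream; the topic names are hypothetical and this is not code from the episode.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class StreamTableDuality {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    // A stream: an unbounded, append-only sequence of immutable events.
    KStream<String, String> pageViews = builder.stream("page-views"); // hypothetical topic

    // A table: the continuously updated "current state" derived from those events.
    KTable<String, Long> viewsPerUser = pageViews
        .groupByKey()   // events keyed by user id, in this sketch
        .count();       // the latest count per key is the table's value

    // The table's changelog is itself a stream again -- the duality in one line.
    viewsPerUser.toStream().to("views-per-user", Produced.with(Serdes.String(), Serdes.Long()));
  }
}
```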
5/4/2020 · 48 minutes, 52 seconds

IoT Integration and Real-Time Data Correlation with Kafka Connect and Kafka Streams ft. Kai Waehner

There are two primary industries within the Internet of Things (IoT): industrial IoT (IIoT) and consumer IoT (CIoT), both of which can benefit from the Apache Kafka® ecosystem, including Kafka Streams and Kafka Connect. Kai Waehner, who works in the advanced tech group at Confluent with customers, defining their needs, use cases, and architecture, shares example use cases where he’s seen IoT integration in action. He specifically focuses on Walmart and its real-time customer integration using the Walmart app. Kafka Streams helps fine-tune the Walmart app, optimizing the user experience, offering a seamless omni-channel experience, and contributing to business success. Other topics discussed in today’s episode include integration from various legacy and modern IoT data sources, latency sensitivity, machine learning for quality control and predictive maintenance, and when event streaming can be more useful than traditional databases or data lakes.EPISODE LINKSApache Kafka 2.5 – Overview of Latest Features, Updates, and KIPsMachine Learning with Kafka Streams, Kafka Connect, and ksqlDB ft. Kai WaehnerBlog posts by Kai WaehnerProcessing IoT Data from End to End with MQTT and Apache Kafka®End-to-End Integration: IoT Edge to Confluent CloudApache Kafka is the New Black at the Edge in Industrial IoT, Logistics, and RetailingApache Kafka, KSQL, and Apache PLC4X for IIoT Data Integration and ProcessingStreaming Machine Learning at Scale from 100,000 IoT Devices with HiveMQ, Apache Kafka, and TensorFlowEvent-Model Serving: Stream Processing vs. RPC with Kafka and TensorFlowJoin the Confluent Community SlackLearn about Kafka at Confluent Developer
4/29/2020 · 40 minutes, 55 seconds

Confluent Platform 5.5 | What's New in This Release + Updates

Confluent Platform 5.5 is out, and Tim Berglund (Senior Director of Developer Advocacy, Confluent) is here to give you the latest updates! The first is improved schema management and Confluent Schema Registry support for Protobuf and JSON, making these components pluggable. The second is better support for languages other than Java within the sphere of librdkafka. And finally, this release includes an upgrade to ksqlDB, which expands its functionality, supports more data types, increases availability for pull queries, and adds a new aggregate function.EPISODE LINKSConfluent Platform 5.5 Release NotesIntroducing Confluent Platform 5.5Watch the video version of this podcastJoin the Confluent Community SlackLearn about Kafka at Confluent Developer
4/24/2020 · 11 minutes, 20 seconds

Making Abstract Algebra Count in the World of Event Streaming ft. Sam Ritchie

During his time at Twitter, Sam Ritchie (Staff Research Engineer, Google) led the development of Summingbird, a project that helped Twitter ingest and process massive amounts of data. It relieved some key pain points, saving developers at Twitter from doing work twice, which was a natural consequence of the then-current Lambda Architecture. In this episode, Sam teaches us some abstract algebra and explains how it has informed his attempts to make stream processing programs easy to write in a more general way.EPISODE LINKSCheck out SummingbirdJoin the Confluent Community SlackLearn about Kafka at Confluent Developer
4/22/2020 · 46 minutes, 21 seconds

Apache Kafka 2.5 – Overview of Latest Features, Updates, and KIPs

Apache Kafka® 2.5 is here, and we’ve got some Kafka Improvement Proposals (KIPs) to discuss! Tim Berglund (Senior Director of Developer Advocacy, Confluent) shares improvements and changes to over 10 KIPs all within the realm of Core Kafka, Kafka Connect, and Kafka Streams, including foundational improvements to exactly once semantics, the ability to track a connector’s active topics, and adding a new co-group operator to the Streams DSL.EPISODE LINKSCheck out the Apache Kafka 2.5 release notesRead about what’s new in Apache Kafka 2.5Watch the video version of this podcastJoin the Confluent Community SlackLearn about Kafka at Confluent Developer
4/16/2020 · 10 minutes, 28 seconds

Streaming Data Integration – Where Development Meets Deployment ft. James Urquhart

Applications, development, deployment, and theory are all key pieces behind customer experience, event streaming, and improving systems and integration. James Urquhart (Global Field CTO, VMware) is writing a book combining Wardley Mapping and Promise Theory to evaluate the future of event streaming and how it will become a more economic choice for users. James argues that reducing the cost of integration does not deter people from buying but instead encourages creativity to find more uses for integration. He stresses the importance of user experience and how knowing what users are going through helps mend products and workflows, which improves systems that bring economic value. The two then go into explanations around the Promise Theory, Jevons Paradox, and Geoffrey Moore's Core vs. Context Theory. EPISODE LINKSPromise Theory: Principles and ApplicationsJoin the Confluent Community SlackLearn about Apache Kafka® at Confluent Developer
4/15/2020 · 55 minutes, 2 seconds

How to Run Kafka Streams on Kubernetes ft. Viktor Gamov

There’s something about YAML and the word “Docker” that doesn’t quite sit well with Viktor Gamov (Developer Advocate, Confluent). But Kafka Streams on Kubernetes is a phrase that does. Kubernetes is an open source platform that allows teams to deploy, manage, and automate containerized services and workloads. Running Kafka Streams on Kubernetes simplifies operations and gets your environment allocated faster. Viktor describes what that process looks like and how Jib helps build, test, and deploy Kafka Streams applications on Kubernetes for an improved DevOps experience. He also shares some exciting projects he’s currently working on. EPISODE LINKSInstalling Apache Kafka® with Ansible ft. Viktor Gamov and Justin ManchesterContainerized Apache Kafka on KubernetesKubernetes 101 | Confluent Operator (1/3)Installation | Confluent Operator (2/3)Confluent Operator vs. Open Source Helm Charts (3/3)Streams Must Flow: Developing Fault-Tolerant Stream Processing Applications with Kafka Streams and KubernetesKafka TutorialsJoin the Confluent Community SlackLearn about Kafka at Confluent Developer
4/6/2020 · 41 minutes, 49 seconds

Cloud Marketplace Considerations with Dan Rosanova

As the fundamental data abstractions used by developers have changed over time, event streams are now the present and the future. Coming from decades of experience in messaging, Dan Rosanova (Senior Group Product Manager for Confluent Cloud, Confluent) discusses the pros and cons of cloud event streaming services on Google Cloud Platform (GCP), Microsoft Azure, and Confluent Cloud. He also compares major stream processing and messaging services: Cloud Pub/Sub vs. Azure Event Hubs vs. Confluent Cloud, and outlines major differences among them. Also on the table in today’s episode are cloud lock-in, the anxieties around it, and where cloud marketplaces are headed.EPISODE LINKSDon’t Get Locked Up in Avoiding Lock-InJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
3/30/2020 · 33 minutes, 31 seconds

Explore, Expand, and Extract with 3X Thinking ft. Kent Beck

As a programmer, Kent Beck chats about various topics of broad interest to developers, including some of his books: “Extreme Programming Explained: Embrace Change,” “Test-Driven Development: By Example,” and “Implementation Patterns.” He wrote “Implementation Patterns” to highlight the positive habits a developer should form in order to write accessible code. He also shares what it’s like to experiment with new ideas and implement them, especially when others doubt what you're trying to achieve. This relates to the concept behind the explore-to-expand transition and a short piece he wrote titled "Idea to Impact." Finally, Tim and Kent talk through the difference between refactoring and tidying, Kent's involvement with agile software and test-driven development, and what exactly test-commit-revert is. And yes, they talk a little bit about event streaming too!EPISODE LINKSExtreme Programming Explained: Embrace ChangeTest-Driven Development: By ExampleSmalltalk Best Practice PatternsImplementation PatternsOh, the Methods You’ll Compose (inspired by “Implementation Patterns”)Idea to ImpactFast/Slow in 3X: Explore/Expand/ExtractRefactoring: Improving the Design of Existing CodeNominalism and RealismRobert Malthus, W.V.O. Quine, and ThanosJoin the Confluent Community Slack
3/25/2020 · 54 minutes, 45 seconds

Ask Confluent #17: The “What is Apache Kafka?” Episode ft. Tim Berglund

Ask Confluent is back! From questions on Apache Kafka®, data integration, and log aggregation, to potential interview questions that Tim would ask if he were to interview himself, anything goes. If you're already a Kafka expert (or any type of expert), think about becoming a speaker. Gwen and Tim talk through how to submit a proposal and get accepted to conferences. As experienced conference goers, they explain that what makes a successful talk is making sure you present for the attendee instead of making it about yourself. In essence, what can your idea or code do to help someone else? From there, the pair chat about the secret for a long marriage, REST Proxy and where it exists in Confluent Operator, how Kafka relates to Splunk when aggregating logs, and whether Tim can start making some use case based video content so that people can better understand Kafka and how it works. For those who have just started integrating Kafka, Tim and Gwen also provide some pointers about how to go about understanding it. EPISODE LINKSConfluent REST Proxy DocumentationWhat is Apache Kafka?Splunk Connect for Kafka – Connecting Apache Kafka with SplunkWatch the video version of this podcastJoin the Confluent Community Slack
3/24/2020 · 25 minutes, 35 seconds

Domain-Driven Design and Apache Kafka with Paul Rayner

Domain-driven design (DDD) is helpful for managing complex processes and rules—especially those between business experts and developers/users—and turning them into models. CEO of Virtual Genius Paul Rayner describes how the vast tooling in DDD enables developers to focus on the coding that really matters and makes systems more collaborative, taking into account three primary considerations: (1) how to get better at collaborating, (2) strategic design and understanding why design really matters, and (3) modeling codes. He also touches on bounded context, microservices, event storming, event sourcing, and the relationship between Apache Kafka® and DDD.  EPISODE LINKSWhat is Domain-Driven Design?Microservices, Apache Kafka, and Domain-Driven DesignTurning the Database Inside Out with Apache SamzaLet’s Build “eBay” by “Turning the Database Inside Out” and Using ServerlessDesigning Event-Driven SystemsDesign Patterns: Elements of Reusable Object-Oriented SoftwareThe Event Storming HandbookRefactoring: Improving the Design of Existing CodeExplore DDD ConferenceJoin the Confluent Community Slack
3/18/2020 · 50 minutes, 42 seconds

Machine Learning with TensorFlow and Apache Kafka ft. Chris Mattmann

TensorFlow is an open source machine learning platform that can be used with Apache Kafka® for deep learning. Chris Mattmann, author of Machine Learning with TensorFlow, introduces us to TensorFlow as a Google technology that teaches computers how to think and make connections like humans do. For example, when there is a signifier that the mind processes, out comes a label to the object in front of you. TensorFlow is Google's version of wrangling various technologies to help group them together and work smoothly as large amounts of data flow through. Chris also breaks down neural networks, how technology simulates cerebral processes that take place when our visual cortex receives a new image, plus a use case that involves Apache Kafka and event streaming to achieve TensorFlow's goals.EPISODE LINKSAsk Confluent #13: Machine Learning with Kai Waehner Join the Confluent Community SlackGet 40% off Machine Learning with TensorFlow using the code podcon19
3/11/2020 · 53 minutes, 6 seconds

Distributed Systems Engineering with Apache Kafka ft. Gwen Shapira

As an engineering leader managing a team, Gwen Shapira talks through the steps she took to get to Confluent and how she got started working with Apache Kafka®. She shares about what it's like being on the Project Management Committee (PMC) for the Apache Software Foundation as well as some of the responsibilities involved, such as choosing Kafka Improvement Proposals (KIPs), monitoring releases, and making contributions to the community. For Gwen, part of finding Kafka was her willingness to take risks, learn all types of code bases, and leave companies for a new technology that showed promise and sparked her interest. Given that not only Kafka itself but also how people learn Kafka has changed, Gwen shares her best tips for approaching the project. There are differences between distributed systems engineers and full stack engineers, and for anyone who wants to work at a company like Confluent, it’s important to showcase design and architecture knowledge, a knack for solving problems, and confidence in your ideas. EPISODE LINKSApache Governance GuidelinesTim’s GitHub GIFsApply to be an infrastructure engineer Join the Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20Audio
3/4/2020 · 48 minutes, 26 seconds

Towards Successful Apache Kafka Implementations ft. Jakub Korab

Whether it's stream processing, real-time data analytics, or adding business value, Professional Services helps customers thrive within their chosen software or products and ultimately be successful as a digital enterprise. As a solutions architect and member of the Professional Services Team at Confluent, Jakub Korab discusses what Professional Services actually is and how it relates to customer success. It all centers around what customers want to do, and you’ll hear about trends, Apache Kafka® use cases, and real-life examples of Professional Services in action within various industries over the last year.EPISODE LINKSUnderstanding Message Brokers by Jakub KorabApache Camel Developer's Cookbook by Jakub KorabJoin our teamLearn more about Professional ServicesJoin the Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20Audio
2/26/2020 · 55 minutes, 3 seconds

Knative 101: Kubernetes and Serverless Explained with Jacques Chester

What is Knative and how does it simplify Kubernetes-related processes through seamless extension? Jacques Chester (Software Engineer, VMware) is publishing a book called “Knative in Action” that walks through the problems Knative is trying to solve. You don’t need to be an expert to fully understand Knative, so start getting hands on and see what you can do with it! You also don't need to be an expert on Kubernetes to read the book, but some experience with the tool can help you get it working with your software more quickly. This episode will help you understand the relationship between Knative and serverless and simplify your Kubernetes cluster.EPISODE LINKSLearn more about KnativeFactory Physics by Hopp and SpearmanBusiness Dynamics: Systems Thinking and Modeling for a Complex World  by John D. StermanMatt Stine's tweetJoin the Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20AudioGet 40% off Knative in Action with the code podcon19
2/19/2020 · 47 minutes, 13 seconds

Paving a Data Highway with Kafka Connect ft. Liz Bennett

The Stitch Fix team benefits from a centralized data integration platform at scale using Apache Kafka and Kafka Connect. Liz Bennett (Software Engineer, Confluent) got to play a key role building their real-time data streaming infrastructure. Liz explains how she implemented Apache Kafka® at Stitch Fix, her previous employer, where she successfully introduced Kafka first through a Kafka hackathon and then by pitching it to the management team. Her first piece of advice? Give it a cool name like The Data Highway. As part of the process, she prepared a detailed document proposing a Kafka roadmap, which eventually landed her in a meeting with management on how they would successfully integrate the product (spoiler: it worked!). If you’re curious about the pros and cons of Kafka Connect, the self-service aspect, how it does with scaling, metrics, helping data scientists, and more, this is your episode! You’ll also get to hear what Liz thinks her biggest win with Kafka has been.EPISODE LINKSPutting the Power of Apache Kafka into the Hands of Data ScientistsJoin the Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20Audio
2/12/2020 · 46 minutes, 1 second

Distributed Systems Engineering with Apache Kafka ft. Jun Rao

Jun Rao (Co-founder, Confluent) explains what relational databases and distributed databases are, how they work, and major differences between the two. He also delves into important lessons he’s learned along the way through the transition from the relational world to the distributed world. To be successful at a place like Confluent, he outlines three fundamental traits that a distributed systems engineer must possess, emphasizing the importance of curiosity and knowledge, care in code development, and being open-minded and collaborative. You may even find that sometimes, the people with the best answers to your problems aren't even at your company! Originally from China, Jun moved to the U.S. for his Ph.D. and eventually landed in IBM research labs. He worked there for over 10 years before moving to LinkedIn, where Apache Kafka® was initially being developed and implemented. EPISODE LINKSGet 30% off Kafka Summit London registration with the code KSL20AudioJoin the Confluent Community Slack
2/5/2020 · 54 minutes, 59 seconds

How to Write a Successful Conference Abstract | Streaming Audio Special

Learn how to write an abstract for conference submissions and call for papers with tips from Tim Berglund, chair of the Kafka Summit Program Committee. Whether you're giving a talk for the very first time or you consider yourself to be an experienced speaker, these guidelines will help you craft a strong story that stands out from the others.EPISODE LINKSJoin #summit-office-hours on the Confluent Community SlackSign up to speak at a meetupWatch the video version of this podcastGet 30% off Kafka Summit London registration with the code KSL20Audio
2/4/2020 · 7 minutes, 40 seconds

Streaming Call of Duty at Activision with Apache Kafka ft. Yaroslav Tkachenko

Call of Duty: Modern Warfare is the most played Call of Duty multiplayer of this console generation with over $1 billion in sales and almost 300 million multiplayer matches. Behind the scenes, Yaroslav Tkachenko (Software Engineer and Architect, Activision) gets to be on the team behind it all, architecting, designing, and implementing their next-generation event streaming platform, including a large-scale, near-real-time streaming data pipeline using Kafka Streams and Kafka Connect. Learn about how his team ingests huge amounts of data, what the backend of their massive distributed system looks like, and the automated services involved in collecting data from each pipeline. EPISODE LINKSBuilding a Scalable and Extendable Data Pipeline for Call of Duty GamesDeploying Kafka Connect ConnectorsJoin the Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20Audio
1/27/2020 · 46 minutes, 43 seconds

Confluent Platform 5.4 | What's New in This Release + Updates

A quick summary of new features, updates, and improvements in Confluent Platform 5.4, including Role-Based Access Control (RBAC), Structured Audit Logs, Multi-Region Clusters, Confluent Control Center enhancements, Schema Validation, and the preview for Tiered Storage. This release also includes pull queries and embedded connectors in preview as part of KSQL.EPISODE LINKSConfluent Platform 5.4 Release Notes Introducing Confluent Platform 5.4Download Confluent Platform 5.4Watch the video version of this podcastJoin us in Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20Audio
1/22/202014 minutes, 26 seconds
Episode Artwork

Making Apache Kafka Connectors for the Cloud ft. Magesh Nandakumar

From previously focusing on Confluent Schema Registry to now making connectors for Confluent Cloud, Magesh Nandakumar (Software Engineer, Confluent) discusses what connectors do, how they simplify data integrations, and how they enable sophisticated customer use cases. Connectors built for Confluent Cloud on Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS) make it easy for users to plug Apache Kafka® into their existing systems. There’s a lot that Magesh is looking forward to when the world of connectors and the world of cloud collide.EPISODE LINKSWhy Kafka Connect? ft. Robin MoffattJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.Get 30% off Kafka Summit London registration with the code KSL20Audio
1/13/202025 minutes, 19 seconds
Episode Artwork

Location Data and Geofencing with Apache Kafka ft. Guido Schmutz

One way to put Apache Kafka into action is through geofencing and tracking the location data of objects, barges, and cars in real time. Guido Schmutz (Principal Consultant, Trivadis) shares about one such use case involving a German steel company and the development project he worked on for them, which he featured in a talk at Berlin Buzzwords. EPISODE LINKSLocation Analytics – Real-Time Geofencing Using Kafka (Video) Location Analytics – Real-Time Geofencing Using Kafka (Slides) Join the Confluent Community SlackGet 30% off Kafka Summit London registration with the code KSL20Audio
1/8/202048 minutes, 20 seconds
Episode Artwork

Multi-Cloud Monitoring and Observability with the Metrics API ft. Dustin Cote

The role of monitoring hosted services is evolving, but the ability to let go of the details to get what you are paying for with SaaS has always been there. Dustin Cote (Product Manager for Observability, Confluent Cloud) talks about Apache Kafka® made serverless and how, beyond just the brokers, Confluent Cloud focuses on fitting into customer systems rather than building monitoring silos. When it comes to monitoring, logging, tracing, and alerting, Dustin defines what they all mean and how they operate in a database before diving into the requirements needed for a properly observable cloud or on-prem service to exist. EPISODE LINKSConfluent Cloud Metrics API documentationJoin #confluent-cloud on the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
12/30/201942 minutes, 19 seconds
Episode Artwork

Apache Kafka and Apache Druid – The Perfect Pair ft. Rachel Pedreschi

As the head of global field engineering and community at Imply, Rachel Pedreschi is passionate about engaging both externally with customers and internally with departments all across the board, from sales to engineering. Rachel’s involvement in the open source community focuses primarily on Apache Druid, a real-time, high-performance datastore that provides fast, sub-second analytics and complements another powerful open source project as well: Apache Kafka®. Together, Kafka and Druid provide real-time event streaming and high-performance streaming analytics with powerful visualizations.EPISODE LINKSHow To Use Kafka and Druid to Tame Your Router DataETL and Event Streaming Explained ft. Stewart BrysonWho is Abraham Wald?How Not to Be Wrong: The Power of Mathematical ThinkingJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
12/23/201950 minutes, 12 seconds
Episode Artwork

Apache Kafka 2.4 – Overview of Latest Features, Updates, and KIPs

Apache Kafka 2.4 includes new Kafka Core developments and improvements to Kafka Streams and Kafka Connect, including MirrorMaker 2.0, RocksDB metrics, and more.EPISODE LINKSRead about what's new in Apache Kafka 2.4Check out the Apache Kafka 2.4 release notesWatch the video version of this podcast
12/16/201915 minutes, 4 seconds
Episode Artwork

Cloud-Native Patterns with Cornelia Davis

Developing cloud-based applications requires unique patterns and practices that make them suitable for modern cloud platforms. Host Tim Berglund catches up with Cornelia Davis, author of Cloud-Native Patterns and VP of Technology at Pivotal, on what cloud-native patterns are, the example code she created, her latest book, and how she wrote the book for the customers she interacts with on a daily basis. EPISODE LINKSGet 40% off Cloud Native Patterns with the code podcon19Join the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
12/16/201953 minutes, 12 seconds
Episode Artwork

Ask Confluent #16: ksqlDB Edition

Vinoth Chandar has led various infrastructure projects at Uber and is one of the main drivers behind the ksqlDB project. In this episode hosted by Gwen Shapira (Engineering Manager, Cloud-Native Apache Kafka®), Vinoth and Gwen discuss what ksqlDB is, the kinds of applications that you can build with it, vulnerabilities, and various ksqlDB use cases. They also talk about which Apache Kafka version currently offers the best performance improvements without breaking changes to existing Kafka configuration and functionality. EPISODE LINKSRead about ksqlDB on the blogLearn more about ksqlDBksqlDB Demo | The Event Streaming Database in ActionFollow ksqlDB on TwitterWhat’s New in Apache Kafka 2.3What is Apache Kafka? Watch the video version of this podcastJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
12/12/201930 minutes, 11 seconds
Episode Artwork

Machine Learning with Kafka Streams, Kafka Connect, and ksqlDB ft. Kai Waehner

In this episode, Kai Waehner (Senior Systems Engineer, Confluent) defines machine learning in depth, describes the architecture of his dream machine learning pipeline, shares about its relevance to Apache Kafka®, Kafka Connect, ksqlDB, and the related ecosystem, and discusses the importance of security and fraud detection. He also covers Kafka use cases, including an example of how Kafka Streams and TensorFlow provide predictive analytics for connected cars.EPISODE LINKSHow to Build and Deploy Scalable Machine Learning in Production with Apache KafkaLearn about Apache KafkaLearn about Kafka ConnectLearn about ksqlDB, the successor to KSQLJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
12/4/201938 minutes, 30 seconds
Episode Artwork

Real-Time Payments with Clojure and Apache Kafka ft. Bobby Calderwood

Streamlining banking technology to help smaller banks and credit unions thrive among financial giants is top of mind for Bobby Calderwood (Founder, Evident Systems), who started out in programming, transitioned to banking, and recently launched Evident Real-Time Payments. Payments leverages Confluent Cloud to help banks of all sizes move from a traditionally batch-oriented, bankers’-hours operational mode to real-time banking services. This is achieved through Apache Kafka® and the Kafka Streams and Kafka Connect APIs with Clojure using functional programming paradigms like transducers. Bobby also shares about his efforts to help financial services companies build their next-generation platforms on top of streaming events, including interesting use cases, addressing hard problems that come up in payments, and identifying solutions that make event streaming technology easy to use within established banking structures. EPISODE LINKSToward a Functional Programming Analogy for MicroservicesEvent Modeling: Designing Modern Information SystemsFinovate Fall/Evident SystemsThe REPL Podcast: 30: Bobby Calderwood on Kafka and FintechClojure TransducersRich Hickey’s TwitterDavid Nolen's TwitterStuart Halloway’s TwitterChris Redinger’s TwitterTim Ewald’s LinkedInJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
11/27/201958 minutes
Episode Artwork

Announcing ksqlDB ft. Jay Kreps

Jay Kreps (Co-creator of Apache Kafka® and CEO, Confluent) introduces ksqlDB, an event streaming database. As the successor to KSQL, ksqlDB seeks to unify the multiple systems involved in stream processing into a single, easy-to-use solution for building event streaming applications.ksqlDB offers support for running connectors in an embedded mode, in addition to support for both push and pull queries. Push queries allow you to subscribe to changing query results as new events occur, while pull queries allow you to look up a particular value at a single point in time. To use a ride-sharing app as an example, there is both a continuous feed of the current position of the driver (a push query) and the ability to look up current values such as the price of the ride (a pull query). Databases are still effective in their own realms, and ksqlDB is not intended as a replacement. Rather, ksqlDB enables you to build event streaming applications with the same ease and familiarity of building traditional applications on a relational database. It simplifies the underlying architecture for these applications so you can build powerful, real-time systems with just a few SQL statements.EPISODE LINKSLearn about ksqlDB on the blogWatch the demo to see ksqlDB in actionGet started with ksqlDBFollow ksqlDB on TwitterWhy Kafka Connect? ft. Robin MoffattContributing to Open Source with the Kafka Connect MongoDB Sink ft. Hans-Peter GrahslConnecting to Apache Kafka with Neo4jJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
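To make the push/pull distinction concrete, here is a minimal sketch using the ksqlDB Java client; the host, port, and the riderLocations/currentRides relations are hypothetical and chosen only to mirror the ride-sharing example above.

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;
import io.confluent.ksql.api.client.Row;
import io.confluent.ksql.api.client.StreamedQueryResult;

import java.util.List;

public class PushVsPullSketch {
  public static void main(String[] args) throws Exception {
    // Connect to a ksqlDB server; host and port are placeholders.
    ClientOptions options = ClientOptions.create().setHost("localhost").setPort(8088);
    Client client = Client.create(options);

    // Push query: EMIT CHANGES keeps the query running and streams new rows as they arrive,
    // like the continuous feed of a driver's position.
    StreamedQueryResult positions = client
        .streamQuery("SELECT driverId, latitude, longitude FROM riderLocations EMIT CHANGES;")
        .get();
    Row nextPosition = positions.poll(); // blocks until the next row is available
    System.out.println(nextPosition);

    // Pull query: returns the current value for a key once and terminates,
    // like looking up the price of a ride right now.
    List<Row> price = client
        .executeQuery("SELECT ride_price FROM currentRides WHERE rideId = 'ride-42';")
        .get();
    System.out.println(price);

    client.close();
  }
}
```

The push query subscribes to an unbounded stream of results, while the pull query behaves like a classic key lookup and completes immediately.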
11/20/201926 minutes, 57 seconds
Episode Artwork

Installing Apache Kafka with Ansible ft. Viktor Gamov and Justin Manchester

“It’s one thing to get a distributed system up and running. It’s another thing to get a distributed system up and running well.” Ansible keeps your Apache Kafka® deployment, management, and installation consistent, and it enables you to implement best practices that make it easy to get started. Justin Manchester (Platform DevOps Engineer, Confluent) and Viktor Gamov (Developer Advocate, Confluent) discuss the problems that Ansible is trying to solve, enabling collaboration and optimizing all components for top performance.EPISODE LINKSLearn more about AnsibleFollow Viktor Gamov on TwitterFollow Justin Manchester on TwitterThe Easiest Way to Install Apache Kafka and Confluent Platform – Using AnsibleJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
11/18/201946 minutes, 6 seconds
Episode Artwork

Securing the Cloud with VPC Peering ft. Daniel LaMotte

Everything is moving to the cloud, which makes it increasingly important to secure your cloud infrastructure and minimize the threat of potential attackers. With a virtual private cloud (VPC)—your own private network in the cloud that you can launch your own instances into—this can be done with VPC Peering, connecting VPCs together to create a path between them to keep your data safe and accessible to you alone. Although typically performed within a single cloud provider, it is possible to do across more than one—think of it as your cloud router. Daniel LaMotte (Site Reliability Engineer, Confluent) walks through the details of cloud networking and VPC peering: what it is, what it does, and how to launch a VPC in the cloud, plus the difference between AWS PrivateLink and AWS Transit Gateway, CIDR, and its accessibility across cloud providers.  EPISODE LINKSVPC Peering in Confluent CloudJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
11/13/201931 minutes, 56 seconds
Episode Artwork

ETL and Event Streaming Explained ft. Stewart Bryson

Migrating from traditional ETL tools to an event streaming platform is a process that Stewart Bryson (CEO and founder, Red Pill Analytics) is no stranger to. In this episode, he dispels misconceptions around what “streaming ETL” means, and explains why event streaming and event-driven architectures compel us to rethink old approaches: not all data is corporate data anymore, not all data is relational data anymore, and the cost of storing data is now negligible. Supporting modern, distributed event streaming platforms and the shift of focus from on-premises to the cloud introduce new use cases that focus primarily on building new systems and rebuilding existing ones. From Kafka Connect and stack applications to the importance of tables, events, and logs, Stewart also discusses Gradle and how it’s being used at Red Pill Analytics. EPISODE LINKSDeploying Kafka Streams and KSQL with Gradle – Part 1: Overview and MotivationDeploying Kafka Streams and KSQL with Gradle – Part 2: Managing KSQL ImplementationsDeploying Kafka Streams and KSQL with Gradle – Part 3: KSQL User-Defined Functions and Kafka Streams Join the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
11/6/201949 minutes, 42 seconds
Episode Artwork

The Pro’s Guide to Fully Managed Apache Kafka Services ft. Ricardo Ferreira

Several definitions of a fully managed Apache Kafka® service have floated around, but Ricardo Ferreira (Developer Advocate, Confluent) breaks down what it truly means and why every developer should care. Addressing a handful of questions around Apache Kafka®, Confluent Cloud, hosted solutions, and how they all work, Ricardo describes the benefits of using a fully managed service as a means of simplifying the lives of developers and letting them get back to building—which is why they started out as developers in the first place! EPISODE LINKSThe Rise of Managed Services for Apache KafkaExcerpt from The Beginner's Guide to Mathematica, Version 4Jay Kreps’ keynote at Kafka Summit SF 2019Neha Narkhede’s keynote at Kafka Summit London 2019Demos by Ricardo FerreiraJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
11/4/201956 minutes, 28 seconds
Episode Artwork

Kafka Screams: The Scariest JIRAs and How To Survive Them ft. Anna McDonald

In today's spooktacular episode of Streaming Audio, Anna McDonald (Technical Account Manager, Confluent) discusses six of the scariest Apache Kafka® JIRAs. Starting with KAFKA-6431: Lock Contention in Purgatory, Anna breaks down what purgatory is and how it’s not something to fear or avoid. Next, she dives into KAFKA-8522: Tombstones Can Survive Forever, where she explains tombstones, compacted topics, null values, and log compaction. Not to mention there’s KAFKA-6880: Zombie Replicas Must Be Fenced, which sounds like the spookiest of them all. KAFKA-8233, which focuses on the new TestTopology mummy (wrapper) class, provides one option for setting the topology through your Kafka Screams Streams application. As Anna puts it, "This opens doors for people to build better, more resilient, and more interesting topologies." To close out the episode, Anna talks about two more JIRAs: KAFKA-6738, which focuses on the Kafka Connect dead letter queue as a means of handling bad data, and the terrifying KAFKA-5925 on the addition of an executioner API. EPISODE LINKSKAFKA-6431: Lock Contention in PurgatoryKAFKA-8522: Tombstones Can Survive ForeverKAFKA-6880: Zombie Replicas Must Be FencedKAFKA-8233: Helper Classes to Make it Simpler to Write Test Logic with TopologyTestDriver KAFKA-6738: Kafka Connect Handling of Bad DataKAFKA-5925: Adding Records Deletion Operation to the New Admin Client APIStreaming Apps and Poison Pills: Handle the Unexpected with Kafka StreamsData Modeling for Apache Kafka – Streams, Topics & More with Dani TraphagenDistributed Systems Engineering with Apache Kafka ft. Jason GustafsonKafka Streams Topology VisualizerFollow Anna McDonald on TwitterFollow Mitch Henderson on TwitterJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
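For the dead letter queue mentioned under KAFKA-6738, sink connectors expose the errors.* properties introduced by KIP-298. A rough sketch of what such a configuration might look like; the connector class, topics, and DLQ topic name are placeholders chosen purely for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class DeadLetterQueueConfigSketch {
  // Builds the kind of sink connector configuration you would submit to the
  // Connect REST API; connector class, topics, and DLQ name are placeholders.
  public static Map<String, String> sinkWithDlq() {
    Map<String, String> config = new HashMap<>();
    config.put("connector.class", "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector");
    config.put("topics", "orders");

    // Error-handling options introduced by KIP-298:
    config.put("errors.tolerance", "all");                               // keep running on bad records
    config.put("errors.deadletterqueue.topic.name", "dlq-orders");       // route failed records here
    config.put("errors.deadletterqueue.context.headers.enable", "true"); // attach failure context as headers
    config.put("errors.log.enable", "true");                             // also log each failure
    return config;
  }
}
```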
10/30/201946 minutes, 32 seconds
Episode Artwork

Data Integration with Apache Kafka and Attunity

From change data capture (CDC) to business development, connecting Apache Kafka® environments, and customer success stories, Graham Hainbach discusses the possibilities of data integration with Kafka and Attunity using Replicate, Compose, and Enterprise Manager. He also shares real-life examples of how Attunity best leverages Kafka in their systems.EPISODE LINKSApache Kafka Transaction Data Streaming for DummiesJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
10/28/201943 minutes, 49 seconds
Episode Artwork

Distributed Systems Engineering with Apache Kafka ft. Colin McCabe

Colin McCabe shares about what it’s like being a distributed systems engineer on the Core Kafka team at Confluent, where he has worked previously, and how that led to his interest in Apache Kafka®. As an active member of the Apache open source community, he describes it as a place that both welcomes newcomers and fosters the different ideas that help make the product the best it can be for everyone.Being a distributed systems engineer versus a full stack engineer comes with its own unique challenges. Colin offers some advice for those interested in working with Kafka and what the interview process is like at Confluent. It’s not all about what you know, but rather how you collaborate and contribute to the team, and how you get to the answer. Part of finding the answer is getting involved with Apache projects themselves by engaging with others and helping with bug fixes as much as possible, because it’ll help you gain a better grasp on a technology that is ever-changing.EPISODE LINKSKIP-500: Apache Kafka Without ZooKeeper ft. Colin McCabe and Jason GustafsonJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
10/23/201945 minutes, 41 seconds
Episode Artwork

Apache Kafka on Kubernetes, Microsoft Azure, and ZooKeeper with Lena Hall

Lena Hall joins Tim Berglund in the studio to talk about Apache Kafka®, the various ways to run Kafka on Microsoft Azure, Kafka on Kubernetes (K8s), and some exciting events that are happening in the Kafka world. Lena shares about serving double duty as both a senior software engineer and senior cloud developer advocate for Azure Engineering, including her unique roles and responsibilities, and how she balances engineering with advocacy. From writing tech articles to her experience with fuzzing and presence on YouTube, Lena is a strong community supporter and believes in the importance of staying rooted in the world of code as an advocate, because it helps you better understand common challenges and gives you insight as an engineer trying to fix them. It’s important to ask what's good about it and how it can be improved.They also discuss Kubernetes, the benefits of running Kafka on Kubernetes, why it’s popular, and using systems that can integrate with it. With Confluent Operator, it’s faster to spin up new environments, as well as easier to support a larger number of clusters in addition to scaling and configuration changes. As the Kafka ecosystem continues to grow and progress, one of the most notable updates of all is KIP-500 involving ZooKeeper. EPISODE LINKSApache Kafka on Kubernetes – Could You? Should You?Ask Confluent #1: Kubernetes, Confluent Operator, Kafka and KSQLFuzzing, Confluent Operator, StatefulSets, and Azure Kubernetes Service (AKS)Distributed Datastores on Kubernetes (on StatefulSets) at GOTO ChicagoRunning a Distributed Database on Kubernetes on AzureKIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum | PodcastProject Springfield, the Microsoft Project that Lena worked on in regard to KIP-500What is Apache Kafka in Azure HDInsightUse Azure Event Hubs from Apache Kafka applicationsApache Kafka based event streaming platform optimized for Azure StackConfluent Platform on Azure MarketplaceEventually Perfect Distributed Systems: Blog | O'Reilly Velocity KeynoteJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
10/16/201946 minutes, 8 seconds
Episode Artwork

Improving Fairness Through Connection Throttling in the Cloud with KIP-402 ft. Gwen Shapira

The focus of KIP-402 is to improve fairness in how Apache Kafka® processes connections and how network threads pick up requests and new data. Gwen Shapira (Engineering Manager for Cloud-Native Kafka, Confluent) outlines the details of this KIP and her team’s efforts to make user-facing Kafka improvements. Halfway through the episode, Gwen shares how to send metadata and produce client messages. EPISODE LINKSKIP-402: Improve fairness in SocketServer processorsJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
10/9/201948 minutes, 37 seconds
Episode Artwork

Data Modeling for Apache Kafka – Streams, Topics & More with Dani Traphagen

Helping users be successful when it comes to using Apache Kafka® is a large part of Dani Traphagen’s role as a senior systems engineer at Confluent. Whether she’s advising companies on implementing parts of Kafka or rebuilding their systems entirely from the ground up, Dani is passionate about event-driven architecture and the way streaming data provides real-time insights on business activity. She explains the concept of a stream, topic, key, and stream-table duality, and how each of these pieces relate to one another. When it comes to data modeling, Dani covers important business requirements, including the need for a domain model, practicing domain-driven design principles, and bounded context. She also discusses the attributes of data modeling: time, source, key, header, metadata, and payload, in addition to exploring the significance of data governance and lineage and performing joins.EPISODE LINKSConvert from table to stream and stream to table Distributed, Real-Time Joins and Aggregations on User Activity Events Using Kafka StreamsKSQL in Action: Real-Time Streaming ETL from Oracle Transactional DataKSQL in Action: Enriching CSV Events with Data from RDBMS into AWSJourney to Event Driven – Part 4: Four Pillars of Event Streaming MicroservicesJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
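As one way to picture the stream-table duality Dani describes, the same topic can be read either as a stream of facts or as a table of the latest value per key. A minimal Kafka Streams sketch, with the topic name and serdes as assumptions:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class StreamTableDualitySketch {
  // Stream view: every address-change event is an independent fact in the log.
  static KStream<String, String> asStream(StreamsBuilder builder) {
    return builder.stream("customer-addresses", Consumed.with(Serdes.String(), Serdes.String()));
  }

  // Table view: only the latest address per customer key is retained.
  static KTable<String, String> asTable(StreamsBuilder builder) {
    return builder.table("customer-addresses", Consumed.with(Serdes.String(), Serdes.String()));
  }

  public static void main(String[] args) {
    // Two builders are used because a single topology registers each topic as a source only once.
    KStream<String, String> changeLog = asStream(new StreamsBuilder());
    KTable<String, String> currentState = asTable(new StreamsBuilder());
  }
}
```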
10/7/201940 minutes, 25 seconds
Episode Artwork

MySQL, Cassandra, BigQuery, and Streaming Analytics with Joy Gao

Joy Gao chats with Tim Berglund about all things related to streaming ETL—how it works, its benefits, and the implementation and operational challenges involved. She describes the streaming ETL architecture at WePay from MySQL/Cassandra to BigQuery using Apache Kafka®, Kafka Connect, and Debezium.  EPISODE LINKSCassandra Source Connector DocumentationStreaming Databases in Real Time with MySQL, Debezium, and KafkaStreaming Cassandra at WePayChange Data Capture with Debezium ft. Gunnar MorlingJoin the Confluent Community SlackFully managed Apache Kafka as a service! Try free.
10/2/201943 minutes, 59 seconds
Episode Artwork

Scaling Apache Kafka with Todd Palino

Todd Palino, a senior SRE at LinkedIn, talks about the start of Apache Kafka® at LinkedIn, what learning to use Kafka was like, how Kafka has changed, and what he and others in the community hope for in the future of Kafka. If you’re curious about life as an SRE, Todd shares the details on that too, and goes into how Kafka is used at LinkedIn, as well as several wins and challenges over the years with the product. EPISODE LINKSKafka: The Definitive Guide by Neha Narkhede, Gwen Shapira & Todd PalinoURP? Excuse You! The Three Metrics You Have to Know Join the Confluent Community Slack
9/25/201946 minutes, 3 seconds
Episode Artwork

Understand What’s Flying Above You with Kafka Streams ft. Neil Buesing

Neil Buesing (Director of Real-Time Data, Object Partners) discusses what a day in his life looks like and how Kafka Streams helps analyze flight data.EPISODE LINKSUsing Location Data to Showcase Keys, Windows, and Joins in Kafka Streams DSL and KSQLKafka: The Definitive Guide by Neha Narkhede, Gwen Shapira & Todd PalinoRead the Confluent blogJoin the Confluent Community Slack
9/23/201913 minutes
Episode Artwork

KIP-500: Apache Kafka Without ZooKeeper ft. Colin McCabe and Jason Gustafson

Tim Berglund sits down with Colin McCabe and Jason Gustafson to talk about KIP-500. The pair, who work on the Kafka Core Engineering Team, discuss the history of Kafka, the creation of KIP-500, and what it will do for the community as a whole. They break down ZooKeeper's role in Kafka, the implications of removing ZooKeeper dependency, replacing it with a self-managed metadata quorum, and how they've been combatting security, stability, and compatibility issues. With pending improvements towards scalability and inter-broker communication, and now that KIP-500 has been adopted within the community—there's a lot covered in this episode that you won't want to miss!EPISODE LINKSKIP-500: Replace ZooKeeper with a Self-Managed Metadata QuorumKIP-497: Add inter-broker API to alter ISRZooKeeper Atomic BroadcastThe Atomic Broadcast ProblemRAFT Animated GuideRAFT (Computer Science)Join the Confluent Community Slack
9/18/201943 minutes, 46 seconds
Episode Artwork

Should You Run Apache Kafka on Kubernetes? ft. Balthazar Rouberol

When it comes to deploying applications at scale without needing to integrate different pieces of infrastructure yourself, the answer nowadays is increasingly Kubernetes. Kubernetes provides all the building blocks that are needed, and a lot of thought is required to truly create an enterprise-grade Apache Kafka® platform that can be used in production. But before running Kafka on Kubernetes, there are some factors to consider. What are the maturing stages of Kubernetes adoption? How did Datadog experience these stages? Balthazar Rouberol shares what to think about before hopping on the Kubernetes hype train.EPISODE LINKSKafka-Kit: Tools for Scaling KafkaRunning Production Kafka Clusters in KubernetesJoin the Confluent Community Slack
9/16/201929 minutes, 38 seconds
Episode Artwork

Jay Kreps on the Last 10 Years of Apache Kafka and Event Streaming

As Confluent turns five years old, special guest Jay Kreps (Co-founder and CEO, Confluent) brings us back to his early development days of coding Apache Kafka® over a Christmas holiday while working at LinkedIn. Kafka has become a breakthrough open source distributed streaming platform based on an abstraction of the distributed commit log, and his involvement in the project eventually led him to start Confluent with Jun Rao and Neha Narkhede. In this episode, Jay shares about all the highs and lows along the way, including some of his favorite customer success stories with companies like Lyft and Euronext, which empower their real-time businesses through event streaming with Confluent Cloud.Starting a company certainly involves more than the technology, and Jay also reflects on some of the challenges around funding, support, and introducing Confluent to the rest of the world. Looking back on the journey from those beginnings to today, Jay offers some wise words for any developer interested in establishing their own startup. EPISODE LINKSLyft on Production-Ready Kafka on KubernetesEuronext Stock Exchange Relies on Confluent for Event-Driven Trading PlatformJoin the Confluent Community SlackGet 30% off Kafka Summit registration using the code audio19 
9/12/201948 minutes, 25 seconds
Episode Artwork

Connecting to Apache Kafka with Neo4j

What’s a graph? How does Cypher work? In today's episode of Streaming Audio, Tim Berglund sits down with Michael Hunger (Lead of Neo4j Labs) and David Allen (Partner Solution Architect, Neo4j) to discuss Neo4j basics and get the scoop on major features introduced in Neo4j 3.4 and 3.5. Among these are geospatial and temporal types, but there’s also more to come in 4.0: a multi-database feature, fine-grained security, and reactive drivers/Spring Data Neo4j RX. In addition to sharing a little bit about the history of the integration and features in relation to Apache Kafka®, they also discuss change data capture (CDC), using Neo4j to put graph operations into an event streaming application, and how GraphQL fits in with event streaming and GRANDstack. The goal is to add graph abilities to help any distributed application become more successful.EPISODE LINKSKafka Connect Neo4j SinkNeo4j Streams Kafka IntegrationExtending the Stream/Table Duality into a Trinity, with Graphs (with Will Lyon)Neo4j Online Developer SummitAnnouncing NODES 2019 Global GraphHackJoin the Confluent Community Slack
9/9/201954 minutes, 29 seconds
Episode Artwork

Ask Confluent #15: Attack of the Zombie Controller

Gwen Shapira (Core Kafka Software Engineer, Confluent) sits down to answer the questions you've had about event streaming, Apache Kafka®, Confluent, and everything in between. This includes creating tables in nested JSON topics, how to balance ordering, latency and reliability, building event-based systems, and how to navigate the tricky endOffsets API. She talks about the hardships of fencing Zombie requests, some of the talks given at previous Kafka Summits, and an important question from Ask Confluent #3. EPISODE LINKSKIP-91: Provide Intuitive User Timeouts in The ProducerKIP-79: ListOffsetRequest/ListOffsetResponse v1 and add timestamp search methods to the new consumerKSQL recipe on creating tables in a nested JSON topicData Wrangling with Apache Kafka and KSQLStruct (Nested Data) | Level Up Your KSQLKafka Summit 2018 Keynote (Experimentation Using Event-Based Systems)Apache Kafka 2.3endOffsets documentationJun Rao, Confluent - Kafka Controller: A Deep DiveConfluent Platform 5.3Kafka Summit 2017 Keynote (Go Against the Flow: Databases and Stream Processing)The Event Streaming Platform ExplainedWatch the video version of this podcastJoin the Confluent Community Slack
9/4/201922 minutes, 27 seconds
Episode Artwork

Helping Healthcare with Apache Kafka and KSQL ft. Ramesh Sringeri

In today’s episode of Streaming Audio, Tim Berglund sits down with Senior Applications Developer of Mobile Solutions Ramesh Sringeri to discuss Apache Kafka®—specifically two Kafka use cases that Children’s Healthcare of Atlanta is working on.First, they discuss achieving near-real-time streams of data to support meaningful intracranial pressure prediction and managing intracranial pressure (ICP) in a timely manner to help the care team achieve better outcomes with traumatic brain injuries.Children’s Healthcare of Atlanta is in the process of building machine learning models for predicting ICP values 30 and 60 minutes in the future. This will help the care team better prepare for handling potential adverse conditions, where elevated ICP values could lead to undesirable outcomes. The Children’s team is using Kafka, KSQL, and Kafka Streams programs to build a pipeline in which they can test their machine learning models.Ramesh also shares about the work they’re doing to mitigate alarm fatigue for care providers. According to him, the current generation of monitoring devices is not equipped to set up multiple alarm conditions, and sometimes a combination of measures needs to cross thresholds to be of concern. Children’s is able to leverage stream processing and KSQL to set up multiple conditions, reducing the number of meaningless alarm conditions that might condition care providers to ignore them.One of the best parts of it all—with Kafka and KSQL, the Children’s team has been able to quickly build data processing pipelines and address business use cases without having to write a lot of code.EPISODE LINKSJoin the Confluent Community SlackFor more, you can check out ksqlDB, the successor to KSQL.
8/28/201952 minutes, 47 seconds
Episode Artwork

Contributing to Open Source with the Kafka Connect MongoDB Sink ft. Hans-Peter Grahsl

Sink and source connectors are important for getting data in and out of Apache Kafka®. Tim Berglund invites Hans-Peter Grahsl (Technical Trainer and Software Engineer, Netconomy Software & Consulting GmbH) to share about his involvement in the Apache Kafka project, spanning from several conference contributions all the way to his open source community sink connector for MongoDB, now part of the official MongoDB Kafka connector code base. Join us in this episode to learn what it’s like to be the only maintainer of a side project that’s been deployed into production by several companies!EPISODE LINKSMongoDB Connector for Apache KafkaGetting Started with the MongoDB Connector for Apache Kafka and MongoDBKafka Connect MongoDB Sink Community ConnectorKafka Connect MongoDB Sink Community Connector (GitHub)Adventures of Lucy the Havapoo Join the Confluent Community Slack
8/21/201950 minutes, 22 seconds
Episode Artwork

Teaching Apache Kafka Online with Stéphane Maarek

Streaming Audio welcomes Stéphane Maarek (CEO, Datacumulus) on the podcast to discuss how he got started hosting online Apache Kafka® tutorials and teaching on Udemy, the challenges he faces as an instructor, his approach to answering hard questions, and the projects he is currently working on.EPISODE LINKSKSQL Training for Hands-On LearningJoin the Confluent Community Slack
8/19/201942 minutes, 22 seconds
Episode Artwork

Connecting Apache Cassandra to Apache Kafka with Jeff Carpenter from DataStax

Whenever you see an Apache Cassandra™ in the wild, you probably also see an Apache Kafka®️. In this episode, Tim Berglund (Senior Director of Developer Experience, Confluent) and Jeff Carpenter (Director of Developer Advocacy, DataStax) discuss the best way to get those systems talking using the DataStax Apache Kafka Connector and build a real-time data pipeline. EPISODE LINKSAbout the DataStax Apache Kafka ConnectorDataStax Academy: DataStax Apache Kafka Connector CourseJoin the Confluent Community Slack
8/12/201947 minutes, 58 seconds
Episode Artwork

Transparent GDPR Encryption with David Jacot

The General Data Protection Regulation (GDPR) has challenged many enterprises to rethink how they deal with customer data. Viktor Gamov chats with David Jacot about a unique approach to inter-broker traffic encryption that he implemented for his customer’s sidecar pattern use case.EPISODE LINKSLearn about IstioLearn about EnvoyLearn about LinkerdHandling GDPR with Apache Kafka®: How to Comply Without Freaking Out? Join the Confluent Community Slack
8/8/201916 minutes, 45 seconds
Episode Artwork

Confluent Platform 5.3 | What's New in This Release

A quick summary of the most important features in Confluent Platform 5.3. We discuss improved Kubernetes and Ansible support, improvements to Confluent Control Center that give you better insight into the data in your cluster, and an important new set of security features—Role-Based Access Control—aimed at making complex deployments more secure.EPISODE LINKSRead the docsRead the blogWatch the video version of this podcast (featuring an actual stream)Download Confluent Platform 5.3Join us in Confluent Community Slack
7/31/201913 minutes, 2 seconds
Episode Artwork

How to Convert Python Batch Jobs into Kafka Streams Applications with Rishi Dhanaraj

Zenreach is a company that makes tools to help retailers use digital marketing more effectively. If that sounds like a problem that only marketing people would be interested in, that’s because you don’t know what they do! There are all kinds of fascinating technology problems to solve by utilizing event streaming platforms to process data at volume. Rishi Dhanaraj, our guest today, worked at Zenreach as an intern, and took on a big pile of Python batch jobs, turning them into some really interesting Kafka Streams code. Listen in as he walks us through how he did it.EPISODE LINKSA Beginner's Perspective on Kafka Streams: Building Real-Time Walkthrough DetectionReal-Time Presence Detection at Scale with Apache Kafka on AWSJoin us in Confluent Community Slack
7/29/201931 minutes, 2 seconds
Episode Artwork

Ask Confluent #14: In Control of Kafka with Dan Norwood

Is Apache Kafka® actually a database? Can you install Confluent Control Center on Google Cloud Platform (GCP)? All this, plus some tips from Dan Norwood, the first user of Kafka Streams.EPISODE LINKSControl Center Docker imageControl Center Docker configurationComplete Streams exampleWatch the video version of this podcastJoin us in Confluent Community Slack
7/22/201923 minutes, 50 seconds
Episode Artwork

Kafka in Action with Dylan Scott

Author Dylan Scott tells all about his upcoming Manning title Kafka in Action, which shares how Apache Kafka® can be used by beginners who are just starting out their own projects and dispels common Hadoop-related myths, as Kafka has grown to become a powerful event streaming platform beyond big data ecosystems alone. To get 40% off Manning products, use the following code: podcon19EPISODE LINKSJoin us in Confluent Community Slack
7/15/201938 minutes, 15 seconds
Episode Artwork

Change Data Capture with Debezium ft. Gunnar Morling

Friends don’t let friends do dual writes! Gunnar Morling (Software Engineer, Red Hat) joins us on the podcast to share a little bit about what Debezium is, how it works, and which databases it supports. In addition to covering the various use cases and benefits from change data capture (CDC) in the context of microservices—touching on the outbox pattern in particular, Gunnar walks us through the advantages of log-based CDC as implemented through Debezium over polling-based approaches, why you’d want to avoid dual writes to multiple resources, and engaging with members from the community to work collaboratively on Debezium.EPISODE LINKSJoin us in Confluent Community Slack
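For a sense of what log-based CDC looks like in practice, here is a rough sketch of a Debezium MySQL source connector configuration; the hostnames, credentials, topics, and table names are placeholders, and exact property names vary across Debezium versions.

```java
import java.util.HashMap;
import java.util.Map;

public class DebeziumMySqlConfigSketch {
  public static Map<String, String> cdcConnector() {
    Map<String, String> config = new HashMap<>();
    config.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
    // Source database coordinates (placeholders).
    config.put("database.hostname", "mysql.example.internal");
    config.put("database.port", "3306");
    config.put("database.user", "debezium");
    config.put("database.password", "secret");
    config.put("database.server.id", "184054");
    // Logical name that prefixes the per-table change-event topics.
    config.put("database.server.name", "inventory");
    // Capture only the tables of interest (newer releases call this table.include.list).
    config.put("table.whitelist", "inventory.orders,inventory.customers");
    // Debezium keeps the database schema history in its own Kafka topic.
    config.put("database.history.kafka.bootstrap.servers", "kafka:9092");
    config.put("database.history.kafka.topic", "schema-changes.inventory");
    return config;
  }
}
```

Because the connector tails the database's own log, downstream consumers see every committed change exactly as the database recorded it, which is the property that makes dual writes unnecessary.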
7/10/201949 minutes, 15 seconds
Episode Artwork

Distributed Systems Engineering with Apache Kafka ft. Jason Gustafson

Ever wonder what it’s like to be a distributed systems engineer at Confluent? Core Kafka Engineer Jason Gustafson dives into the challenges of working on distributed systems, particularly when it comes to a unique system like Apache Kafka®. He also discusses ways in which Confluent is working with the community to solve active problems and what it takes to be a distributed systems engineer.As always, Confluent is looking for engineers who are interested in distributed systems, and you don’t have to have 10 years of experience to do it!EPISODE LINKSKIP-392: Allow consumers to fetch from closest replicaKafka Improvement ProposalsHow to contributeHow Confluent's Engineering Team is Building the Infrastructure for Real-Time Event StreamingJoin us in Confluent Community Slack
7/2/201945 minutes, 56 seconds
Episode Artwork

Apache Kafka 2.3 | What's New in This Release + Updates and KIPs

Tim Berglund (Senior Director of Developer Experience, Confluent) explains what’s new in Apache Kafka® 2.3 and highlights some of the most important Kafka Improvement Proposals (KIPs).EPISODE LINKSRead the blogWatch the video version of this podcast
6/25/201913 minutes, 42 seconds
Episode Artwork

Rolling Kafka Upgrades and Confluent Cloud ft. Gwen Shapira

If you operate a Kafka cluster, hopefully you upgrade your brokers occasionally. Each release of Apache Kafka® includes detailed documentation that describes a tested procedure for doing a rolling upgrade of your cluster. Couldn’t be easier, right? Well, what if you have to do it with hundreds or thousands of brokers, such as you’d have to do if you were running Confluent Cloud? Today, Gwen Shapira shares some of the lessons she’s learned doing just that.EPISODE LINKSFully managed Apache Kafka as a service! Try free.
6/25/201942 minutes, 43 seconds
Episode Artwork

Deploying Confluent Platform, from Zero to Hero ft. Mitch Henderson

Mitch Henderson (Technical Account Manager, Confluent) explains how to plan and deploy your first application running on Confluent Platform. He covers critical factors to consider, like the tools and skills you should have on hand, and how to make decisions about deployment solutions. Mitch also walks you through how to go about setting up monitoring and testing, the marks of success, and what to do after your first project launches successfully.
6/18/201932 minutes, 30 seconds
Episode Artwork

Why Kafka Connect? ft. Robin Moffatt

In this episode, Tim talks to Robin Moffatt about what Kafka Connect is and why you should almost certainly use it if you're working with Apache Kafka®️. Whether you're building database offload pipelines to Amazon S3, ingesting events from external datastores to drive your applications or exposing messages from your microservices for audit and analysis, Kafka Connect is for you. Tim and Robin cover the motivating factors for Kafka Connect, why people end up reinventing the wheel when they're not aware of it and Kafka Connect's capabilities, including scalability and resilience. They also talk about the importance of schemas in Kafka pipelines and programs, and how the Confluent Schema Registry can help.EPISODE LINKSKafka Connect 101 courseIntro to Kafka Connect: Core Components and Architecture ft. Robin MoffattKafka Connect Fundamentals: What is Kafka Connect?
6/12/201946 minutes, 42 seconds
Episode Artwork

Schema Registry Made Simple by Confluent Cloud ft. Magesh Nandakumar

Tim Berglund and Magesh Nandakumar (Software Engineer, Confluent) discuss why schemas matter for building systems on Apache Kafka®, and how Confluent Schema Registry helps with the problem. They talk about how Schema Registry works, how you can collaborate around schema change through `avsc` files, and what it means for this to be available in Confluent Cloud today.EPISODE LINKSSchema Registry 101Schema ManagementMigrate Schemas to Confluent CloudSchemas, Contracts, and CompatibilityFully managed Apache Kafka as a service! Try free.
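A minimal sketch of how a producer is pointed at Schema Registry with the Confluent Avro serializer; the bootstrap server, registry URL, topic, and the Payment schema are all assumptions for illustration.

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class AvroProducerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
    // The Avro serializer registers and checks schemas against this registry.
    props.put("schema.registry.url", "http://localhost:8081");

    // The kind of schema a team might otherwise keep in a shared .avsc file.
    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Payment\",\"fields\":"
            + "[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}");

    GenericRecord payment = new GenericData.Record(schema);
    payment.put("id", "p-1");
    payment.put("amount", 42.0);

    try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>("payments", "p-1", payment));
    }
  }
}
```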
6/3/201941 minutes, 36 seconds
Episode Artwork

Why is Stream Processing Hard? ft. Michael Drogalis

Tim Berglund and Michael Drogalis (Product Lead for Kafka Streams and KSQL, Confluent) talk about all things stream processing: why it’s complex, how it's evolved, and what’s on the horizon to make it simpler.
5/29/201945 minutes, 45 seconds
Episode Artwork

Testing Kafka Streams Applications with Viktor Gamov

Tim Berglund is joined by Viktor Gamov (Developer Advocate, Confluent) to discuss various approaches to testing Kafka Streams applications.EPISODE LINKSKafkaEmbeddedTopologyTestDriverMocked Streams (Scala)MockafkaTest containersKafka containers 
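One of the approaches linked above, TopologyTestDriver, exercises a topology in-process with no brokers. A minimal sketch, assuming Kafka Streams 2.4 or later for the createInputTopic/createOutputTopic helpers and using made-up topic names:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class TopologyTestDriverSketch {
  public static void main(String[] args) {
    // A tiny topology under test: uppercase every value.
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
        .mapValues(value -> value.toUpperCase())
        .to("output", Produced.with(Serdes.String(), Serdes.String()));
    Topology topology = builder.build();

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted

    try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
      TestInputTopic<String, String> in =
          driver.createInputTopic("input", new StringSerializer(), new StringSerializer());
      TestOutputTopic<String, String> out =
          driver.createOutputTopic("output", new StringDeserializer(), new StringDeserializer());

      in.pipeInput("k1", "hello");
      System.out.println(out.readValue()); // prints HELLO
    }
  }
}
```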
5/20/201942 minutes, 33 seconds
Episode Artwork

Chris Riccomini on the History of Apache Kafka and Stream Processing

It’s a problem endemic to the tech world that we are so focused on what’s coming next that we often forget to look at where we’ve been. Chris Riccomini, who was there at LinkedIn when Apache Kafka® was born, tells us how Kafka and the stream processing framework Samza came about, and also what he’s doing these days at WePay—building systems that use Kafka as a primary datastore.EPISODE LINKSWhen It Absolutely, Positively, Has to be There: Reliability Guarantees in KafkaSo, You Want to Build a Kafka Connector? Source Edition.Kafka is Your Escape Hatch
5/16/201950 minutes, 59 seconds
Episode Artwork

Ask Confluent #13: Machine Learning with Kai Waehner

Gwen and Kai chat about machine learning architectures, and whether software engineers and data scientists can learn to get along.EPISODE LINKSBlogs on deploying machine learning workloads: Machine Learning with Python, Jupyter, KSQL and TensorFlowHow to Build and Deploy Scalable Machine Learning in Production with Apache KafkaUsing Apache Kafka to Drive Cutting-Edge Machine LearningKIP-392: Allow consumers to fetch from closest replica Watch the video version of this podcast
5/8/201933 minutes, 15 seconds
Episode Artwork

Diving into Exactly Once Semantics with Guozhang Wang

It has been said that in distributed messaging, there are two hard problems: 2) exactly once delivery, 1) guaranteed order of messages and 2) exactly once delivery. Apache Kafka® has offered exactly once processing since version 0.11, which allows properly configured producers and consumers to make the guarantee that each message will be processed exactly one time. In this episode, Kafka Streams engineer Guozhang Wang walks through the implementation of transactional messaging in Kafka in some detail, including the idempotent producer API, the transaction coordinator responsible for managing the transaction log and consumer configurations. It’s a complex topic, but he takes us through it carefully and completely.EPISODE LINKSTransactions in Apache Kafka Enabling Exactly Once in Kafka Streams KIP-98: Exactly Once Delivery and Transactional MessagingKIP-129: Streams Exactly-Once Semantics
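A minimal sketch of the producer side of what Guozhang describes: idempotence plus a transactional ID, writing to two topics atomically. The topic names and bootstrap address are placeholders; consumers that should only see committed records would set isolation.level=read_committed.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class TransactionalProducerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Idempotence lets the broker deduplicate retried batches.
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
    // A stable transactional.id lets the transaction coordinator fence zombie producers.
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-processor-1");

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    producer.initTransactions();
    try {
      producer.beginTransaction();
      producer.send(new ProducerRecord<>("payments", "p-1", "42.00"));
      producer.send(new ProducerRecord<>("payment-audit", "p-1", "received"));
      producer.commitTransaction(); // both records become visible atomically
    } catch (Exception e) {
      // A real application would treat ProducerFencedException separately (close, do not abort).
      producer.abortTransaction();  // read_committed consumers never see the aborted records
    } finally {
      producer.close();
    }
  }
}
```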
4/22/201947 minutes, 53 seconds
Episode Artwork

Ask Confluent #12: In Search of the Lost Offsets

Stanislav Kozlovski joins us to discuss common pitfalls when using Kafka consumers and a new KIP that promises to make consumer restarts much smoother.EPISODE LINKSKIP-345: Static consumer membership KIP-211: Documents the current behavior of offset expirationWatch the video version of this podcast
4/17/201922 minutes, 4 seconds
Episode Artwork

Ben Stopford on Microservices and Event Streaming

Microservices are pretty ubiquitous these days. Really “SOA done right,” they reimagine the services pattern in the context of the world we live in today, nearly two decades since the first big service-oriented systems hit production. But what have we learned in this time? There are plenty of war stories. System designers have explored different architectural patterns—REST, events and databases of all types. In this podcast, Tim Berglund and Ben Stopford explore the event-driven paradigm and how it relates to the microservice architectures we build today. Ben dives deep into coupling, evolution and challenges of our increasingly data-oriented culture. He also talks about the future, where data are events and events are data, and touches on real-time architectures that retain the decoupling properties needed to be pluggable, and to evolve. Powerful stuff.EPISODE LINKSDesigning Event-Driven Systems Building a Microservices Ecosystem with Kafka Streams and KSQL
4/8/201958 minutes, 15 seconds
Episode Artwork

Magnus Edenhill on librdkafka 1.0

After several years of development, librdkafka has finally reached 1.0! It remains API compatible with older versions of the library, so you won’t need to make any changes to your application. There are, however, several important new features like the idempotent producer, sparse broker connections, support for the vaunted KIP-62 and a complete makeover for the C#/.NET client.EPISODE LINKSlibrdkafka v1.0.0 release notes
4/3/201946 minutes, 47 seconds
Episode Artwork

Ask Confluent #11: More Services, More Metrics, More Fun

Do metrics for detecting clients from old versions actually exist? Or is Gwen making features up? This and more useful advice is coming up on today's episode of Ask Confluent.EPISODE LINKSThe Java property that will refresh DNS cache frequently: java.security.Security.setProperty("networkaddress.cache.ttl", "60");Improvements to DNS lookups in Confluent Platform 5.1.2 (Apache Kafka 2.1.1):KAFKA-7755KAFKA-7890More reasons to upgrade to Confluent Platform 5.1.2Monitoring clients with old versions:KIP-188 has lots of important new metrics If you are worried about “down-conversion” as discussed in Ask Confluent #5, you want to monitor: MBean: kafka.server:type=BrokerTopicMetrics,name=FetchMessageConversionsPerSec,topic=([-.\w]+)KIP-188 also added a metric for temp memory usage (memory used for conversion and compression) that can be usefulIn KIP-272, we’ve added version tag to request metrics, so you can see how many requests per sec you get from each versionRecommendations for Kafka Summit NYCWatch the video version of this podcast
3/26/201914 minutes, 28 seconds
Episode Artwork

It’s Time for Streaming to Have a Maturity Model ft. Nick Dearden

Nick Dearden explains the five stages of streaming maturity. They are not denial, anger, bargaining, depression and acceptance—that’s the Kübler-Ross model, and it’s for bad things. This one is for awesome things, and takes you from the first streaming project you ever build all the way to a state where an entire organization is transformed to think in terms of real-time, event-driven systems. If you have ever found yourself trying to get streaming technology adopted, this episode is for you!EPISODE LINKSFive Stages to Streaming Platform Adoption
3/18/201936 minutes, 56 seconds
Episode Artwork

Containerized Apache Kafka On Kubernetes with Viktor Gamov

Kubernetes provides all the building blocks needed to run stateful workloads, but creating a truly enterprise-grade Apache Kafka® platform that can be used in production is not always intuitive. In this episode, Tim Berglund and Viktor Gamov address some of the challenges and pitfalls of managing Kafka on Kubernetes at scale. They also share lessons learned from the development of the Confluent Operator for Kubernetes, and answer questions like: What is Kubernetes? What are stateful workloads? Why are they hard? Will Confluent Operator make it easier?EPISODE LINKSJoin the #kubernetes Slack channelKafka on Kubernetes: Does it really have to be “The Hard Way”?
3/11/201941 minutes, 45 seconds
Episode Artwork

Catch Your Bus with KSQL: A Stream Processing Recipe by Leslie Kurt

We all know that feeling of waiting when your ride is running late. Leslie Kurt shares about how you can use KSQL to calculate the difference between the expected arrival time and real-time updates of a bus as it executes its route. Listen as Leslie walks you through fundamental concepts like KTables, Kafka Streams, persistent queries and Confluent MQTT Proxy, as well as other use cases that involve a similar mechanism of capturing Unix timestamps and performing a stream processing operation on these timestamps.EPISODE LINKSAbout KSQLStream Processing CookbookKSQL Recipe: Calculating Bus Delay TimeFor more, you can check out ksqlDB, the successor to KSQL.
3/4/201919 minutes, 27 seconds
Episode Artwork

KTable Update Suppression (and a Bunch About KTables) ft. John Roesler

When you are dealing with streaming data, it might seem like tables are things that dwell in the far-off land of relational databases, outside of Apache Kafka and your event streaming system. But then the Kafka Streams API gives us the KTable abstraction, which lets us create tabular views of data in Kafka topics. Apache Kafka 2.1 featured an interesting change to the table API—commonly known to the world as KIP-328—that gives you better control over how updates to tables are emitted into destination topics. What might seem like a tiny piece of minutia gives us an opportunity to explore important parts of the Streams API, and unlocks some key new use cases. Join John Roesler for a clear explanation of the whole thing.
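A minimal sketch of the suppression operator that KIP-328 introduced, emitting one final count per window instead of every intermediate update; the topic names, window size, and grace period are assumptions.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

import java.time.Duration;

public class FinalWindowResultsSketch {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
        // Hourly windows with a short grace period for late-arriving events.
        .windowedBy(TimeWindows.of(Duration.ofHours(1)).grace(Duration.ofMinutes(5)))
        .count()
        // KIP-328: hold back intermediate updates and emit only the final count per window.
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
        .toStream()
        // Flatten the windowed key into a plain string for the destination topic.
        .map((windowedKey, count) ->
            KeyValue.pair(windowedKey.key() + "@" + windowedKey.window().start(), count))
        .to("hourly-page-view-counts", Produced.with(Serdes.String(), Serdes.Long()));
  }
}
```

Without the suppress step, every event that lands in a window produces another update in the destination topic; with it, downstream consumers see a single, final result once the window closes.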
2/27/201945 minutes, 56 seconds
Episode Artwork

Splitting and Routing Events with KSQL ft. Pascal Vantrepote

Tim Berglund chats with System Engineer Pascal Vantrepote about a KSQL recipe he created based on a real-life customer use case in the financial services industry. They also discuss the advantages of KSQL, such as its expressiveness and ease of deployment in places where you’re not already writing a Java application.EPISODE LINKSAbout KSQL Stream Processing CookbookKSQL Recipe: Data Routing Joined with a KTableFor more, you can check out ksqlDB, the successor to KSQL.
2/25/201920 minutes, 42 seconds
Episode Artwork

Ask Confluent #10: Cooperative Rebalances for Kafka Connect ft. Konstantine Karantasis

Want to know how Kafka Connect distributes tasks to workers? Always thought Connect rebalances could be improved? In this episode of Ask Confluent, Gwen Shapira speaks with Konstantine Karantasis, software engineer at Confluent, about the latest improvements to Kafka Connect and how to run the Confluent CLI on Windows.EPISODE LINKSImproved rebalancing for Kafka ConnectImproved rebalancing for Kafka StreamsThe "what would Kafka do?" scenario from Mark PapadakisThe future of retail at NordstromWatch the video version of this podcast 
2/20/201921 minutes, 29 seconds
Episode Artwork

The Future of Serverless and Streaming with Neil Avery

Neil Avery explores the intersection between FaaS and event streaming applications before taking a quick detour back in time to understand how we've gotten to this point in event-driven applications. He'll explain the pros and cons of FaaS, and cover how in its current state cold starts and latency concerns need to be part of the bigger picture when building streaming applications. Finally, Neil shares five rules that will help you understand how FaaS fits with the event streaming application.EPISODE LINKSJourney to Event Driven – Part 1: Why Event-First Thinking Changes EverythingJourney to Event Driven – Part 2: Programming Models for the Event-Driven ArchitectureJourney to Event Driven – Part 3: The Affinity Between Events, Streams and ServerlessJourney to Event Driven – Part 4: Four Pillars of Event Streaming Microservices
2/14/201941 minutes
Episode Artwork

Using Terraform and Confluent Cloud with Ricardo Ferreira

Tim Berglund hosts Developer Advocate Ricardo Ferreira to discuss the concept of infrastructure as code, as well as the differences between Terraform, Ansible, Puppet and Chef. They also chat about why Terraform is such a big deal, some of the challenges involved with learning it and how Confluent leverages Terraform to achieve multi-cloud support for Confluent Cloud and tools for Confluent Platform.EPISODE LINKSTerraformTools for Confluent Cloud ClustersFully managed Apache Kafka as a service! Try free.
1/23/201928 minutes, 57 seconds
Episode Artwork

Ask Confluent #9: With and Without ZooKeeper

Gwen asks: What happens when garbage collection causes Kafka to pause? And how do we run a Schema Registry cluster? We’ll find out in this episode of Ask Confluent.In "Ask Confluent," Gwen Shapira (Software Engineer, Confluent) and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.EPISODE LINKSZooKeeper connection timeout configuration: zookeeper.connection.timeout.ms, as we said, this defaults to 6,000Schema Registry failover instructionsWatch the video version of this podcast
1/8/201915 minutes, 11 seconds
Episode Artwork

Ask Confluent #8: Guozhang Wang on Kafka Streams Standby Tasks

Gwen is joined in studio by special guest Guozhang Wang, Kafka Streams pioneer and engineering lead at Confluent. He’ll talk to us about standby tasks and how one deserializes message headers. In "Ask Confluent," Gwen Shapira (Data Architect, Confluent) and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.EPISODE LINKSDocumentation of standby tasks, including configsEvents with different schema in same topicHow to populate a database from Kafka and solve the parent-child relation problemWatch the video version of this podcast
12/18/201822 minutes, 9 seconds
Episode Artwork

Ask Confluent #7: Kafka Consumers and Streams Failover Explained ft. Matthias Sax

Gwen is joined in studio by special guest Matthias J. Sax, a software engineer at Confluent. He’ll talk to us about Kafka consumers and Kafka Streams failover. In "Ask Confluent," Gwen Shapira (Data Architect, Confluent) and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.EPISODE LINKSWatch the video version of this podcast
12/3/201823 minutes, 51 seconds
Episode Artwork

Ask Confluent #6: Kafka, Partitions, and Exactly Once ft. Jason Gustafson

Gwen is joined in studio by special guest Jason Gustafson, a Kafka PMC member and engineer at Confluent. He’ll talk to us about the big questions on Kafka architecture— number of partitions and exactly once. In "Ask Confluent," Gwen Shapira (Data Architect, Confluent) and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.EPISODE LINKSHardening Kafka ReplicationKafka open issuesWatch the video version of this podcast
11/5/201822 minutes, 27 seconds
Episode Artwork

Kafka Summit SF 2018 Panel | Microsoft, Slack, Confluent, University of Cambridge

Neha Narkhede (Co-founder and CTO, Confluent) leads a panel discussion at Kafka Summit SF 2018 with Kevin Scott (CTO, Microsoft), Julia Grace (Head of Infrastructure Engineering, Slack), Martin Kleppmann (Researcher, University of Cambridge), and Jay Kreps (Co-founder and CEO, Confluent).
10/18/201834 minutes, 52 seconds
Episode Artwork

Kafka Streams in Action with Bill Bejeck

Tim Berglund interviews Bill Bejeck about the Kafka Streams API and his new book, Kafka Streams in Action. 
9/27/201849 minutes, 8 seconds
Episode Artwork

Joins in KSQL 5.0 with Hojjat Jafarpour

KSQL 5.0 now supports stream-stream, stream-table and table-table joins. Tim Berglund interviews Hojjat Jafarpour about all three join types, how they work, what their limitations are and the new kinds of operations they unlock.For more, you can check out ksqlDB, the successor to KSQL.
9/20/201829 minutes, 5 seconds
Episode Artwork

Ask Confluent #5: Kafka, KSQL and Viktor Gamov

Gwen is joined in studio by co-host Tim Berglund and special guest Viktor Gamov, a new member of Confluent’s Developer Experience Team specializing in Kafka, KSQL and Kubernetes. In this episode, we’ll find out: does Viktor know what he’s talking about?
EPISODE LINKS
Watch the video version of this podcast
9/10/201831 minutes, 14 seconds
Episode Artwork

KSQL Use Cases with Nick Dearden

Nick Dearden, stream processing expert at Confluent, discusses how people actually use KSQL. Try KSQL! For more, you can check out ksqlDB, the successor to KSQL.
9/6/201832 minutes, 5 seconds
Episode Artwork

Nested Data in KSQL with Hojjat Jafarpour

Interesting data isn't a polite little list of scalar types. Sometimes you have more complex structures and things like nesting. We'll see how KSQL supports that today as Tim Berglund discusses nested data in KSQL with Hojjat Jafarpour, a software engineer on the KSQL team at Confluent.
EPISODE LINKS
KSQL demos and info
KSQL GitHub
KSQL Slack (#ksql channel)
For more, you can check out ksqlDB, the successor to KSQL.
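As a generic, hedged illustration of what "nested data" means on the wire (this is plain Jackson in Java, not KSQL syntax, and the payload is hypothetical), a consumer that had to handle nesting by hand might do something like the traversal below; KSQL lets you express the same access declaratively in a query instead.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NestedValueSketch {
    public static void main(String[] args) throws Exception {
        // A Kafka message value with nested structure rather than flat scalars (hypothetical payload).
        String value = "{\"orderId\": 42, \"customer\": {\"name\": \"Ada\", \"address\": {\"city\": \"Paris\"}}}";

        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(value);

        // Drill into the nested fields by hand on the consumer side.
        String city = root.path("customer").path("address").path("city").asText();
        System.out.println(city); // prints: Paris
    }
}
```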
8/29/201813 minutes, 20 seconds
Episode Artwork

UDFs and UDAFs in KSQL 5.0 with Hojjat Jafarpour

KSQL has a solid library of built-in functions, but no library is ever good enough. What if you want to write your own? We’ll learn how today with Hojjat Jafarpour, a software engineer on the KSQL team at Confluent. For more, you can check out ksqlDB, the successor to KSQL.
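Custom KSQL functions are written as plain Java classes. The sketch below shows the annotation-based UDF style introduced around KSQL 5.0; the class, function name, and annotation attributes are illustrative and should be checked against the KSQL documentation for your version.

```java
import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;

// A scalar UDF: package it as a jar, place it in the KSQL extension directory,
// and call it from a query like any built-in function, e.g. SELECT MULTIPLY(price, qty) ...
@UdfDescription(name = "multiply", description = "Multiplies two numbers")
public class MultiplyUdf {

    @Udf(description = "Multiply two longs")
    public long multiply(final long a, final long b) {
        return a * b;
    }
}
```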
8/24/201818 minutes, 36 seconds
Episode Artwork

Ask Confluent #4: The GitHub Edition

Want to see a feature implemented in KSQL or another Kafka-related project? Gwen answers your questions from YouTube and walks through how to use GitHub issues to request features. This is episode #4 of "Ask Confluent," a segment in which Gwen Shapira and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.
EPISODE LINKS
Watch the video version of this podcast
8/16/201813 minutes, 59 seconds
Episode Artwork

Deep Dive into KSQL with Hojjat Jafarpour

Ever wonder what actually goes on when you run a KSQL query? Today, we take a deep dive into KSQL with Hojjat Jafarpour, a software engineer on the KSQL team at Confluent. For more, you can check out ksqlDB, the successor to KSQL.
8/13/201833 minutes, 18 seconds
Episode Artwork

Ask Confluent #3: Kafka Upgrades, Cloud APIs and Data Durability

Tim Berglund and Gwen Shapira have a discussion with Koelli Mungee (Customer Operations Lead, Confluent) and cover the latest Apache Kafka upgrades, cloud APIs, and data durability. This is episode #3 of "Ask Confluent," a segment in which Gwen Shapira and guests respond to a handful of questions and comments from Twitter, YouTube, and elsewhere.
EPISODE LINKS
Watch the video version of this podcast
Fully managed Apache Kafka as a service! Try free.
7/20/201822 minutes, 34 seconds
Episode Artwork

Ask Confluent #2: Consumers, Culture and Support

Gwen Shapira answers your questions and interviews Sam Hecht (Head of Support, Confluent). This is the second episode of "Ask Confluent," a segment in which Gwen Shapira and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.
EPISODE LINKS
Watch the video version of this podcast
7/2/201824 minutes, 22 seconds
Episode Artwork

Ask Confluent #1: Kubernetes, Confluent Operator, Kafka and KSQL

Tim Berglund and Gwen Shapira discuss Kubernetes, Confluent Operator, Kafka, KSQL, and more. This is the first episode of "Ask Confluent," a segment in which Gwen Shapira and guests respond to a handful of questions and comments from Twitter, YouTube and elsewhere.
EPISODE LINKS
Watch the video version of this podcast
6/20/201822 minutes, 54 seconds