
How Confluent is maintaining its edge in event streaming

Confluent co-founder Jun Rao talks up the company’s business and how it competes with hyperscalers and other suppliers of managed Kafka services

Like the human central nervous system that transmits electrical and chemical messages to and from the brain to regulate our body functions, an event streaming platform facilitates the capture and processing of event messages from different IT systems in a variety of use cases, from monitoring vehicle fleets to automating transactions.
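To make the idea concrete, here is a minimal sketch of publishing an event to a Kafka topic using the confluent-kafka Python client; the broker address, topic name and telemetry payload are illustrative assumptions, not details from the article:

```python
from confluent_kafka import Producer  # pip install confluent-kafka

# Connect to a Kafka broker (address is a placeholder)
producer = Producer({"bootstrap.servers": "localhost:9092"})

# Publish one event: the key identifies the vehicle, the value carries the reading
producer.produce("vehicle-telemetry", key="truck-42", value='{"speed_kmh": 87}')

# Block until the broker has acknowledged delivery
producer.flush()
```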

One of the most popular technology platforms that underpins the event-driven architecture is Apache Kafka, the open source software that was originally developed by three engineers at LinkedIn to manage data feeds at the professional networking platform.

In 2014, the trio – Jay Kreps, Jun Rao and Neha Narkhede – started Confluent to commercialise the technology, offering enterprise-grade capabilities for Kafka through its Confluent Cloud. The company went public in June 2021 and raised over $800m at the time.

In an interview with Computer Weekly, Rao, one of Confluent’s co-founders, gave an update on the company’s business, how it competes with hyperscalers and other suppliers of managed Kafka services, as well as new capabilities that enterprises can expect in future.

It’s been two years since Confluent went public. How has the business been doing so far?

Rao: Confluent’s business has been doing pretty well. We’ve created a new infrastructure category, which we call data in motion. This is in contrast with data at rest, which is what a lot of databases and data warehouses focus on. Those systems record data about events in your business, but it’s less about leveraging the data in real time, because in most cases the data is used in batch processes that run weekly or daily.

With data in motion, we not only record everything that’s happening in your business, but also give the business the opportunity to act in real time, in a matter of seconds or even milliseconds, as events occur.
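As an illustration of acting on data in motion, a Kafka consumer can poll a topic in a loop and react to each event moments after it is produced. The sketch below uses the confluent-kafka Python client; the topic, consumer group and handler logic are hypothetical:

```python
from confluent_kafka import Consumer  # pip install confluent-kafka

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "group.id": "payments-monitor",         # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])  # hypothetical topic


def handle(key, value):
    # Stand-in for real business logic, such as flagging a suspicious payment
    print(key, value)


while True:
    msg = consumer.poll(timeout=1.0)  # wait up to one second for the next event
    if msg is None or msg.error():
        continue
    # React seconds or milliseconds after the event occurred
    handle(msg.key(), msg.value())
```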

We have now passed the early adoption phase. Kafka started at LinkedIn and was later adopted by many other tech companies in Silicon Valley, whether it’s Netflix, Uber or Pinterest. We’ve also seen adoption in other industries, such as banking and financial services, retail, gaming and government. Many organisations are now comfortable with this new category of infrastructure, and they are starting to build more applications on this platform.

And so, we’re seeing pretty fast growth. In our latest quarter, despite the global economic slowdown, our revenues still grew 38% year-on-year. The opportunity ahead of us is still strong in the infrastructure category, because there are a lot of companies that are just at the beginning of their data in motion journey.


With Kafka becoming more mainstream, we’ve seen more players in the market, such as hyperscalers offering managed Kafka services. How does Confluent stay ahead of the competition, including open source projects like Apache Pulsar? Do you see any potential for working closely with them as well?

Rao: I would say two things. First, having competition is not necessarily a bad thing because in general, if you have a large opportunity in a large market, then you would often have competition. In some sense, having competition is a good validation of the technology and it’s not like we’re the only ones saying that this is a new trend. Red Hat, Amazon and Google all have offerings in this category.

Now, how do we compete with those companies? First, I think because we’re relatively early in this category, we have the first-mover advantage because we understand the problems that businesses have, and we’ve been able to build up our offerings to help enterprises adopt the technology better.

The value of our platform lies in three areas, starting with cloud. Traditionally, businesses take open source software and manage it on their own, but with infrastructure software, especially a distributed system, that’s not easy. It’s not the core business for a lot of enterprises, so we think managed services in the cloud are a better way of running software.

Today, our cloud offering is the fastest growing part of our business and we’ve recently announced Kora, the cloud-native engine that powers Confluent Cloud. Under the covers, it has multiple innovations that take advantage of cloud resources more judiciously so that we can build a service that’s better in terms of cost, performance, scalability, elasticity, and so on. That’s one of our advantages because we started with cloud early.

The second thing is our ecosystem, which needs to work together with Kafka to deliver business value. To that end, we have an integration layer through our connector ecosystem, so our customers can tap into data in databases such as MongoDB, Oracle and MySQL. We also offer stream processing capabilities through Apache Flink on top of the database storage layer, so you can transform and enrich the data, or do window-based aggregation, before you take advantage of the data. With both integration and processing layers, and with stronger governance, we hope we are offering a more complete solution.
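To give a flavour of the window-based aggregation Rao describes, here is a sketch in PyFlink that sums order values over one-minute tumbling windows read from a Kafka topic; the table schema, topic name and connector options are assumptions for illustration, and the Flink Kafka connector jar must be available on the classpath:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Create a streaming TableEnvironment
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# A table backed by a hypothetical Kafka topic of JSON order events
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'earliest-offset'
    )
""")

# Sum order value over one-minute tumbling windows
result = t_env.execute_sql("""
    SELECT window_start, window_end, SUM(amount) AS total_amount
    FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTES))
    GROUP BY window_start, window_end
""")
result.print()  # stream the aggregates to stdout
```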

Finally, we support portability, which is increasingly important for organisations running hybrid and multicloud environments. More and more businesses want to run the same software stack everywhere. We’re not tied to a particular cloud vendor, and we offer hosted services on all three major public clouds. You can also use our platform in your private cloud if that’s your preferred way. That portability is important because it allows people to choose their environment freely as their business needs change.

Some of your customers see your close connections with the Kafka community as one of the key benefits of working with Confluent. Could you talk about your work with the Kafka community and how it influences your thinking around product development?

Rao: The Apache Foundation has a unique way of attracting developers, who increasingly have a bigger say in choosing which enterprise software to use. They want a tool that can solve their problems, so how do you reach those developers so they can try your tool, ask questions and share their experience with others? That’s where the open source community comes in, and we will continue to build our presence around it.


We want to see the open source community and Apache Kafka succeed, and we’re contributing to Kafka pretty heavily. But while open source is good for building technology, many businesses want something that solves a problem. So, as a vendor built on open source technology, we feel we can offer additional value and build a more complete solution.

For example, with a hosted service, you don’t have to handle the operational aspects, making it easier to adopt the technology so you can focus on your core business. We view the open source community as important, but we can also create a strong and thriving business on top of open source technology.

Do you see interesting use cases or particular adoption challenges with Kafka that companies here in the Asia-Pacific region tend to grapple with compared to their counterparts elsewhere?

Rao: I visited Singapore and Sydney recently. Since the last time I was in Sydney in 2019, the general trend there has been encouraging, with deeper adoption of the technology and more applications being built on top of it.

I’m also seeing more cloud adoption compared with four years ago. For example, Trust Bank, a digital bank in Singapore, is using Kafka as a core part of its event-driven architecture to innovate in ways traditional banks couldn’t. All its data is integrated in real time, allowing it to share data across different applications to improve user experience, facilitate customer onboarding and improve fraud detection. So, overall, I see strong adoption of the technology in the region, particularly in banking, travel and government, as well as among small and medium-sized businesses.

What are your thoughts on driving greater adoption, particularly among organisations that are behind the curve in embracing event-driven architecture?

Rao: Many traditional businesses are modernising their infrastructure. Historically, their applications were built on legacy systems, which can be costly and may not have the tooling or programming interfaces that modern developers want. But those systems also hold a lot of valuable data. We see a lot of businesses leveraging Kafka to get data out of those systems using our connectors. Once the data is in Kafka, they can work in a more modern environment, leverage their data, share it more freely and start building new, modern applications.
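As a sketch of what extracting data from a legacy system via a connector can look like, the snippet below registers a hypothetical JDBC source connector through the Kafka Connect REST API; the database, credentials, table and topic prefix are placeholders for illustration:

```python
import requests  # pip install requests

# Configuration keys follow Confluent's JDBC source connector; every value
# below is a placeholder, not a detail from the article
connector = {
    "name": "legacy-orders-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://legacy-db:3306/orders",
        "connection.user": "connect",
        "connection.password": "secret",
        "mode": "incrementing",            # capture new rows via a growing id column
        "incrementing.column.name": "id",
        "table.whitelist": "orders",
        "topic.prefix": "legacy-",         # rows land on the legacy-orders topic
    },
}

# Register the connector with a Kafka Connect worker (default REST port 8083)
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
```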

How do you see the Confluent platform evolving moving forward? What are your thoughts around integrating generative AI capabilities to make the jobs of developers easier?

Rao: There are a few big areas. One is the cloud-native architecture, which takes time to build, but once you do, it can offer a big differentiation in the kinds of hosted services you can provide, in terms of reliability, scalability and performance, among other things. I mentioned the Kora engine earlier. We designed it for the cloud, and we’ll continue to evolve it to be more cloud-friendly and cloud-native over time.

Another big area is integration with the computation and processing components. We want to make building event-driven applications easier, which requires both the storage and the event processing capabilities. I mentioned our support for Apache Flink – we will continue to evolve that service and hope to make it available by the end of this year or early next year.

As for generative AI, I’d say it has a couple of implications for us. First, generative AI offers another opportunity for people to leverage data. You need to feed it with the data you have in your business, because the more data you have, the more effective the model will be and the better the decisions you can make with it.

A big part of our value proposition is being the platform where all the data across your company can be integrated at scale and in real time. We have plans to integrate with popular AI systems, so customers can feed data into those systems to do AI training and inferencing.

With generative AI, there are now better models that people can use for inferencing and taking action. People still want to leverage data as close to real time as possible, because that provides a differentiated experience for users and businesses. So, the sooner you can do this kind of inference with generative AI models, the faster you can get value from your investments. The event-driven platform we’re building is critical for that, because we allow people to integrate their data with generative AI models for inferencing in real time.
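A minimal sketch of that pattern, assuming a placeholder score_with_model function standing in for a call to whichever generative AI model you host, might consume raw events and republish model outputs in near real time:

```python
from confluent_kafka import Consumer, Producer  # pip install confluent-kafka


def score_with_model(event: bytes) -> bytes:
    # Placeholder for a call to a hosted generative AI model;
    # this is not a Confluent API
    return b'{"assessment": "ok"}'


consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "group.id": "genai-scoring",            # hypothetical consumer group
    "auto.offset.reset": "latest",
})
consumer.subscribe(["events"])              # hypothetical input topic
producer = Producer({"bootstrap.servers": "localhost:9092"})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    result = score_with_model(msg.value())  # inference as the event arrives
    producer.produce("events-scored", key=msg.key(), value=result)
    producer.poll(0)                        # serve delivery callbacks
```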
