
        Matt Aslett's Analyst Perspectives


        Confluent Enables Enterprises to Operate in Real Time

        I have written on multiple occasions about the increasing proportion of enterprises embracing the processing of streaming data and events alongside traditional batch-based data processing. I assert that, by 2026, more than three-quarters of enterprises’ standard information architectures will include streaming data and event processing, allowing enterprises to be more responsive and provide better customer experiences.

        Although many enterprises have adopted products to store and process data in motion, often those systems are deployed in parallel to those used to store and process data at rest rather than enabling a holistic view of all data—in motion and at rest—across an enterprise. Several software providers, including Confluent, are working to address this. Earlier this year, the company announced the launch of a new capability called Tableflow, designed to enable enterprises to quickly convert streaming data topics and schema into tables for storage and processing in a data warehouse, data lake or analytics engine.

        Confluent was founded in 2014 by the creators of the open-source Apache Kafka distributed event streaming platform. Based on a publish-and-subscribe messaging model for communicating events and event streams, Apache Kafka was originally developed at LinkedIn to store and process data related to member activity as well as logs and metrics.
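The publish-and-subscribe log at the heart of this model can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Kafka's implementation: the class and method names are hypothetical, and a real Kafka topic is partitioned, replicated and durable.

```python
class Topic:
    """An append-only event log, the core abstraction behind Kafka's
    publish-and-subscribe model (a simplified, single-partition sketch)."""

    def __init__(self, name):
        self.name = name
        self.log = []

    def publish(self, event):
        self.log.append(event)
        return len(self.log) - 1  # offset of the newly appended record


class Consumer:
    """Each consumer tracks its own offset into the log, so subscribers
    read the same stream independently and at their own pace."""

    def __init__(self, topic):
        self.topic = topic
        self.offset = 0

    def poll(self):
        records = self.topic.log[self.offset:]
        self.offset = len(self.topic.log)
        return records
```

Two consumers polling the same topic each receive every event, in order, without removing it from the log, which is what distinguishes this model from a conventional message queue.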

        Apache Kafka has been widely adopted by thousands of enterprises to support real-time data processing by capturing event data from sensors, applications and databases and processing and analyzing it in real time as it flows through the organization. It also forms the basis of Confluent’s product portfolio, which includes the Confluent Platform distribution of Apache Kafka for self-managed deployment on-premises and in the cloud, as well as the Confluent Cloud managed service.

        The company reported total revenue of $777 million in fiscal year 2023, an increase of 33% on $586 million the previous year, and forecasts total revenue of approximately $950 million in fiscal 2024. Revenue in the first quarter of 2024 was $217 million, up 25%. In addition to benefiting from increased adoption of streaming data and event processing, as well as expansion and greater maturity among existing customers, the company has expanded its addressable market by developing capabilities for streaming data governance. It also added stream processing and analytics capabilities through the early 2023 acquisition of Immerok, one of the primary companies behind the Apache Flink stream processing engine.

        As I previously described, Confluent Cloud is more than just a hosted version of Apache Kafka. The company’s Kora engine was designed to provide a cloud-native experience for Kafka, including support for tiered storage, elastic scaling, high availability and improved performance. Additionally, the company has invested in security and governance capabilities, with its Stream Governance suite providing capabilities for schema management and data quality as well as self-service data discovery and classification and stream lineage. The acquisition of Immerok further extended the differentiation of Confluent Cloud with the addition of the Confluent Cloud for Apache Flink serverless stream processing service, which provides an engine for performing stateful computations on unbounded and bounded streams of events.

        Confluent Cloud automatically interprets Apache Kafka topics as Apache Flink tables, enabling users to create applications for SQL-based streaming and batch analytics of event data, including filtering, joining and enriching data streams without specialist Flink expertise. Confluent Cloud for Apache Flink provides the SQL Workspaces user interface for workers to write SQL statements executed against a serverless managed Apache Flink compute pool.
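The kind of filtering, joining and enrichment described above can be pictured with a small Python sketch. This is a conceptual stand-in only: Confluent Cloud for Apache Flink expresses these operations as SQL against a managed compute pool, and the function names below are hypothetical.

```python
def filter_stream(events, predicate):
    """Keep only the events that match a condition
    (analogous to a WHERE clause in streaming SQL)."""
    for event in events:
        if predicate(event):
            yield event


def enrich_stream(events, reference, key):
    """Merge each event with reference data keyed on one field
    (analogous to a stream-table join in streaming SQL)."""
    for event in events:
        yield {**event, **reference.get(event[key], {})}


# Example: keep slow page loads and enrich them with account tier.
clicks = [{"user": "a", "ms": 120}, {"user": "b", "ms": 450}]
profiles = {"a": {"tier": "free"}, "b": {"tier": "pro"}}
slow = filter_stream(clicks, lambda e: e["ms"] > 200)
enriched = list(enrich_stream(slow, profiles, "user"))
# enriched == [{"user": "b", "ms": 450, "tier": "pro"}]
```

Because both functions are generators, events flow through one at a time, mirroring how a streaming engine processes unbounded input rather than waiting for a complete batch.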

        The general availability of Confluent Cloud for Apache Flink was announced at the Kafka Summit London in March of this year, where Confluent also unveiled Tableflow. A new feature on Confluent Cloud, Tableflow automatically materializes Apache Kafka topics and schemas as Parquet files to be persisted in a data warehouse, data lake or cloud storage using the Apache Iceberg open table format. Tableflow ensures Iceberg tables are continuously updated with the latest streaming data and enables batch processing of historical event data using Iceberg-compatible SQL analytics engines. The streaming data stored as Iceberg tables can also be consumed as Kafka topics.
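Tableflow's topic-to-table materialization can be illustrated with a minimal sketch: snapshotting an append-only event log into the column-oriented shape that batch SQL engines expect. This is an assumption-laden stand-in, not Confluent's implementation; real Tableflow writes Parquet files governed by Apache Iceberg table metadata and keeps them continuously updated.

```python
def materialize(log):
    """Snapshot an append-only event log into a column-oriented table,
    a simplified stand-in for writing Parquet files under Iceberg.
    Assumes every record shares the topic's schema."""
    columns = {}
    for record in log:
        for field, value in record.items():
            columns.setdefault(field, []).append(value)
    return columns


# Example: a sensor topic materialized for batch analytics.
log = [{"id": 1, "temp": 20.5}, {"id": 2, "temp": 21.0}]
table = materialize(log)
# table == {"id": [1, 2], "temp": [20.5, 21.0]}
```

The log remains available for streaming consumers while the materialized snapshot serves batch queries, which is the dual access pattern Tableflow is designed to provide.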

        More recently, Confluent announced the addition of AI Model Inference to Confluent Cloud for Apache Flink, enabling enterprises to incorporate machine learning into streaming data pipelines. Now available for early access, AI Model Inference is designed to allow users to create SQL statements in Confluent Cloud for Apache Flink that make calls to external artificial intelligence services, including Amazon SageMaker, Google Cloud Vertex AI, Microsoft Azure and OpenAI, to coordinate data processing and AI workflows and ensure that AI models have access to streaming data as it is updated in real time.
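Conceptually, model inference in a streaming pipeline wraps an external model call around each event as it arrives. A minimal sketch, assuming a caller-supplied `model` function that stands in for a remote endpoint such as an Amazon SageMaker invocation (the function name and output field are hypothetical):

```python
def infer_stream(events, model):
    """Attach a model prediction to each event as it flows through the
    pipeline; `model` stands in for a call to an external AI service."""
    for event in events:
        yield {**event, "prediction": model(event)}


# Example: a toy sentiment model applied to a stream of messages.
sentiment = lambda e: "positive" if "great" in e["text"] else "neutral"
scored = list(infer_stream([{"text": "great product"}], sentiment))
# scored == [{"text": "great product", "prediction": "positive"}]
```

The value of doing this in the streaming layer, as AI Model Inference does, is that predictions are made on data as it is updated rather than on a stale batch extract.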

        As I previously stated, the execution of business events has always occurred in real time. Batch processing is an artificial construct driven by the limitations of traditional data processing capabilities that require enterprises to process data minutes, hours or even days after an event. The reliance on batch data processing is so entrenched that data streaming is often seen as a niche activity, separate from the primary focus on processing data at rest. Less than one-quarter (22%) of enterprises are currently analyzing data in real time. Those that do not run the risk of failing to operate at the pace of the real world.

        Capabilities such as Confluent Cloud for Apache Flink and Tableflow are changing assumptions about event and stream data processing. I would encourage enterprises evaluating data architecture to consider streaming data platforms and Confluent Cloud alongside more traditional data platforms to provide a holistic view of all data—in motion and at rest.

        Regards,

        Matt Aslett

        Author:

        Matt Aslett
        Director of Research, Analytics and Data

        Matt Aslett leads the software research and advisory for Analytics and Data at Ventana Research, now part of ISG, covering software that improves the utilization and value of information. His focus areas of expertise and market coverage include analytics, data intelligence, data operations, data platforms, and streaming and events.
