The server is a key component of enterprise computing, providing the compute resources required to run software applications. Historically, the server was so fundamental that it – along with the processor, or processor core – also served as a definitional unit by which software was measured, priced and sold. That changed with the advent of cloud-based service delivery and consumption models.
Over a decade ago, I coined the term NewSQL to describe the new breed of horizontally scalable, relational database products. The term was adopted by a variety of vendors that sought to combine the transactional consistency of the relational database model with elastic, cloud-native scalability. Many of the early NewSQL vendors struggled to gain traction, however, and were either acquired or ceased operations before they could make an impact in the crowded operational data platforms market. Nonetheless, the potential benefits of data platforms that span both on-premises and cloud resources remain. As I recently noted, many of the new operational database vendors have now adopted the term “distributed SQL” to describe their offerings. In addition to new terminology, a key trend that separates distributed SQL vendors from the NewSQL providers that preceded them is a greater focus on developers, laying the foundation for the next generation of applications that will depend on horizontally scalable, relational-database functionality. Yugabyte is a case in point.
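The horizontal scalability that distinguishes these databases rests on partitioning the rows of a relational table across multiple nodes, typically by hashing the primary key. The sketch below illustrates that idea in plain Python; the node names and hash scheme are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of hash-based sharding, the mechanism behind the
# horizontal scalability of distributed SQL databases: each row is
# assigned to a node by hashing its primary key, so a relational table
# can be spread across machines while every row has one consistent home.
# Node names and the modulo placement scheme are illustrative only.
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def shard_for(primary_key: str) -> str:
    # Hash the key deterministically and map it onto an available node.
    digest = hashlib.sha256(primary_key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# The same key always lands on the same node, so reads and writes for a
# given row are routed consistently, while many keys spread across nodes.
placement = {key: shard_for(key) for key in ["user:1", "user:2", "user:3"]}
```

Real distributed SQL products layer transactional consistency and replication on top of placement logic like this, which is where the engineering difficulty – and the differentiation from both relational incumbents and NoSQL systems – lies.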
I recently wrote about the importance of data pipelines and the role they play in transporting data between the stages of data processing and analytics. Healthy data pipelines are necessary to ensure data is integrated and processed in the sequence required to generate business intelligence. The concept of the data pipeline is nothing new, of course, but it is becoming increasingly important as organizations adapt data management processes to be more data-driven.
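The staging and sequencing described above can be sketched in a few lines of Python. The stage names and sample records below are illustrative assumptions rather than any specific product's interface; the point is that analytics only sees data that has already passed through ingestion and transformation, in order.

```python
# Minimal sketch of a data pipeline: each stage hands its output to the
# next in a fixed sequence, so the analytics layer always receives data
# that has been ingested and transformed first. Stage names and the
# sample records are illustrative, not drawn from any specific product.

def extract():
    # Ingest raw records from a source system (hard-coded for illustration).
    return [{"region": "emea", "revenue": "1200"},
            {"region": "amer", "revenue": "3400"}]

def transform(records):
    # Clean and normalize: cast revenue to a number, upper-case region codes.
    return [{"region": r["region"].upper(), "revenue": int(r["revenue"])}
            for r in records]

def load(records):
    # Deliver to the analytics layer; here, aggregate revenue per region.
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0) + r["revenue"]
    return totals

def run_pipeline():
    # A healthy pipeline runs the stages in this order, every time.
    return load(transform(extract()))

print(run_pipeline())  # {'EMEA': 1200, 'AMER': 3400}
```

In production, each of these stages would be a separate, monitored service; a pipeline is "unhealthy" when a stage fails, falls behind or delivers data out of sequence.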
I recently described the growing level of interest in data mesh, which provides an organizational and cultural approach to data ownership, access and governance that facilitates distributed data processing. As I stated in my Analyst Perspective, data mesh is not a product that can be acquired, or even a technical architecture that can be built. Adopting the data mesh approach depends on people and process change: organizations must overcome a traditional reliance on centralized ownership of data and infrastructure and adapt to data mesh's principles of domain-oriented ownership, data as a product, self-serve data infrastructure and federated governance. Many organizations will need to make technological changes to facilitate adoption of data mesh, however. Starburst Data is best known for accelerating analysis of data in data lakes, but it is also one of several vendors aligning their products with data mesh.
Data mesh is the latest trend to grip the data and analytics sector. The term has been rapidly adopted by numerous vendors, as well as a growing number of organizations, as a means of embracing distributed data processing. Understanding and adopting data mesh remains a challenge, however. Data mesh is not a product that can be acquired, or even a technical architecture that can be built. It is an organizational and cultural approach to data ownership, access and governance, and adopting it requires cultural and organizational change. Data mesh promises multiple benefits to organizations that embrace this change, but doing so may be far from easy.
The term NoSQL has been a misnomer ever since it appeared in 2009 to describe a group of emerging databases. It was true that a lack of support for Structured Query Language (SQL) was common to the various databases referred to as NoSQL. However, that was only one of several common characteristics, including flexible schemas, distributed data processing, open source licensing, and the use of non-relational data models (key-value, document, graph) rather than relational tables. As the various NoSQL databases have matured and evolved, many of them have added support for SQL terms and concepts, as well as the ability to process SQL-format queries. Couchbase has been at the forefront of this effort, recognizing that to drive greater adoption of NoSQL databases in general (and its distributed document database in particular) it was wise to increase compatibility with the concepts, tools and skills that have dominated the database market for the past 50 years.
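The appeal of SQL-format queries on a document database is that familiar SELECT/WHERE semantics apply to schemaless JSON documents rather than relational rows. The toy Python stand-in below mimics the shape of such a query; the sample documents and the helper function are illustrative assumptions, not Couchbase's actual query engine.

```python
# Toy illustration of SQL-style querying over schemaless JSON documents:
# filter and project documents the way a SQL SELECT/WHERE would treat
# rows. The documents and helper below are a plain-Python stand-in for
# illustration, not a real document-database engine.

docs = [
    {"type": "airline", "name": "Sample Air", "country": "France"},
    {"type": "airline", "name": "Demo Wings", "country": "Japan"},
    {"type": "hotel", "name": "Test Inn", "country": "France"},
]

# Equivalent in spirit to: SELECT name FROM docs WHERE type = 'airline'
def select_names_where_type(documents, doc_type):
    # Flexible schema: filter on a field any given document may or may
    # not carry, then project just the requested attribute.
    return [d["name"] for d in documents if d.get("type") == doc_type]

print(select_names_where_type(docs, "airline"))  # ['Sample Air', 'Demo Wings']
```

The point of compatibility efforts like Couchbase's is that developers can express this kind of query in actual SQL syntax, reusing existing skills and tooling, while the underlying data remains flexible-schema documents.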
Breaking into the database market as a new vendor is easier said than done, given the dominance of the sector by established database and data management giants, as well as the cloud computing providers. We recently described the emergence of a new breed of distributed SQL database providers with products designed to address hybrid and multi-cloud data processing. These databases are architecturally and functionally differentiated from both the traditional relational incumbents (in terms of global scalability) and the NoSQL providers (in terms of the relational model and transactional consistency). Having differentiated functionality is the bare minimum a new database vendor needs to make itself known in such a crowded market, however.