Native support for semi-structured data in Apache Druid
Apr 26, 2023
Karthik Kasibhatla
Background
We’re excited to announce that we’ve added native support for ingesting and storing semi-structured data as-is in Apache Druid 24.0. In the real world, data often arrives in semi-structured shapes: data from web APIs, data originating from mobile and IoT devices, and so on. However, many databases require these nested shapes to be flattened before storage and processing in order to provide good query performance. Take a simple event-processing use case in a company: various teams log events into a multi-tenant table, but each team has its own setup and cares about different metadata fields for its analyses. To handle the data, ETL/ELT pipelines need to be set up to flatten it, and a schema has to be agreed upon upfront. As a result, not only is developer flexibility severely limited, but the rich relationships between the values within the nested structures are completely lost in the flattening.
The advent of document stores such as MongoDB has allowed nested objects to be stored in their native form, generally improving flexibility and the developer experience of working with nested data. However, document stores come with their own set of limitations for real-time analytics, making them unsuitable for building data applications. For example, MongoDB has no native support for SQL, so developers can only query it through its specific APIs, whereas SQL is better suited for analytical workloads.
Apache Druid has a lot of tricks up its sleeve to support low-latency queries, often with sub-second latency, on very large data sets. Until now, however, Druid only worked with fully flattened data, because Druid segments could only natively store data in that format. Flattening could be accomplished using the flattenSpec during ingestion, as sketched below.
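For context, here is a minimal sketch of how flattening has traditionally looked, using a flattenSpec inside the ingestion spec’s inputFormat. The field and path names here are illustrative assumptions:

```json
{
  "inputFormat": {
    "type": "json",
    "flattenSpec": {
      "useFieldDiscovery": true,
      "fields": [
        { "type": "path", "name": "shipTo_firstName", "expr": "$.shipTo.firstName" }
      ]
    }
  }
}
```

Each nested field a team cares about has to be enumerated and pulled out into its own flat column up front, which is exactly the rigidity the new nested-column support removes.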
With this new capability, developers can now ingest and query nested fields while retaining the performance they’ve come to expect from Druid on fully flattened columns, enjoying the best of both worlds in terms of flexibility and performance for their data applications. For the most part, our internal benchmarks show that query performance on nested columns is very similar to, or better than, performance on flattened data.
How does it work?
So what does semi-structured data actually look like? We use the sample data in nested_example_data.json for illustrative purposes. When pretty-printed, a sample row from the file looks like this:
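The row below is a reconstruction based on the field names used later in this post (product, department, details, and shipTo with firstName, lastName, and address); the specific values are illustrative assumptions rather than a verbatim copy from the file:

```json
{
  "time": "2022-06-13T10:32:08Z",
  "product": "Keyboard",
  "department": "Computers",
  "shipTo": {
    "firstName": "Sandra",
    "lastName": "Beatty",
    "address": {
      "street": "293 Grant Well",
      "city": "Loischester",
      "state": "FL",
      "country": "US",
      "postalCode": "88845-0066"
    }
  },
  "details": {
    "color": "plum",
    "price": 40.00
  }
}
```

Note that shipTo and details are full nested objects: exactly the kind of structure that previously had to be flattened away.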
Apache Druid 24.0 supports nested columns in both native and SQL-based batch ingestion (check out Druid’s Multi-Stage Query framework) as well as in native streaming ingestion.
While this capability is specifically built so users can ingest nested data as-is and query it back out, for SQL-based batch ingestion the SQL JSON functions can optionally be used to extract nested properties during ingestion, as illustrated below.
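Here is a minimal sketch of what such a statement might look like with the Multi-Stage Query framework. The target table name and the input source location are assumptions for illustration; JSON_VALUE extracts primitive values and JSON_QUERY extracts nested objects:

```sql
-- Sketch of SQL-based batch ingestion (MSQ) that keeps "shipTo" and
-- "details" as nested columns while also extracting a few properties.
-- The local file path "/tmp/nested_example_data.json" is an assumption.
INSERT INTO "nested_data_example"
SELECT
  TIME_PARSE("time") AS "__time",
  "product",
  "department",
  JSON_VALUE("shipTo", '$.firstName') AS "firstName",
  JSON_VALUE("shipTo", '$.lastName') AS "lastName",
  JSON_QUERY("shipTo", '$.address') AS "address",
  "shipTo",
  "details"
FROM TABLE(
  EXTERN(
    '{"type": "local", "files": ["/tmp/nested_example_data.json"]}',
    '{"type": "json"}',
    '[{"name": "time", "type": "string"}, {"name": "product", "type": "string"}, {"name": "department", "type": "string"}, {"name": "shipTo", "type": "COMPLEX<json>"}, {"name": "details", "type": "COMPLEX<json>"}]'
  )
)
PARTITIONED BY DAY
```

Declaring a column as COMPLEX<json> in the EXTERN signature is what tells Druid to ingest it as a nested column rather than a flat string.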
For classic batch ingestion, nested data can be transformed via the transformSpec within the ingestion spec itself. For example, the ingestion spec below extracts firstName, lastName, and address from shipTo and creates a composite JSON object containing product, details, and department.
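A minimal sketch of the relevant transformSpec fragment, assuming Druid’s native expression functions json_value, json_query, and json_object (the output field names here are illustrative):

```json
"transformSpec": {
  "transforms": [
    { "type": "expression", "name": "firstName", "expression": "json_value(shipTo, '$.firstName')" },
    { "type": "expression", "name": "lastName", "expression": "json_value(shipTo, '$.lastName')" },
    { "type": "expression", "name": "address", "expression": "json_query(shipTo, '$.address')" },
    { "type": "expression", "name": "shipment", "expression": "json_object('product', product, 'details', details, 'department', department)" }
  ]
}
```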
At query time, the SQL JSON functions can be used to aggregate and filter on raw nested data that was ingested as-is. Below is a sample query run against the data set.
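A hedged example under the assumptions above (the table name nested_data_example and the sample row’s field names), filtering on one nested property and aggregating another:

```sql
-- Filter on a nested property and aggregate a nested numeric value.
SELECT
  "product",
  JSON_VALUE("shipTo", '$.address.state') AS "state",
  SUM(CAST(JSON_VALUE("details", '$.price') AS DOUBLE)) AS "total_price"
FROM "nested_data_example"
WHERE JSON_VALUE("shipTo", '$.address.country') = 'US'
GROUP BY 1, 2
```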
What’s next?
Apache Druid 24.0 supports a vast array of capabilities for nested data. However, we’re excited to add more functionality and improvements in upcoming releases.
We’re keen to add support for nested columns in the Avro, Parquet, ORC, and Protobuf formats in addition to JSON, bringing native support for semi-structured data much closer to parity with Druid’s flattenSpec.
With the current support for nested columns, users can extract individual keys and values from within the nested structures. We’d like to give users the ability to define an allow and/or deny list so that they don’t spend resources decomposing fields that are only ever used in their raw form.
This is an exciting new capability we’ve introduced in Apache Druid 24.0 and we’re just getting started.
Want to contribute?
We’re always on the lookout for more Druid contributors. Check out our incredible Druid community and if you find an area which you feel excited about, just jump in!