Contributing authors: Kashif Faraz and Eric Tschetter (design partner for multi-dimension range partitioning). Reviewer: Abhishek Agarwal.
Here at Imply, we run Druid clusters that churn through hundreds of millions of rows of data every hour. And that’s only the metric data generated to monitor the actual analytical data!
For a complex system like Druid that handles enormous data loads, every component must be robust and reliable. At the same time, the system must be ever-evolving.
Recently, we implemented a feature that decreased storage size by 40% and query times by 75%, with absolutely no loss in data fidelity. Here we discuss the twists and tweaks of an integral cog of the Druid machinery that made these improvements possible: partitioning.
(TL;DR: If you are a person of few words, please refer to the table at the end to choose the best partitioning scheme for your use cases.)
The partitioning motivation
Anyone who works with data is already familiar with techniques such as partitioning, sharding and bucketing as a means to improve performance. But for the joy of listing things, let’s rehash some of the potential benefits partitioning could offer in Druid:
Parallelism: Multiple processes can operate on different portions of data simultaneously.
Distributed storage: Data can be spread across several data servers thus allowing even commodity hardware to meet memory and disk requirements.
Improved I/O management: Reasonably sized partitions make reading from/writing to disk easier and transmission over network less prone to failure.
Granular replication: For the same replication factor, say 2, replicating at the partition level gives better fault tolerance than replicating the unpartitioned data as a whole.
Perhaps we needed that refresher after all. It reminds us that partitioning is not a luxury, but rather a necessity for a scalable database system. Now let’s take a moment to consider what kind of a partitioning scheme would actually meet our needs.
The paradoxical partition
‘Spread out’ for Write
Streaming ingestion is a popular method of loading data into Druid. To ensure rapid event processing and to avoid lag, the streaming pipeline should organize data in a write-optimized manner that minimizes hot-spots. This means spreading the incoming data across several partitions to evenly distribute the ingestion workload among multiple processes and servers.
‘Group up’ for Read
Druid loads the absolute minimum amount of data necessary for a query to optimize performance. It uses indexes and other metadata to aggressively prune out rows which are not needed for a query.
For example, imagine we have data for a whole year with 365 partitions, one for each day. When querying for the first week of January, we need to look at only 7 partitions and ignore the remaining 358. This would help us save significantly on compute resources and/or query time. But if the data for a single day were spread across many partitions, this strategy would be less effective. Simply put, we must partition by time to actually benefit time-based queries.
The practice of storing similar data together is referred to as ‘data locality’. The similarity can be on the basis of the values of one or more columns (typically dimensions). Along with query performance, locality also helps in reducing disk footprint as co-locating similar data allows for better compression.
Hot-spot or not?
It is evident that both ingestion and queries can benefit from partitioning, but they have contradictory requirements. While the former requires eliminating hot-spots by spreading data out, the latter benefits from similar data being stored together, effectively creating hot-spots.
A timely solution
Despite the differences between read and write operations, the time dimension plays a key role in both of them. This is only to be expected since all event-oriented data and the insights drawn from it are more meaningful when considered over a time duration. Even simple questions like “What is the total number of users?”, “What is the increase in sales?” are more meaningful when you add the words “today”, “since last month”, or “over all time” to them.
Thus, Druid always partitions data by the timestamp dimension at a chosen segment granularity. For example, choosing a granularity of ‘DAY’ would divide the data timeline into several chunks, each containing all of the data for one day.
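In spec terms, this primary time partitioning is controlled by the segment granularity in the ingestion spec. A minimal sketch of the relevant fragment (field names follow Druid’s granularitySpec; the rest of the spec is omitted and the values are only illustrative):

```python
# Illustrative fragment only: the part of a Druid ingestion spec that controls
# primary (time-based) partitioning. With segmentGranularity set to DAY, each
# time chunk holds one day of data.
granularity_spec = {
    "type": "uniform",
    "segmentGranularity": "DAY",   # size of each time chunk
    "queryGranularity": "NONE",    # keep original event timestamps within a chunk
    "rollup": False,
}
```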
Such a time-based partitioning certainly benefits queries with a time filter. However, it might not work as well for write-time parallelism. This is because consecutive events in the input data stream, with adjacent timestamps, would be written to the same partition. Another concern is that as data grows and you start seeing billions of events an hour, even the finest time granularities will not yield partitions of a manageable size.
The solution is secondary partitioning to further divide each of the time chunks into smaller partitions (aka segments). This enables us to retain write-time parallelism and create appropriately-sized partitions while continuing to benefit time-based queries. Secondary partitioning can also create more opportunities for pruning out unnecessary rows based on column filters in a query.
Re-organization by auto-compaction
Partitioning by time takes us only half the way because we are still left with the question: how should the secondary partitioning work? Should it spread out incoming data in favor of write-time speed, or should it group similar data together to improve read-time performance? Unfortunately, these requirements are mutually exclusive and we cannot have a single data layout that satisfies both of them.
The good news is that Druid is a WORM (write once read many) system. We need to optimize for writes only during ingestion, as all later operations are reads. This means that data can be ingested from the stream in a write-optimized manner which spreads it out in favor of parallelism. Once the data is old enough to receive no more writes, Druid can reorganize it in favor of data locality.
This convenient process of rearranging data in Druid is known as ‘compaction’, so called because the resulting column-based partitioning allows for rollup as well as greater compression. You can schedule compaction to run periodically with configurations defined to attain the best data layout for your querying use cases.
Row distribution
Now that compaction allows us to choose different partitioning schemes for ingest-time and query-time, let’s find the best candidates for the two. Until recently, Druid supported the following secondary partitioning schemes (a sketch of the corresponding partitionsSpec for each follows the list):
Dynamic: The default scheme, which creates a new partition after every N rows. It provides no data locality but creates uniformly sized partitions.
Hashed: A dimension-based scheme, which creates partitions based on the hashed value of chosen partition dimensions. It offers low data locality and creates more or less uniform partitions.
Single-Dim: A dimension-based scheme, which uses the range of a single dimension to create partitions. While it boasts high data locality, it can often create partitions that are much larger than the target size.
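In an ingestion spec, the scheme is chosen through the partitionsSpec inside the tuningConfig. As a rough sketch (field names follow the Druid docs for each partitionsSpec type; the row counts are placeholders, not recommendations):

```python
# Illustrative partitionsSpec shapes for the three schemes listed above.
dynamic_spec = {
    "type": "dynamic",
    "maxRowsPerSegment": 5_000_000,   # start a new partition after this many rows
}

hashed_spec = {
    "type": "hashed",
    "partitionDimensions": ["category", "sub_category"],  # dimensions to hash on
    "targetRowsPerSegment": 5_000_000,
}

single_dim_spec = {
    "type": "single_dim",
    "partitionDimension": "category",  # ranges of this dimension define partitions
    "targetRowsPerSegment": 5_000_000,
}
```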
Let’s have a closer look at each scheme by applying it to the following sample dataset from a typical e-commerce website.
| id | category | sub_category | num_items | checkout_price |
|----|----------|--------------|-----------|----------------|
| 1 | electronics | tablet | 1 | 107 |
| 2 | apparel | kids | 3 | 316 |
| 3 | electronics | phone | 1 | 405 |
| 4 | furniture | table | 1 | 126 |
| 5 | apparel | shirts | 3 | 352 |
| 6 | electronics | laptop | 1 | 700 |
| 7 | kitchen | cutlery | 10 | 525 |
| 8 | electronics | laptop | 1 | 950 |
Dynamic Partitioning
With a target partition size of 2 rows each, dynamic partitioning of this dataset looks something like this:
| id | category | sub_category | partition number |
|----|----------|--------------|------------------|
| 1 | electronics | tablet | 0 |
| 2 | apparel | kids | 0 |
| 3 | electronics | phone | 1 |
| 4 | furniture | table | 1 |
| 5 | apparel | shirts | 2 |
| 6 | electronics | laptop | 2 |
| 7 | kitchen | cutlery | 3 |
| 8 | electronics | laptop | 3 |
By definition, the dynamic scheme creates partitions whose sizes are as close as possible to the target size. It is best suited for write-time operations, as incoming data is written to uniformly sized partitions without the need to shuffle rows. The primary drawback is that dynamic partitioning doesn’t care about data organization at all: note that each of the 4 partitions has a row for ‘electronics’. There is no data locality because the column values play no role in determining the partitions.
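The assignment itself is easy to reproduce. Here is a small sketch in plain Python (not Druid code) that mirrors the table above: rows are taken in arrival order and a new partition starts every `target_rows` rows.

```python
# Sketch: dynamic partitioning fills partitions in arrival order,
# rolling over to a new partition every `target_rows` rows.
rows = [
    (1, "electronics", "tablet"),  (2, "apparel", "kids"),
    (3, "electronics", "phone"),   (4, "furniture", "table"),
    (5, "apparel", "shirts"),      (6, "electronics", "laptop"),
    (7, "kitchen", "cutlery"),     (8, "electronics", "laptop"),
]

def assign_dynamic(rows, target_rows=2):
    # partition number = how many full partitions precede this row
    return [(row, index // target_rows) for index, row in enumerate(rows)]

for (row_id, category, sub_category), partition in assign_dynamic(rows):
    print(row_id, category, sub_category, "-> partition", partition)
```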
Hashed Partitioning
Hashed partitioning allows us to partition on any number of dimensions, so let’s use both category and sub_category. The number of partitions is determined by the given target size.
We need a hashing function to map the dimension values to a valid partition number. While the function used in practice is a 32-bit Murmur3 hash, let’s define a simpler one here that we can actually follow:
Partition Number = (length(category) + length(sub_category)) % 4
i.e. the partition number is the total length of the category and sub_category values, modulo 4.
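For the record, here is the same toy function in code so the assignments below are reproducible. This is only the simplified stand-in described above, not Druid’s actual Murmur3-based hashing:

```python
# Sketch: the toy hash partitioner used for this example. Real hashed
# partitioning in Druid uses a 32-bit Murmur3 hash of the partition dimensions.
def toy_hash_partition(category, sub_category, num_partitions=4):
    total_length = len(category) + len(sub_category)
    return total_length % num_partitions

rows = [
    (1, "electronics", "tablet"),  (2, "apparel", "kids"),
    (3, "electronics", "phone"),   (4, "furniture", "table"),
    (5, "apparel", "shirts"),      (6, "electronics", "laptop"),
    (7, "kitchen", "cutlery"),     (8, "electronics", "laptop"),
]
for row_id, category, sub_category in rows:
    partition = toy_hash_partition(category, sub_category)
    print(row_id, category, sub_category, "-> partition", partition)
```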
| id | category | sub_category | total length | partition number |
|----|----------|--------------|--------------|------------------|
| 1 | electronics | tablet | 17 | 1 |
| 2 | apparel | kids | 11 | 3 |
| 3 | electronics | phone | 16 | 0 |
| 4 | furniture | table | 14 | 2 |
| 5 | apparel | shirts | 13 | 1 |
| 6 | electronics | laptop | 17 | 1 |
| 7 | kitchen | cutlery | 14 | 2 |
| 8 | electronics | laptop | 17 | 1 |
The distribution above demonstrates the following:
There is some data locality because all rows for (‘electronics’, ‘laptop’) are assigned to partition 1. With hashed partitioning, rows with the same combination of dimension values always end up in the same partition. This implies that if a query were only looking for rows with category=‘electronics’ AND sub_category=‘laptop’, we could safely ignore all the other partitions.
The data locality is not perfect because rows for other ‘electronics’ items have ended up in other partitions.
The distribution is not as uniform as with dynamic partitioning: one partition has 4 rows while two others have only a single row each.
It is rather difficult to predict the final layout due to the complexity introduced by the hashing function (even a simple one).
Single-Dim Range Partitioning
Let’s partition our sample data on the dimension category with the same target partition size of 2 rows each. To determine the distribution, we must first identify the range of the partitioning dimension and then create boundaries at appropriate intervals. The distribution looks something like this:
| id | category (sorted) | sub_category | partition number |
|----|-------------------|--------------|------------------|
| 2 | apparel | kids | 0 |
| 5 | apparel | shirts | 0 |
| 1 | electronics | tablet | 1 (boundary) |
| 3 | electronics | phone | 1 |
| 6 | electronics | laptop | 1 |
| 8 | electronics | laptop | 1 |
| 4 | furniture | table | 2 (boundary) |
| 7 | kitchen | cutlery | 2 |
The above layout demonstrates great locality because all the rows for ‘electronics’ are in partition 1. But at the same time, that partition has 4 rows: twice the target size! This is because with single-dim, all rows for a given value of the partitioning dimension (here category) always go to the same partition. So while querying for rows with say category=‘furniture’ or category=‘kitchen’, we would need to look only at partition 2.
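To make the mechanics concrete, here is a sketch of the idea in plain Python (not Druid’s actual implementation): sort the rows by the partitioning dimension and only allow partition boundaries between distinct values, which is exactly why a heavily skewed value can blow past the target size.

```python
from itertools import groupby

# Sketch: single-dim range partitioning. Boundaries may only fall between
# distinct values of the partition dimension, so all rows sharing a value
# land in the same partition, even if that partition overflows the target.
def assign_single_dim(rows, key_index=1, target_rows=2):
    rows = sorted(rows, key=lambda r: r[key_index])
    assignments, partition, rows_in_partition = [], 0, 0
    for _, group in groupby(rows, key=lambda r: r[key_index]):
        group = list(group)
        if rows_in_partition >= target_rows:   # start a new partition at a value boundary
            partition, rows_in_partition = partition + 1, 0
        assignments.extend((row, partition) for row in group)
        rows_in_partition += len(group)
    return assignments

rows = [
    (1, "electronics", "tablet"),  (2, "apparel", "kids"),
    (3, "electronics", "phone"),   (4, "furniture", "table"),
    (5, "apparel", "shirts"),      (6, "electronics", "laptop"),
    (7, "kitchen", "cutlery"),     (8, "electronics", "laptop"),
]
for (row_id, category, sub_category), partition in assign_single_dim(rows):
    print(row_id, category, sub_category, "-> partition", partition)
```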
Multi-dimension generalization
Single-dim partitioning offers high data locality, which benefits both query performance and storage efficiency. But there are two primary challenges with using single-dim:
Only one partition dimension: It is difficult to choose a single dimension that is likely to satisfy all the requirements of partitioning, i.e. improve locality, boost query performance, etc.
Uneven partition sizes: In the example above, we saw that partitioning on a single dimension can result in uneven partitions as real-world data is often skewed.
If we could somehow mitigate these drawbacks, single-dim partitioning could very well become the best read-time strategy. Driven by this lucrative possibility, we began to investigate the effects of including more dimensions in range partitioning. For example, if we were to partition on the ranges of the tuple (category, sub_category) instead of just category, the distribution for the above dataset would be as follows:
| id | category (sorted) | sub_category (sorted within category) | partition number |
|----|-------------------|---------------------------------------|------------------|
| 2 | apparel | kids | 0 |
| 5 | apparel | shirts | 0 |
| 6 | electronics | laptop | 1 (boundary) |
| 8 | electronics | laptop | 1 |
| 3 | electronics | phone | 2 (boundary) |
| 1 | electronics | tablet | 2 |
| 4 | furniture | table | 3 (boundary) |
| 7 | kitchen | cutlery | 3 |
We can see that the partitions are more evenly sized compared to single-dim. Adding sub_category allows the ranges of both dimensions to help determine the partition boundaries, so the rows for ‘electronics’ can now be split across two partitions without compromising on data locality.
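The packing logic is the same as in the single-dim sketch above; the only change is that rows are sorted and grouped by the tuple of partition dimensions, so boundaries may now fall inside ‘electronics’ as long as identical (category, sub_category) pairs stay together. A sketch in plain Python (again, not Druid’s actual implementation):

```python
from itertools import groupby

# Sketch: multi-dim range partitioning keyed on (category, sub_category).
rows = [
    (1, "electronics", "tablet"),  (2, "apparel", "kids"),
    (3, "electronics", "phone"),   (4, "furniture", "table"),
    (5, "apparel", "shirts"),      (6, "electronics", "laptop"),
    (7, "kitchen", "cutlery"),     (8, "electronics", "laptop"),
]
key = lambda r: (r[1], r[2])            # tuple of partition dimensions
partition, rows_in_partition, target_rows = 0, 0, 2
for _, group in groupby(sorted(rows, key=key), key=key):
    group = list(group)
    if rows_in_partition >= target_rows:    # cut only at a tuple boundary
        partition, rows_in_partition = partition + 1, 0
    for row_id, category, sub_category in group:
        print(row_id, category, sub_category, "-> partition", partition)
    rows_in_partition += len(group)
```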
Safe experimentation
Encouraged by these observations and some proof-of-concept work, we ventured a little deeper into the world of multiple dimensions – a Druid multiverse, if you will. We implemented a prototype of a partitioning scheme that could work across any number of dimensions, thus creating multi-dimension range partitioning, aka ‘multi-dim’.
Imply Clarity is an APM product which allows users to monitor their Druid clusters’ operational health. It is powered by a large Druid deployment itself and provides deep visibility into query performance and data flow. Built specifically for Druid, Clarity makes it very easy to pinpoint performance bottlenecks and hot-spots on Druid clusters.
Given the huge data load churned by the Druid cluster for Clarity, we found it to be the perfect avenue for further experimentation. What better way to validate our theories than to test them on production data (with supervision, of course)? With a solid 75% decrease in query times and a 40% decrease in storage size over un-compacted data, multi-dim proved itself to be the best read-time partitioning scheme for most, if not all, use cases.
A comparative conclusion
Druid always partitions data by the timestamp dimension to benefit time-based analytical queries. A secondary partitioning is needed to further break down the time chunks into manageable partition sizes.
Based on the observations above, we can all agree that ‘dynamic’ is the best scheme for write-time partitioning, especially with streaming ingestion. A periodic auto-compaction can then reorganize the data with a suitable read-time partitioning.
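As a rough sketch, an auto-compaction configuration that reorganizes older data into range partitions might look like the following. Field names follow the documented compaction config and ‘range’ partitionsSpec; treat the exact shape and values as assumptions and consult the docs for your version:

```python
# Illustrative auto-compaction config: reorganize segments older than one day
# into multi-dimension range partitions. The datasource name and row target
# are hypothetical placeholders.
compaction_config = {
    "dataSource": "ecommerce_events",      # hypothetical datasource
    "skipOffsetFromLatest": "P1D",         # leave the most recent day alone while it is still being written
    "tuningConfig": {
        "partitionsSpec": {
            "type": "range",
            "partitionDimensions": ["category", "sub_category"],
            "targetRowsPerSegment": 5_000_000,
        }
    },
}
```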
To help you choose the best scheme for read-time, we have compiled a summary of all the options now available in Druid. Here’s a hint: go for the last one!

| Scheme | Data locality | Partition sizes | Best suited for |
|--------|---------------|-----------------|-----------------|
| Dynamic | None | Uniform | Write-time (streaming ingestion) |
| Hashed | Low | Mostly uniform | Read-time queries filtering on exact combinations of the partition dimensions |
| Single-dim | High | Can be much larger than the target | Read-time queries filtering on the single partition dimension |
| Multi-dim | High | Mostly uniform | Read-time; most, if not all, use cases |
Too good to be kept a secret, multi-dim will soon be available in Apache Druid 0.23 for everyone to use. In the meantime, it already comes packaged with Imply 2021.12 (refer to the docs). Feel free to try it out on any dataset of your choice (even prod). Stay tuned for our upcoming blog where we take you on the journey of our own multidimensional adventures!