Building Analytics for External Users is a Whole Different Animal

Mar 22, 2022
David Wang

Analytics aren’t just for internal stakeholders anymore. If you’re building an analytics application for customers, then you’re probably wondering: what’s the right database backend?

Your natural instinct might be to use what you know, like PostgreSQL or MySQL, or even to extend a data warehouse beyond its core BI dashboards and reports. But analytics for external users can be revenue-impacting, so you need the right tool for the job.

The answer comes down to the user experience. So let’s unpack the key technical considerations for the users of your external analytics apps.

Avoid the spinning wheel of death

We all know it and we all hate it: the wait state of queries stuck in a processing queue. It’s one thing to have an internal business analyst wait a few seconds or even several minutes for a report to process; it’s entirely different when the analytics are for external users.

The root cause of the dreaded wheel comes down to the amount of data to analyze, the processing power of the database, and the number of users and API calls – in short, the database’s ability to keep up with the application.

Now, there are a few ways to build an interactive data experience with any generic OLAP database when there’s a lot of data, but they come at a cost. Precomputing all the queries makes the architecture very expensive and rigid. Aggregating the data first limits the insights. Restricting the analysis to only recent events doesn’t give your users the complete picture.

The “no compromise” answer is an optimized architecture and data format built for interactivity at scale – like that of Apache Druid. How so?

First, Druid has a unique distributed and elastic architecture that prefetches data from a shared data layer into a near-infinite cluster of data servers. This design delivers faster performance than a decoupled query engine like a cloud data warehouse, because there’s no data to move at query time, and more scalability than a scale-up database like PostgreSQL or MySQL.

Second, Druid employs automatic (aka auto-magic), multi-level indexing built right into the data format to drive more queries per core. It goes beyond the typical OLAP columnar format with the addition of a global index, a data dictionary, and a bitmap index, maximizing CPU cycles for faster crunching.
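To make the bitmap-index idea concrete, here’s a minimal sketch in Python: each distinct column value maps to a bitset of the row ids containing it, so a filter becomes a cheap set intersection instead of a full scan. This is illustrative only – Druid actually uses compressed Roaring bitmaps under the hood, not Python sets.

```python
def build_bitmap_index(column):
    """Map each distinct value to the set of row ids where it appears."""
    index = {}
    for row_id, value in enumerate(column):
        index.setdefault(value, set()).add(row_id)
    return index

# Two dimension columns for the same six rows (toy data)
country = ["US", "DE", "US", "FR", "DE", "US"]
device  = ["web", "web", "ios", "web", "ios", "ios"]

country_idx = build_bitmap_index(country)
device_idx = build_bitmap_index(device)

# WHERE country = 'US' AND device = 'ios'  ->  intersect two bitsets
matching_rows = country_idx["US"] & device_idx["ios"]
print(sorted(matching_rows))  # rows 2 and 5
```

The payoff is that the work for a filter scales with the size of the bitsets being intersected rather than the total row count, which is where the “more queries per core” claim comes from.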

High availability can’t be a ‘nice to have’

If you and your dev team are building a backend for, say, internal reporting, does it really matter if it goes down for a few minutes or even longer? Not really. That’s why there has always been tolerance for unplanned downtime and maintenance windows in classical OLAP databases and data warehouses.

But now your team is building an external analytics application that customers will use. An outage here can impact revenue…and definitely your weekend. It’s why resiliency – both high availability and data durability – needs to be a top consideration in the database for external analytics applications. 

Rethinking resiliency requires thinking through the design criteria. Can you protect against a node failure or a cluster-wide failure? How bad would it be to lose data? And how much work is involved in protecting your app and your data?

We all know servers fail. The default way to build resiliency is to replicate nodes and remember to take backups. But if you’re building apps for customers, the sensitivity to data loss is much higher, and the ‘occasional’ backup is just not going to cut it.

The easiest answer is built right into Druid’s core architecture. Designed to withstand just about anything without losing data (even recent events), Druid takes a more capable and simpler approach to resiliency.

Druid implements HA and durability through automatic, multi-level replication backed by shared data in S3/object storage. This gives you the HA properties you expect, plus what amounts to continuous backup: the latest state of the database is automatically protected and can be restored even if you lose the entire cluster.
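Operationally, pointing Druid at durable deep storage is a few lines of configuration. The property names below follow Druid’s standard S3 deep-storage settings; the bucket and path values are illustrative placeholders.

```properties
# common.runtime.properties (representative fragment)
# Segments are persisted to object storage, so they survive
# even a full cluster loss and can be reloaded on recovery.
druid.storage.type=s3
druid.storage.bucket=your-deep-storage-bucket
druid.storage.baseKey=druid/segments
```

Because segments live in deep storage independently of the data servers, replacing a failed node (or an entire cluster) is a matter of re-fetching segments rather than replaying backups.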

More users shouldn’t mean crazy expense

The best applications have the most active users and the most engaging experience, so architecting your backend for high concurrency really matters. The last thing you want is frustrated customers whose applications hang.

This is much different from architecting for internal reporting, where the concurrent user count is much smaller and finite. So shouldn’t that mean the database you use for internal reporting isn’t the right fit for highly concurrent applications? Yeah, we think so too.

Architecting a database for high concurrency comes down to striking the right balance between CPU usage, scalability, and cost. The default answer is to throw more hardware at the problem: logic says that if you increase the number of CPUs, you’ll be able to run more queries. While true, this can be a very expensive approach.

The better approach is a database like Apache Druid, with an optimized storage and query engine that drives down CPU usage. The operative word is “optimized”: the database shouldn’t read data it doesn’t have to, so the same infrastructure can serve more queries in the same timespan.

Saving lots of money is a big reason why developers turn to Druid for their external analytics applications. Druid’s highly optimized data format pairs multi-level indexing – borrowed from the search-engine world – with data reduction algorithms to minimize the amount of processing required.
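One of those data-reduction techniques is ingestion-time rollup: raw events that share the same dimension values within a time bucket are pre-aggregated into a single row, shrinking the data every query has to scan. The sketch below shows the idea in plain Python; the field names are made up for illustration and this is not a Druid API.

```python
from collections import defaultdict

# Raw click events (toy data): hour bucket, a dimension, and a metric
raw_events = [
    {"hour": "2022-03-22T10", "country": "US", "clicks": 1},
    {"hour": "2022-03-22T10", "country": "US", "clicks": 3},
    {"hour": "2022-03-22T10", "country": "DE", "clicks": 2},
    {"hour": "2022-03-22T11", "country": "US", "clicks": 4},
]

# Rollup: one stored row per (hour, country), metrics pre-summed
rolled_up = defaultdict(int)
for e in raw_events:
    rolled_up[(e["hour"], e["country"])] += e["clicks"]

print(len(raw_events), "->", len(rolled_up), "rows")  # 4 -> 3 rows
```

On real event streams the reduction is often far larger than this toy 4-to-3 example, since many events repeat the same dimension combinations within a time bucket.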

The net result: Druid delivers far more efficient processing than anything else out there, supporting tens to thousands of queries per second at TB to PB+ scale.

Lastly, build what you need today but future-proof it

For our customers at Atlassian, Twitter, and Citrix, external analytics applications are critical to customer stickiness and revenue – and yours will be too. That’s why it’s important to build the right data architecture.

While your app might not have 70K DAUs off the bat (like Target’s Druid-based apps), the last thing you want is to start with the wrong database and then deal with the headaches as you scale. Thankfully, Druid can start small and easily scale to support any app imaginable.
