At Imply, we’ve talked often about how the world of analytics is changing (a16z podcast, future of analytics, the new analytics hero). More and more companies are building modern, fit-for-purpose analytics applications for internal or external consumption. These applications need a fast real-time analytical database that can handle any scale and thousands of concurrent users. For example, Atlassian (whitepaper) and Citrix (video) are embedding these applications into their products for their customers, while others, like Netflix (blog), Twitch (video), NTT (blog), and Charter (whitepaper), are building standalone internal analytics applications.
Developers at 1,000+ leading companies have adopted Apache Druid as their database for analytics applications, and many of them look to Imply to deliver a simplified experience. Today, we’re excited to announce a major leap forward in ease of use with the introduction of Imply Polaris, our fully managed database-as-a-service. With Polaris, developers can get started with Druid in minutes and build massively scalable, interactive analytics applications without specialized expertise or management overhead.
It’s quick and easy to download the Apache Druid open-source package and run through the quickstart with sample data. But things become more involved when you need to load and analyze your own real-world data, which can run into hundreds of terabytes or even petabytes.
The challenges
We wanted to make Apache Druid’s power more easily accessible to individual developers throughout their journey: from setup and initial application development to production deployment and scaling, all without depending on anyone else. We identified three challenges: infrastructure, operations, and integrations.
First, we had to abstract the infrastructure so that developers could focus on the application rather than deploying and securing a distributed database. It can be a full-time job in itself to size hardware, tune Druid to ever-changing application needs, and ensure system security. Managing distributed systems and keeping them up-to-date takes time that could be spent focused on application development instead. No one wants to build the “undifferentiated heavy lifting”.
Second, getting the best out of real-time databases requires tuning on multiple layers. Some tuning concerns the database itself: how it should store data, which indexes to use, how to prioritize queries, or how to balance data across servers. Other tuning is more application-centric and involves aligning your data model and queries with the database’s architecture to get the best performance. Either way, an engineer must spend valuable time extracting the best performance from Druid.
Third, developers need to get data into and out of the database and present it to users. They need to think about how data flows out of their application, how it gets transformed, loaded into the database, and ultimately presented to the end user. They may need additional data infrastructure to build these pipelines. That’s more overhead to build and manage, just to start with a basic application. The other direction also requires some thought. How does a developer present the data to the user? How do they give users the ability to drill into the data and explore it when questions arise? Will they need to build a whole visualization system before they get to “hello world”?
Introducing Imply Polaris: the easiest way to build analytical applications
Imply Polaris is a fully managed Database-as-a-Service and more. Most similar offerings focus only on deployment management capabilities. Sure, Polaris has all the SaaS-y features you’d expect:
Polaris gets you started with your own database in five minutes
It follows security best practices and is SOC 2 (Type 1 today, with Type 2 in a few months) and HIPAA compliant
It scales your cluster up and down with a couple of clicks and no downtime
It autoscales ingestion resources so that your files always load quickly and streaming ingestion lag stays under 10 seconds
We consider these capabilities table stakes for such an offering. But what we’re really most excited about is how Polaris rethinks the developer experience end-to-end.
For example, we know that developers who want to start building an application don’t want to set up additional data infrastructure or punch through a firewall to connect an external service before loading a single byte of data. It’s far easier for a developer to simply push data directly from their application, so we made push-based streaming available in Polaris.
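To make that concrete, here is a minimal sketch of what push-based streaming can look like from the application side: an authenticated HTTPS request carrying a JSON event. The endpoint URL, table name, and token handling below are illustrative assumptions for this post, not the exact Polaris API; the Polaris documentation has the real endpoints.

```python
# Minimal sketch of push-based streaming: send events from your app
# straight to a managed endpoint over HTTPS. The URL, table name, and
# auth scheme below are illustrative assumptions, not the exact Polaris API.
import json
import time

import requests  # pip install requests

EVENTS_URL = "https://example.api.imply.io/v1/events/my_table"  # hypothetical endpoint
API_TOKEN = "..."  # issued from your project settings (assumed)

def push_event(event: dict) -> None:
    """POST a single JSON event; batching as newline-delimited JSON works similarly."""
    response = requests.post(
        EVENTS_URL,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        data=json.dumps(event),
        timeout=10,
    )
    response.raise_for_status()  # surface auth or validation errors early

push_event({"timestamp": time.time(), "user": "alice", "action": "click"})
```

The point is what’s absent: no Kafka cluster to stand up, no connector to configure, no firewall rule to negotiate before the first byte lands.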
At the same time, we wanted to make it easy for developers to get the best performance, so Imply Polaris automatically implements performance best practices. For example, Polaris chooses the right partitioning scheme based on the state of the table and automatically compacts data as streaming data lands. Built-in performance monitoring lets developers drill into the performance of their queries. Over time, Polaris will become even more intelligent as we build in tooling to help developers understand the impact of various performance tuning knobs. That way they will know, for example, which queries will get faster and which will get slower with one partitioning setup versus another. We will give developers the visibility to understand the performance impact of pre-aggregation (aka rollup) and how to maximize it, and, more generally, how best to wield Druid’s superpowers and bend them to their will.
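If rollup is new to you, here is a small, self-contained illustration of the idea in plain Python (no Druid required): raw events that share the same time bucket and dimension values collapse into a single stored row carrying aggregated measures, which is why rollup shrinks storage and speeds up queries.

```python
# Illustration of rollup (pre-aggregation): collapse raw events that share
# the same time bucket and dimension values into one row of aggregated
# measures. A plain-Python stand-in for what the database does at ingestion.
from collections import defaultdict

raw_events = [
    {"minute": "2022-03-01T00:01", "country": "US", "clicks": 1},
    {"minute": "2022-03-01T00:01", "country": "US", "clicks": 1},
    {"minute": "2022-03-01T00:01", "country": "DE", "clicks": 1},
    {"minute": "2022-03-01T00:02", "country": "US", "clicks": 1},
]

rolled_up = defaultdict(int)
for event in raw_events:
    # Group on (time bucket, dimensions); sum the measures.
    rolled_up[(event["minute"], event["country"])] += event["clicks"]

# 4 raw rows become 3 stored rows; coarser buckets or fewer
# dimensions compress further, at the cost of drill-down detail.
for (minute, country), clicks in sorted(rolled_up.items()):
    print(minute, country, clicks)
```

The trade-off, and the reason visibility matters, is that rollup trades row-level detail for speed and footprint; the tooling described above is meant to make that trade-off measurable rather than guesswork.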
The last point I’ll touch on is our visualization layer. Users familiar with Imply will recognize Imply Pivot embedded in Polaris. Others will see a powerful UI capable of slicing and dicing highly dimensional data at the speed of thought. This is important because we want developers to be able to deliver immediate value to end users from Polaris without building a whole custom application. As soon as data is in Polaris, users can start creating visualizations and drilling down into interesting data patterns to understand what’s going on and why. Over the next year, we’ll be improving Pivot’s ability to be embedded in external applications or used wholesale by customers as an application offering for their own customers.
What’s coming up
We’re rapidly developing and expanding Polaris’s capabilities. We’ll soon be adding more regions, more project sizes, more ingestion capabilities including pull-based ingestion, and more autoscaling capabilities. We’ll also be expanding our performance monitoring capabilities and the available workflows for optimizing performance. Above all, we’re focused on making sure that every new capability we add to Polaris comes with an amazing developer experience.
How to get started with Imply Polaris
It’s easy to get started with Polaris. Simply sign up for a free trial at https://imply.io/polaris-signup (no credit card required), and within a few minutes you’ll be up and running, building apps that slice and dice data like a ninja.