The next chapter for Imply Polaris: celebrating 250+ accounts, continued innovation

Sep 20, 2022
Julia Brouillette

Today we announced the next iteration of Imply Polaris, the fully managed Database-as-a-Service that helps you build modern analytics applications faster, cheaper, and with less effort.

Since its launch in March 2022, Polaris has quickly gained momentum in the market—we’ve added more than 250 new accounts to date, and counting. We’ve seen a lot of excitement from our customers, who represent an international group of industry leaders transforming the world of real-time data with modern analytics applications.

One of those customers is Pelmorex, a Canadian weather information and media company. We asked Radu Nicolae, Technical Product Manager for Pelmorex’s DSP, about why they chose Polaris for their audience and campaign analytics use case:

“With Imply Polaris as an internal analytics powerhouse, we can track changes in traffic and enable our operations team to adjust strategies in real time,” Nicolae said. “As a cloud database service, Polaris was the fastest, most affordable, and secure way to build our Apache Druid-powered service.”

Polaris gives you the full power of open-source Apache Druid and lets you start extracting insights from your data within minutes—without procuring any additional infrastructure. You can use the same database from start to scale, with automatic tuning and continuous upgrades that ensure the best performance at every stage of your application’s life—from the first query to your first thousand users and beyond. Over the past six months, we’ve worked continuously to make Polaris even better: we’ve expanded ingestion, improved built-in visualization, and added integrations with event-streaming platforms.

In our latest Polaris release, we’re building on top of the innovative feature set announced in our March 2022 launch, delivering updates to enhance data ingestion, simplify operations, and make pricing even more flexible.

Enhanced Data Ingestion

Loading data into Polaris is now easier and more flexible with expanded support for schemaless ingestion. We’ve added support for nested columns, allowing arbitrary nesting of typed data such as JSON (for a detailed look at the new features in Apache Druid 24.0, check out this post). And because Apache Druid can natively store any JSON structure—including flat data, nested data, and arrays—you don’t need to alter the data format before ingesting it into Polaris. This is especially powerful when the shape of the data is unknown, as is often the case when ingesting logs from many different applications and services.
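As a quick illustration, here’s the kind of nested event that schemaless ingestion can take as-is, along with a query that pulls a nested field back out using Druid’s JSON functions. This is a minimal sketch: the table and column names are purely illustrative, not part of any real Polaris setup.

```python
import json

# An illustrative clickstream event with nested objects and an array. With
# schemaless ingestion, this JSON can be loaded as-is; nested fields land in
# nested (COMPLEX<json>) columns with no up-front flattening.
event = {
    "timestamp": "2022-09-20T12:00:00Z",
    "user": {"id": "u-1001", "geo": {"country": "CA", "city": "Toronto"}},
    "page": "/pricing",
    "tags": ["campaign-42", "organic"],
}
print(json.dumps(event))

# Once ingested, nested values can be extracted at query time with the JSON
# functions introduced in Apache Druid 24.0 (table and column names are
# illustrative).
query = """
SELECT
  JSON_VALUE("user", '$.geo.country') AS country,
  COUNT(*) AS page_views
FROM clickstream
GROUP BY 1
ORDER BY page_views DESC
"""
print(query)
```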

In addition to schemaless ingestion, we’re enhancing ingestion in three ways. First, we added SQL-based data transformation capabilities, which let you define new columns whose values are the outcome of any function you choose—all via SQL. With this addition, use cases like custom scoring and categorizing data at ingestion time are simpler and less time-consuming.
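To make that concrete, here’s a minimal sketch of an ingestion-time transformation expressed in SQL, using Apache Druid’s SQL-based ingestion (INSERT ... SELECT ... PARTITIONED BY) as the reference syntax. The source URL, table, and column names are illustrative, and the exact workflow in Polaris may differ slightly.

```python
# A sketch of a derived column computed at ingestion time via SQL, written
# against Apache Druid's SQL-based ingestion syntax. All names are illustrative.
ingest_sql = """
INSERT INTO orders_enriched
SELECT
  TIME_PARSE(ts)  AS __time,
  customer_id,
  order_total,
  -- derived column computed at ingestion: categorize each order by size
  CASE
    WHEN order_total >= 500 THEN 'large'
    WHEN order_total >= 100 THEN 'medium'
    ELSE 'small'
  END             AS order_size
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/orders.json"]}',
    '{"type": "json"}',
    '[{"name": "ts", "type": "string"},
      {"name": "customer_id", "type": "string"},
      {"name": "order_total", "type": "double"}]'
  )
)
PARTITIONED BY DAY
"""
print(ingest_sql)
```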

Second, Polaris now supports Theta and HyperLogLog (HLL) sketches, two classes of algorithms that use approximation and rollup to deliver subsecond query responses on terabytes of high-cardinality data. Say your application scales to ingest billions of events per day and serve many active users (which you probably hope it does)—sketches dramatically reduce the volume and processing cost of that data while maintaining extreme levels of query performance at roughly 98 percent accuracy.

Sketches have many uses. A common one is to store estimates and statistical summaries of unique values, rather than storing every unique value and its count (read: far more data). For example, if you want to estimate how many distinct users visited a webpage and then clicked on an advertisement, but you don’t want to store a row for each user’s click, this is where sketches and rollup in Apache Druid shine.
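Here’s a minimal sketch of what that looks like in Druid SQL, assuming an ad_events table with page, action, and user_id columns (all illustrative). The HLL-based approximate count answers “how many distinct users clicked?” without keeping a row per user around.

```python
# APPROX_COUNT_DISTINCT_DS_HLL returns an approximate distinct count and works
# on either a raw column or a pre-aggregated HLL sketch column produced by
# rollup at ingestion. Table and column names are illustrative.
distinct_clickers_sql = """
SELECT
  page,
  APPROX_COUNT_DISTINCT_DS_HLL(user_id) AS unique_users_who_clicked
FROM ad_events
WHERE action = 'click'
GROUP BY page
ORDER BY unique_users_who_clicked DESC
"""
print(distinct_clickers_sql)
```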

Want to see sketches in action? Check out how Nielsen Marketing Cloud uses sketches in Druid for audience and marketing performance analysis, shaving hours off data ingestion time compared with Elasticsearch.

Last, but not least, we made broader streaming options available in Polaris with support for Confluent Cloud. Streaming data pipelines are increasingly critical to the success of modern businesses, so we are excited to support more ways to mobilize streams in Polaris.
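If you already produce to Confluent Cloud, your pipeline doesn’t change. Below is a minimal sketch of the producing side—sending JSON events to a Confluent Cloud topic using the confluent-kafka Python client—with the bootstrap server, topic name, and credentials as placeholders; connecting Polaris to that topic is then configured on the Polaris side.

```python
import json

# Send a JSON event to a Confluent Cloud topic that Polaris can ingest as a
# stream. Bootstrap server, topic, and credentials below are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<CONFLUENT_CLOUD_BOOTSTRAP_SERVER>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<CONFLUENT_API_KEY>",
    "sasl.password": "<CONFLUENT_API_SECRET>",
})

event = {
    "timestamp": "2022-09-20T12:00:00Z",
    "user_id": "u-1001",
    "page": "/pricing",
    "action": "click",
}

# Produce asynchronously, then block until delivery is confirmed.
producer.produce("clickstream-events", value=json.dumps(event).encode("utf-8"))
producer.flush()
```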

Simplified Operations

We designed Polaris to get the best possible performance, availability, and security out of Apache Druid with no Druid expertise required. Simple operations are a core part of that design. From built-in monitoring that delivers a curated view for situational awareness to user-defined alerts that help you meet Druid- and business-level objectives, this release takes Polaris operations to a new level.

In addition to role-based access control, newly expanded support for resource-based access delivers more granular controls for increased security. This includes row-level access control capabilities that allow you to attach a policy directly to the resource, dictating who can view, download, or edit specific resources—a key feature for external-facing use cases. Finally, Polaris’ built-in visualization now enables faster slicing and dicing so developers can provide immediate value to end users from their Polaris-powered applications without building a custom UI.

More Flexible Pricing

Polaris now offers more flexible pricing, letting you tailor the service to your specific price and performance requirements. We’re able to do this by introducing a general-purpose node type to complement the existing compute-optimized node type. Building on the success of our compute-optimized series (A-series), the general-purpose node type (D-series) balances compute, memory, and storage capacity for more diverse workloads. Developers can keep using compute-optimized nodes for compute-intensive workloads or applications that benefit from high-performance processors, or switch to general-purpose nodes when a more balanced mix of compute and storage is sufficient. And as technical or business requirements evolve, it’s easy to move between node types with a single click.

To make pricing even more transparent, we’ve enabled more granular and comprehensive visibility into consumption and billing metrics, so you can easily customize your Polaris service to meet the unique needs of your application and users.

Learn more and get started for free

We are proud to have reached this milestone for Polaris—but we are constantly innovating, adding new capabilities around performance, ingestion, regional support, and more. In the coming weeks, we’ll provide deeper dives into new and upcoming features so you can take full advantage of this upgraded developer experience.

Ready to get started? Sign up for a free 30-day trial of Imply Polaris—no credit card required! As always, we’re here to help—if you want to learn more or simply have questions, set up a demo with an Imply expert.
