Real-time analytics architecture with Imply Polaris on Microsoft Azure

Sep 06, 2024
Adhip Gupta

This article provides an architectural overview of how Imply Polaris integrates with Microsoft Azure services to power real-time analytics applications.

About Polaris

Imply Polaris is a cloud-native, real-time analytics DBaaS built on Apache Druid®. It deploys in seconds, scales effortlessly, and doesn’t require any Druid expertise.

Polaris offers full infrastructure management with auto-scaling, strategic support, and continuous updates. Secure and resilient by design, Polaris provides a comprehensive, integrated experience that enables you to ingest data through both streaming and batch methods. Using Polaris's built-in visualization tools, you can extract valuable insights from many types of data, including clickstream, IoT events, social media, monitoring, and fraud-detection data.

Architecture

We have expanded Polaris’s capabilities to Azure. With Polaris on Azure, you can now create clusters, ingest data, execute queries, and build dashboards—all within the Azure ecosystem.

Polaris integrates seamlessly into new or existing data pipelines. It provides connectors to various data sources for easy ingestion into Polaris, lets users query their data via both the UI and the API, and supports building dashboards for data visualization. The key components of a typical data pipeline, and where Polaris fits into the ecosystem, are:
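As a sketch of the query path just described: Polaris exposes a SQL API. The endpoint shape, project name, and `Basic` API-key header below are illustrative assumptions, not the definitive contract; check the Polaris API documentation for your organization's actual URL and authentication scheme.

```python
import json
from urllib import request

# Hypothetical organization URL, project, and API key -- substitute
# your own values from the Polaris console.
POLARIS_SQL_URL = "https://example-org.api.imply.io/v1/projects/my-project/query/sql"
API_KEY = "POLARIS_API_KEY_PLACEHOLDER"

def build_sql_request(sql: str) -> request.Request:
    """Build (but do not send) an HTTP request for a Polaris SQL query."""
    body = json.dumps({"query": sql}).encode("utf-8")
    return request.Request(
        POLARIS_SQL_URL,
        data=body,
        headers={
            "Authorization": f"Basic {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_sql_request(
    "SELECT channel, COUNT(*) AS events FROM clickstream GROUP BY channel"
)
# To execute against a live project: request.urlopen(req)
```

The request is built separately from being sent so the payload can be inspected or logged before it leaves your network.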

Data Delivery

  • Azure Event Hubs, self-hosted Kafka, and Confluent Cloud on Azure (streaming)
  • Azure Blob Storage and Azure Data Lake Storage Gen2 (batch)

Data Processing

  • Apache Flink, Apache Spark
  • Imply Polaris

Data Storage

  • Imply Polaris (Azure Blob Storage and Azure Disks)

Visualization

  • Pivot (built into Polaris)
  • Third-party tools such as Tableau and Azure Managed Grafana

Data Workflow

Polaris is purpose-built to provide real-time insights on streaming data in combination with historical data. Here is how data flows into Polaris and how it can then be used to create insights for customers and their end users:

  1. Data produced from various streaming sources such as clickstream, IoT, and monitoring can be delivered using streaming platforms such as Azure Event Hubs, self-hosted Kafka, and Confluent Cloud on Azure.
  2. Historical data from batch sources such as logs, blobs, and files can be delivered through Azure data lake services such as Azure Blob Storage and Azure Data Lake Storage Gen2.
  3. Optionally, batch or streaming data can be pre-processed before ingestion into Polaris using ETL tools such as Apache Flink and Apache Spark.
  4. Polaris has dedicated connectors to Azure streaming and batch sources, making it easy to ingest this data into clusters.
  5. Once ingested, data is available to query right away. Data is partitioned, converted into segment files, and stored in tables.
  6. The data is also ready to be visualized using Polaris's built-in visualization engine, Pivot. You can build data cubes, reports, and dashboards in the Polaris UI.
    1. Polaris also integrates with third-party visualization tools such as Tableau and Azure Managed Grafana, which help customers build dashboards and reports.
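The first step of the workflow above can be sketched as follows. The event shape (field names, epoch-millisecond timestamp) is illustrative rather than a required Polaris schema, and the Azure Event Hubs send shown in the trailing comments uses the third-party `azure-eventhub` SDK with a placeholder connection string.

```python
import json
import time
import uuid

def make_click_event(user_id: str, url: str) -> str:
    """Serialize one clickstream event as the JSON a streaming
    platform would deliver to Polaris. Field names are illustrative."""
    return json.dumps({
        "timestamp": int(time.time() * 1000),  # epoch milliseconds
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,
        "url": url,
    })

# Sending with the azure-eventhub SDK would look roughly like this
# (CONNECTION_STR and the hub name are placeholders):
#
#   from azure.eventhub import EventHubProducerClient, EventData
#   producer = EventHubProducerClient.from_connection_string(
#       conn_str=CONNECTION_STR, eventhub_name="clickstream")
#   batch = producer.create_batch()
#   batch.add(EventData(make_click_event("u-42", "/pricing")))
#   producer.send_batch(batch)
```

From there, a Polaris connector pointed at the same Event Hub picks the events up for ingestion (step 4 above).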

Considerations

Deployment and Scale

  • Polaris offers flexibility in deployment by allowing you to choose from a predefined list of project shapes and SKUs. This enables you to select the appropriate compute and storage capacities based on your specific use case, data volume, and retention requirements.
  • Polaris's ingestion auto-scales according to the volume of data ingested, ensuring efficient resource utilization. You only pay for the data you ingest, with Polaris dynamically scaling resources up and down to meet demand.

Security

  • All data at rest is encrypted with AES-256 and data in transit is always sent over HTTPS with TLSv1.2.
  • Polaris provides private networking options to ensure data always remains within your cloud boundary.
  • Built-in access control features allow organizations to manage and restrict user and API access to Polaris resources.
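On the client side, the TLS 1.2 floor mentioned above can also be enforced locally. This is a minimal sketch using Python's standard-library `ssl` module; it is a general HTTPS-client hardening step, not a Polaris-specific API.

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# matching the transport guarantee described above. Certificate
# verification stays enabled (the default for create_default_context).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Pass `context=ctx` to urllib.request.urlopen(...) when calling
# the Polaris API over HTTPS.
```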

Resiliency

  • Customer data is stored in highly available cloud services. In Azure, Polaris leverages Azure Blob Storage and its geo-redundant options to ensure data is always available.
  • Compute resources are deployed across multiple Availability Zones in highly available Azure Kubernetes clusters.
  • Customer metadata is stored in highly available managed database services on Azure.

Performance

  • Projects in Polaris run in dedicated Kubernetes namespaces, and Kubernetes resource limits guarantee each project dedicated compute and storage resources.

Next Steps

Learn more and experience the power of Polaris on Azure by signing up for your 30-day free trial.
