This article provides an architectural overview of how Imply Polaris integrates with Microsoft Azure services to power real-time analytics applications.
About Polaris
Imply Polaris is a cloud-native, real-time analytics DBaaS built on Apache Druid®. It deploys in seconds, scales effortlessly, and doesn’t require any Druid expertise.
Polaris offers full infrastructure management with auto-scaling, strategic support, and continuous updates. Secure and resilient by design, Polaris provides a comprehensive, integrated experience for ingesting data through both streaming and batch methods. Using Polaris's built-in visualization tools, you can extract valuable insights from many types of data, including clickstream, IoT events, social media, monitoring, and fraud detection data.
Architecture
We have expanded Polaris’s capabilities to Azure. With Polaris on Azure, you can now create clusters, ingest data, execute queries, and build dashboards—all within the Azure ecosystem.
Polaris integrates seamlessly into new or existing data pipelines. It provides connectors to various data sources for easy ingestion, lets users query their data through both the UI and the API, and supports building dashboards for data visualization. This architecture highlights the essential components of a typical data pipeline and how Polaris fits into the ecosystem. Below are the key components:
Data Delivery
- Azure Event Hubs, self-hosted Apache Kafka, Confluent Cloud on Azure (streaming)
- Azure Blob Storage, Azure Data Lake Storage Gen2 (batch)
Data Processing
- Apache Flink, Apache Spark
- Imply Polaris
Data Storage
- Imply Polaris (Azure Blob Storage and Azure Disks)
Visualization
- Imply Polaris (Pivot)
- Tableau, Azure Managed Grafana
Data Workflow
Polaris is purpose-built to provide real-time insights on streaming data combined with historical data. Here is how data flows into Polaris and how it can then be used to create insights for customers and their end users:
- Data produced by streaming sources such as clickstream, IoT, and monitoring systems can be delivered using streaming platforms such as Azure Event Hubs, self-hosted Kafka, and Confluent Cloud on Azure.
- Historical data from batch sources such as logs, blobs, and files can be delivered through Azure data lake services such as Azure Blob Storage and Azure Data Lake Storage Gen2.
- Optionally, batch or streaming data can be pre-processed before ingestion into Polaris using ETL tools such as Apache Flink and Apache Spark.
- Polaris provides dedicated connectors for Azure streaming and batch sources, making it easy to ingest this data into clusters.
- Once ingested, data is available for querying right away. Data is partitioned, converted into segment files, and stored in tables.
- The data is also ready to be visualized using Pivot, Polaris's built-in visualization engine. You can build data cubes, reports, and dashboards in the Polaris UI.
- Polaris also integrates with third-party visualization tools such as Tableau and Azure Managed Grafana, helping customers build dashboards and reports.
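As a concrete illustration of the delivery step, the sketch below shows the shape of a clickstream event and the client configuration for Azure Event Hubs' Kafka-compatible endpoint. The namespace, topic name, and connection string are hypothetical placeholders, not values from this article.

```python
import json

# Event Hubs exposes a Kafka-compatible endpoint on port 9093.
# All names and the connection string below are placeholders.
EVENT_HUBS_KAFKA_CONFIG = {
    "bootstrap.servers": "my-namespace.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    # With Event Hubs, the SASL username is the literal string
    # "$ConnectionString" and the password is the namespace connection string.
    "sasl.username": "$ConnectionString",
    "sasl.password": "Endpoint=sb://...",  # placeholder, not a real secret
}

def make_click_event(user_id: str, url: str, ts_ms: int) -> bytes:
    """Serialize one clickstream event as a JSON message for ingestion."""
    return json.dumps(
        {"user_id": user_id, "url": url, "timestamp": ts_ms}
    ).encode("utf-8")

# Actually producing messages would use a Kafka client, for example
# confluent-kafka:
#   from confluent_kafka import Producer
#   p = Producer(EVENT_HUBS_KAFKA_CONFIG)
#   p.produce("clickstream", make_click_event("u1", "/home", 1700000000000))
```

Because Event Hubs speaks the Kafka protocol, the same producer code works whether events land in Event Hubs, self-hosted Kafka, or Confluent Cloud, with only the configuration changing.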
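For the query step, a minimal sketch of issuing a SQL query over HTTPS with an API token is shown below. The endpoint URL, token, and table name are hypothetical; consult the Polaris API documentation for the actual SQL query endpoint and authentication details.

```python
import json
from urllib import request

# Hypothetical SQL query endpoint; see the Polaris API docs for the real path.
POLARIS_URL = "https://example.api.imply.io/v1/query/sql"

def build_sql_request(api_token: str, sql: str) -> request.Request:
    """Build an authenticated HTTPS POST request carrying a SQL query."""
    body = json.dumps({"query": sql}).encode("utf-8")
    return request.Request(
        POLARIS_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_sql_request(
    "MY_API_TOKEN",  # placeholder credential
    "SELECT url, COUNT(*) AS views FROM clickstream GROUP BY url",
)
# Executing the request requires a live project:
#   with request.urlopen(req) as resp:
#       rows = json.load(resp)
```

The same token-based authentication applies whether queries come from dashboards, notebooks, or backend services.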
Considerations
Deployment and Scale
- Polaris offers flexibility in deployment by allowing you to choose from a predefined list of project shapes and SKUs. This enables you to select the appropriate compute and storage capacities based on your specific use case, data volume, and retention requirements.
- Polaris's ingestion auto-scales according to the volume of data ingested, ensuring efficient resource utilization. You pay only for the data you ingest, and Polaris dynamically scales resources up and down to meet demand.
Security
- All data at rest is encrypted with AES-256, and data in transit is always sent over HTTPS with TLS 1.2.
- Polaris provides private networking options to ensure data always remains within your cloud network boundary.
- Built-in access control features allow organizations to manage and restrict user and API access to Polaris resources.
Resiliency
- Customer data is stored in highly available cloud services. In Azure, Polaris leverages Azure Blob Storage and its geo-redundant storage options to ensure data is always available.
- Compute resources are deployed across multiple Availability Zones in highly available Azure Kubernetes clusters.
- Customer metadata is stored in highly available managed database instances on Azure.
Performance
- Projects in Polaris run in dedicated Kubernetes namespaces, with Kubernetes resource limits guaranteeing each project dedicated compute and storage resources.
Next Steps
Learn more and experience the power of Polaris on Azure by signing up for your 30-day free trial.