Imply is a powerful event analytics platform built on the open-source Druid data store. With Imply, you can explore your events using interactive visualizations, SQL, or your own custom applications. Imply is designed to be deployed on-premises or in the cloud, and to power both internal and external analytic applications. Druid forms the core of the platform, while Pivot, PlyQL, and Plywood enable data exploration immediately after data is ingested.
The easiest way to evaluate Imply is to install it on a single machine using the quickstart.
Druid is the open source analytics data store at the core of the platform. Druid enables arbitrary data exploration, low latency data ingestion, and fast aggregations at scale. Druid can scale to store trillions of events and ingest millions of events per second. Druid is best used to power user-facing data applications.
For more information about Druid, please visit http://druid.io.
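To make "fast aggregations" concrete, here is a minimal sketch of a Druid native timeseries query as it would be POSTed to a Broker's `/druid/v2` endpoint. The `wikipedia` datasource, the `count` metric, and the interval are illustrative, not part of any default installation:

```python
import json

# A native Druid "timeseries" query: hourly event counts over one day
# for a hypothetical "wikipedia" datasource.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",
    "granularity": "hour",
    "intervals": ["2016-06-27/2016-06-28"],
    "aggregations": [
        {"type": "longSum", "name": "edits", "fieldName": "count"}
    ],
}

# This JSON body would be sent with Content-Type: application/json as
# POST /druid/v2 to a Broker, e.g. http://localhost:8082/druid/v2.
print(json.dumps(query, indent=2))
```

The Broker fans the query out to the Data servers holding the relevant segments and merges their partial results before responding.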
Imply Pivot is a web-based UI for visual data exploration. It features dimensional pivoting, slice-and-dice, and nested visualization, as well as contextual information and navigation. Use Pivot to perform OLAP operations with your data and immediately visualize your data once it is loaded in the platform.
For more information about Pivot, please visit http://pivot.imply.io.
PlyQL is a SQL-like query language for Druid, built on top of Plywood, a library for expressing and translating analytic queries. You can issue PlyQL queries using the plyql command line tool included in the Imply distribution.
For more information about PlyQL, please visit http://plywood.imply.io/plyql.
For more information about Plywood, please visit http://plywood.imply.io.
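As a sketch of driving the bundled plyql tool from a script, assuming the Imply `bin` directory is on the PATH and a Broker is reachable at `localhost:8082` with a hypothetical `wikipedia` datasource:

```python
import shutil
import subprocess

# Hypothetical query: top pages by edit count in a "wikipedia" datasource.
sql = ("SELECT page, COUNT(*) AS edits FROM wikipedia "
       "GROUP BY page ORDER BY edits DESC LIMIT 5")

# plyql takes the Broker host with -h and the query with -q.
cmd = ["plyql", "-h", "localhost:8082", "-q", sql]
print(" ".join(cmd))  # the equivalent shell invocation

# Only invoke plyql if it is actually installed and on the PATH.
if shutil.which("plyql"):
    subprocess.run(cmd, check=True)
```

The same command can of course be typed directly at a shell prompt; wrapping it in a script is only useful when embedding queries in automation.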
Imply bundles Druid, Pivot, PlyQL, and Plywood in a single, easy-to-install package. It can be run in any configuration supported by the underlying components. For users just getting started, we recommend running a cluster with these three server types:
Query servers are the endpoints that users and client applications interact with. Query servers run a Druid Broker that routes queries to the appropriate data nodes. They also include an Imply Pivot server as a way to directly explore and visualize your data.
Data servers store and ingest data. Data servers run Druid Historical Nodes for storage and processing of large amounts of immutable data, Druid MiddleManagers for ingestion and processing of data, and optionally Tranquility components to assist in streaming data ingestion.
For clusters with complex resource allocation needs, you can break apart the pre-packaged Data server and scale the components individually. This allows you to scale Druid Historical Nodes independently of Druid MiddleManagers, as well as eliminate the possibility of resource contention between historical workloads and real-time workloads.
The Master server coordinates data ingestion and storage in your Druid cluster. It is not involved in queries. It is responsible for starting new ingestion jobs and for handling failover of the Druid Historical Node and Druid MiddleManager processes running on your Data servers.
Master servers can be deployed standalone, or in a highly-available configuration with failover. For failover-based configurations, we recommend separating ZooKeeper and the metadata store onto their own hardware. See the clustering documentation for more details.
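As an illustrative sketch, the server types above correspond to running different sets of Druid services, each identified by a `druid.service` property in its own `runtime.properties` file. The service names and ports below follow common Druid defaults; the exact layout of the `conf` directory in your distribution may differ:

```
# Query server
druid.service=druid/broker          # druid.port=8082

# Data server (one runtime.properties per service)
druid.service=druid/historical      # druid.port=8083
druid.service=druid/middleManager   # druid.port=8091

# Master server
druid.service=druid/coordinator     # druid.port=8081
druid.service=druid/overlord        # druid.port=8090
```

Splitting the Data server apart, as described above, simply means running the historical and middleManager services on separate machines with their own resource allocations.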
Imply offers several advantages over stock Druid:
It is easy to migrate to and from stock Druid and the Imply distribution.