This module addresses best practices and considerations for data modeling in Druid. It covers why you can't simply cut and paste a data model from an RDBMS; denormalization and flattening; schema design; which columns to include; field type options and selection; and the use of aggregation, complex data types such as HyperUnique and DataSketches, and multi-value dimensions.
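To make these modeling ideas concrete, here is a minimal sketch of the `dataSchema` portion of a Druid ingestion spec that applies rollup, a multi-value dimension, and a DataSketches aggregator. The datasource, column names, and field values are hypothetical placeholders, and the `HLLSketchBuild` aggregator assumes the `druid-datasketches` extension is loaded.

```json
{
  "dataSchema": {
    "dataSource": "events_rollup",
    "timestampSpec": { "column": "ts", "format": "iso" },
    "dimensionsSpec": {
      "dimensions": [
        "country",
        "device",
        { "type": "string", "name": "tags", "multiValueHandling": "SORTED_ARRAY" }
      ]
    },
    "metricsSpec": [
      { "type": "count", "name": "events" },
      { "type": "longSum", "name": "bytes_total", "fieldName": "bytes" },
      { "type": "HLLSketchBuild", "name": "unique_users", "fieldName": "user_id" }
    ],
    "granularitySpec": { "queryGranularity": "hour", "rollup": true }
  }
}
```

With `rollup: true` and hourly query granularity, raw rows sharing the same hour and dimension values are pre-aggregated at ingestion time, while the HLL sketch preserves an approximate distinct count of `user_id` that exact sums would lose.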
An overview of Druid University topics.
A high level overview of Apache Druid and its key capabilities.
Druid components, processes and ecosystem.
The types of analytics that are the best fit for Druid.
A deeper dive into the Druid components and processes.
The details and benefits of the Druid columnar file format.
Best practices and considerations for data modeling in Druid.
Build an ingestion spec for data streaming from Apache Kafka.
Build an ingestion spec for Druid native batch ingestion.
Using the Druid SQL API.
A brief walkthrough of the Imply Pivot analytics UI.
A short Druid University summary and next steps video.
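As a taste of the Druid SQL API topic listed above, the sketch below builds an HTTP POST to Druid's SQL endpoint (`/druid/v2/sql`) using only the Python standard library. The router URL and the `wikipedia` datasource are assumptions (the latter is the standard Druid tutorial dataset); adjust both for your cluster.

```python
import json
from urllib import request

# Hypothetical router address; replace with your Druid cluster's host/port.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

def build_sql_request(query: str) -> request.Request:
    """Build a POST request for Druid's SQL endpoint.

    resultFormat "object" asks Druid to return one JSON object per row.
    """
    body = json.dumps({"query": query, "resultFormat": "object"}).encode("utf-8")
    return request.Request(
        DRUID_SQL_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sql_request(
    "SELECT channel, COUNT(*) AS edits FROM wikipedia GROUP BY channel"
)

# Sending the request requires a running Druid cluster, e.g.:
# with request.urlopen(req) as resp:
#     rows = json.load(resp)
```

Posting SQL as a JSON body to the router is the documented pattern for the Druid SQL API; the ingestion-spec modules above cover getting data into the `wikipedia`-style datasource this query assumes.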