Online analytical processing (OLAP) and online transactional processing (OLTP) are the two most common types of database systems today. While some use cases draw on aspects of both, this distinction is still the easiest way to think about databases.
OLAP databases are used to run analytics on massive volumes of data. This is ideal for providing a detailed understanding of operations, profit, and other key performance indicators. Need to find the most profitable store and its best-selling products for a specific quarter? Need to find the solar panel producing the least electricity this month (and why)? Use OLAP.
OLTP databases, however, are used for daily operations, such as checkouts, searches, deliveries, and more. Need to track likes on a social media post? Need to manage bookings at a resort? Use OLTP.
Read on to learn more about the characteristics of OLAP and OLTP databases, details such as schema and multidimensional data analysis, and where each type of database excels.
What is an OLTP database?
OLTP systems are optimized for many users executing large quantities of small, fast transactions. Examples of these fast-paced, high-volume operations include a hedge fund trading stocks, an airline verifying flight times for ticket purchases, or a hotel blocking off rooms for online bookings.
There are four broad categories of transactional operations, illustrated in the SQL sketch after this list:
Create operations add new data to the database. For instance, if a new customer creates an account, then their information will be inserted into the database for future interactions, such as shopping, shipping, billing, and more.
Read operations access existing data from the database. Examples include retrieving a customer address for shipping a package, or checking inventory levels before adding a product to a shopping cart.
Update operations change existing data. This could be an online retailer modifying a product listing, or a shopper correcting their credit card information.
Delete operations remove data. This can take the form of a news site removing subscriber information upon request, or an ecommerce site erasing old customer addresses.
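As a rough illustration, these four operations map onto familiar SQL statements. The following is a minimal sketch using a hypothetical customers table; the table and column names are assumptions, not drawn from any particular product.

```sql
-- Create: insert a new customer record
INSERT INTO customers (id, name, email, shipping_address)
VALUES (1001, 'Ana Torres', 'ana@example.com', '12 Calle 60, Merida');

-- Read: look up an address before shipping a package
SELECT shipping_address FROM customers WHERE id = 1001;

-- Update: the shopper corrects their stored email
UPDATE customers SET email = 'ana.torres@example.com' WHERE id = 1001;

-- Delete: remove the record upon request
DELETE FROM customers WHERE id = 1001;
```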
Importantly, all transactional database operations must be ACID-compliant in order to guarantee the reliable, resilient execution of database transactions. These properties, illustrated in the sketch after this list, are:
Atomic. Each transaction is a single unit of work, which will either complete or fail, without any halfway measures. This guards against inconsistencies arising from partially completed transactions. For instance, if a customer data update fails to complete, then the whole operation will be aborted and re-attempted, rather than leaving half the information in the database fields.
Consistent. Databases have constraints to ensure uniformity and data integrity, and to reject any transactions that violate these rules. As an example, if an e-commerce order comes in with only a partial customer address, the database can block this transaction from completing without the required information.
Isolated. Concurrent transactions do not interfere with each other. This prevents issues like phantom reads, when rows appear or disappear between two identical queries in the same transaction because another transaction inserted or deleted them, and non-repeatable reads, when a row read twice within one transaction returns different values because another transaction modified it in between.
Durable. The changes committed by transactions will persist, even amidst system failures, power outages, or hardware troubles. This is typically accomplished by writing committed changes to non-volatile storage, such as disk, supplemented by regular backups in a separate data store like deep storage.
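To make the first two properties concrete, here is a hedged sketch in standard SQL: a constraint that rejects orders missing required fields, and a transaction that either fully applies an address change or rolls back. The table and column names are hypothetical.

```sql
-- Consistency: NOT NULL constraints block orders that lack required fields
CREATE TABLE orders (
  id          BIGINT PRIMARY KEY,
  customer_id BIGINT NOT NULL,
  ship_to     TEXT   NOT NULL,  -- an INSERT without a shipping address is rejected
  status      TEXT   NOT NULL
);

-- Atomicity: both updates succeed together or not at all
BEGIN;
UPDATE customers SET shipping_address = '45 Paseo Montejo, Merida' WHERE id = 1001;
UPDATE orders    SET ship_to = '45 Paseo Montejo, Merida'
WHERE customer_id = 1001 AND status = 'pending';
COMMIT;  -- on any error, a ROLLBACK leaves no partial change behind
```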
There are several types of OLTP database management systems (DBMS), each with different data models and formats, and with associated strengths and weaknesses.
Relational databases date from the 1970s and were, in fact, among the first types of databases. As a result, they are a mature technology with a wide-ranging ecosystem of products and tooling, as well as integrations with non-relational databases. Because data is queried through Structured Query Language (SQL), this database class is sometimes referred to as SQL databases.
This data model uses tables to store data, with related data (such as passport information) spread across multiple tables. To unify this dispersed data for an operation (such as billing and shipping an order to a customer), users have to JOIN disparate tables. As datasets grow, operations like JOINs become more resource-intensive and can degrade performance. For this reason, relational databases may not always scale well, and users may encounter limitations or latency.
Common relational databases include MySQL, PostgreSQL, and Microsoft SQL Server.
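For example, shipping an order typically means joining data spread across several tables. The following is a minimal sketch in standard SQL, with hypothetical orders and customers tables:

```sql
-- Combine order details with the customer's shipping address
SELECT o.id, o.total, c.name, c.shipping_address
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'ready_to_ship';
```

As tables grow into the millions of rows, joins like this one are where indexing and query planning start to matter.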
NewSQL databases are an evolution of relational databases, intended to address their shortcomings (specifically scaling, flexibility, and performance) while preserving their strengths (ACID compliance and the familiarity of SQL). Towards that end, NewSQL databases often have distributed architectures that spread data across multiple nodes or servers, efficient handling of concurrent transactions, and fault tolerance and resilience to ensure high availability.
NoSQL databases, also called non-relational databases, are a catch-all category for technologies that have arisen to address some of the shortcomings of relational databases. The term is often expanded as "not only SQL," and this database type encompasses a variety of data models, such as document, in-memory, graph, and key-value. Many NoSQL databases offer advantages over relational databases, including flexibility (they accommodate a wider range of data, such as unstructured and semi-structured data), better query performance for certain workloads, and more intuitive scaling.
What is an OLAP database?
Before data can be useful, it has to be stored, organized, and analyzed. That's where OLAP databases come in, executing intensive, deep analysis to answer questions crucial to the health and performance of the organization. These insights are then used both for high-level concerns, such as steering company strategy, and for more granular operations, such as improving individual features or products.
OLAP products are diverse, and include anything from data warehouses to business intelligence platforms. In addition, OLAP databases may utilize a snowflake schema, a star schema, or a galaxy schema to organize their data for easier processing and faster retrieval.
Traditionally, OLAP systems were optimized for a low volume of long-running analytical queries over large amounts of data. To facilitate these operations, some types of analytical databases utilized a data model called an OLAP cube, a framework for collecting, visualizing, and exploring data across different dimensions. Although OLAP cubes are not necessarily cubes in the geometric sense, they present a large volume of data and dimensions in an intuitive format that doesn't require code, SQL statements, or JOINs, lowering the barrier to entry.
Using an OLAP data cube, business analysts can execute these five types of operations (see the SQL sketch after this list):
- Rollup compacts data by summarizing similar values, compressing data intervals (such as going from one second to one minute), and reducing the rows or dimensions involved in analysis. This reduces the storage footprint of an OLAP solution and its associated costs, and is ideal for low-cardinality data.
- Drill down is the opposite of rollup, essentially adding more detail to data (expanding intervals from one minute to one second, for instance). Additional granularity is helpful for more detailed analysis, especially on high cardinality data. An example of this would be a utility analyst drilling down to daily consumption metrics from monthly figures.
- Slicing data enables analysts to view data along a single dimension and obtain more specifics. To return to the utilities use case, the data analyst could slice their data cube to focus on the consumption for a specific region or city, such as Merida, Mexico.
- Dicing data is when analysts isolate multiple dimensions in order to compare and contrast. Our hypothetical analyst could dice their data cube to compare the consumption between two cities, like Merida and Valladolid, to see which city uses more electricity during which months, weeks, or even hours.
- Pivoting the data cube is a metaphor—it essentially rotates the cube on its axis to provide a different perspective on data. Rather than looking at the consumption of Merida and Valladolid, an analyst could turn the data cube to see if there were any blackouts or service gaps, and perhaps build a forecast to predict future occurrences.
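Several of these cube operations correspond to familiar aggregation patterns in SQL. The following is a hedged sketch in PostgreSQL-style SQL, assuming a hypothetical consumption table of per-second electricity readings with city and kwh columns:

```sql
-- Rollup: compress one-second readings into one-minute totals per city
SELECT date_trunc('minute', reading_time) AS minute,
       city,
       SUM(kwh) AS kwh_used
FROM consumption
GROUP BY 1, 2;

-- Slice: focus on a single city along the location dimension
SELECT date_trunc('month', reading_time) AS month,
       SUM(kwh) AS kwh_used
FROM consumption
WHERE city = 'Merida'
GROUP BY 1;

-- Dice: compare two cities month by month
SELECT date_trunc('month', reading_time) AS month,
       city,
       SUM(kwh) AS kwh_used
FROM consumption
WHERE city IN ('Merida', 'Valladolid')
GROUP BY 1, 2;
```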
When to use OLTP—and when to use OLAP?
In general, many application architectures will utilize both OLTP and OLAP databases, though for different functions.
Transactional databases often power the daily operations of an application, such as checkout, search, shipping, or inventory for an online retailer. These are everyday tasks that are crucial to the running of any business or organization. Crucially, these routine processes play to the strengths of OLTP systems: they are small, computationally cheap, and often involve high numbers of concurrent users and queries.
In addition, these transactions usually involve a small number of records. As a result, OLTP databases can manage large datasets with low latency, and are highly scalable. In fact, many OLTP products (with exceptions such as older relational databases) follow a scale-out model, adding more nodes or servers in order to increase capacity.
One example of an OLTP system at work within an application is an airline booking system. When a traveler searches for flights between two cities with no direct connections, the application can quickly look up the connecting times via a third, intermediate airport to ensure that an itinerary gives passengers enough time to deplane and transfer.
Another example within the same application is checking capacity. After the traveler submits a search, the booking system ensures that only flights with available seats are displayed, while sold-out flights are filtered out. Flights that are rapidly filling up can also be labeled as such, encouraging potential passengers to act quickly and book their seats.
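A hedged sketch of that capacity check in standard SQL, assuming a hypothetical flights table that tracks total capacity and seats sold (airport codes are illustrative):

```sql
-- Show only upcoming flights between two airports that still have seats available
SELECT flight_number,
       departure_time,
       capacity - seats_sold AS seats_left
FROM flights
WHERE origin = 'MEX'
  AND destination = 'MID'
  AND departure_time >= CURRENT_DATE
  AND seats_sold < capacity
ORDER BY departure_time;
```

A query like this touches a handful of rows and returns in milliseconds, which is exactly the shape of work OLTP systems are built for.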
On the other hand, analytical databases are intended for computationally heavy operations on a large amount of data, but with a far smaller number of concurrent users and queries. The main focus of OLAP analysis lies in providing data for teams to make business decisions, report profits and losses, optimize processes for efficiency, or identify and assess risks through open-ended exploration. Most of these functions are also not time-sensitive: for instance, in the past, business analysts would leave reports to run and compile overnight.
Traditionally, OLAP has struggled to accommodate high concurrency. Analytical operations simply require more resources than CRUD operations, because aggregating, processing, and visualizing data across multiple dimensions is more computationally intensive.
Unlike CRUD operations, which touch a small number of records, OLAP aggregations often span large datasets over long periods of time. The hypothetical airline would use an OLAP system (such as a data warehouse) to provide high-level perspectives on profitability, punctuality, fuel efficiency, staffing, and more, crunching months or even years of data. This could equate to gigabytes or terabytes of data, compounding the resources involved and making these tasks far more computationally intensive.
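For contrast with the booking queries above, an analytical query at the hypothetical airline might aggregate years of history at once. A rough sketch in PostgreSQL-style SQL, assuming a flight_history fact table with delay and fuel columns:

```sql
-- Monthly on-time rate and fuel burn across two years of flight history
SELECT date_trunc('month', departure_time) AS month,
       AVG(CASE WHEN arrival_delay_min <= 15 THEN 1.0 ELSE 0.0 END) AS on_time_rate,
       SUM(fuel_liters) AS total_fuel_liters
FROM flight_history
WHERE departure_time >= DATE '2022-01-01'
GROUP BY 1
ORDER BY 1;
```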
Simply put, while transactional databases help you handle the day-to-day minutiae of an application, analytical databases help you understand and improve organizational procedures and strategy.
What is ETL?
But what about combining the two? What if an OLAP platform needs access to transactional data in order to complete its analysis? What if an OLTP database needs to clear out old, “cold” data to make space for more recent, “hot” data?
In this situation, an OLAP database can extract data from the OLTP database, transform it into a format that is more compatible with its data model, and finally, load it into its own storage. This process, known as ETL (extract, transform, load), utilizes an external connector to move data between the two systems. Sometimes the process is ELT instead, with transformation and conversion happening after the data is loaded into the OLAP database.
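In SQL terms, the transform-and-load step might look like the following hedged sketch, assuming hypothetical OLTP order rows landed in a staging table and an orders_fact warehouse table; in practice, the extract itself happens over a connector or export job rather than a single query:

```sql
-- Reshape yesterday's staged orders and load them into the warehouse fact table
INSERT INTO orders_fact (order_id, order_date, customer_id, total_usd)
SELECT id,
       CAST(created_at AS DATE),
       customer_id,
       amount_cents / 100.0       -- convert cents to dollars during the transform step
FROM staging_oltp_orders
WHERE created_at >= CURRENT_DATE - INTERVAL '1' DAY;
```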
Alternatively, some OLTP systems may export older data into deep storage such as Amazon S3 or Hadoop, usually as a backup. These deep stores may be structured, following a specific data model such as relational; unstructured, lacking any schema but ideal for storing images, videos, and voice data; or semi-structured, lacking a fixed schema but not entirely unstructured either, a category that includes formats such as XML, JSON, or key-value pairs.
While structured storage has a data model and thus cannot accommodate all types of data, it may not require stored data to be further transformed before analysis, saving an extra step. Conversely, while unstructured deep storage can store more varieties of data, its loose, unorganized nature requires data to be transformed into a format suitable for OLAP before it can be analyzed. Lastly, although semi-structured deep storage can be complex and costly to manage, it can flexibly accommodate more data types and data sources, and can more easily model complex relationships like nested arrays.
Bridging the gap: real-time analytics
Previously, OLAP queries were rarely urgent, utilized historical batch data, and did not feature a high degree of concurrency.
However, real-time analytics is the complete opposite: it often ingests data via streaming technologies like Amazon Kinesis or Apache Kafka, requires that analytical queries be completed in milliseconds, and often fields a high concurrent rate of both queries and users. In a sense, real-time analytics is the intersection between both analytical and transactional databases—and the next evolution of OLAP systems.
Today, real-time analytics (RTA) is becoming increasingly important, especially as data grows in volume, speed, and variety. To complicate matters, organizational needs have become more demanding: more complex queries, with more dimensions, on larger datasets (anything from terabytes to petabytes), under tight SLAs for response times. After all, the organization that moves fastest tends to prevail, and that often comes down to who can extract the most insight from fresh data in the least time.
One example is a security operations (SecOps) team at a major financial institution, tasked with screening millions of daily transactions to prevent fraudulent activity of all types. This includes purchases made with stolen credit cards, money laundering via hacked ATMs, and online banking transactions faked with counterfeit credentials.
In this scenario, it's impossible for human teams to manually sift through a massive firehose of data. It's also impractical to use batch data to proactively stop fraud, simply because this data arrives after the transactions have been carried out, and traditional OLAP analysis is too slow to keep pace. Given the financial penalties for credit card chargebacks or violating money laundering regulations, it's easy to argue that averting potential crises, rather than resolving them afterwards, is more cost-effective.
A better approach is to analyze data in real time. After ingesting streaming data, a good RTA database can provide fast analysis on current data, simplifying the process of identifying anomalies. For instance, any spikes in spending or purchases made in an unfamiliar geographic area could trigger an alert to human analysts for intervention.
To accomplish this goal, the SecOps team can also use an RTA database to power their machine learning. At regular intervals (such as hourly or daily), an algorithm can analyze fresh transaction data to generate fraud-prediction inferences, which are then stored in the RTA database. When the algorithm detects possible fraud, it can quickly retrieve stored inferences to compare against the suspicious activity. Depending on the result, the algorithm can flag the action for further review by human analysts, who then decide whether to permit, block, or investigate the transaction further.
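A hedged sketch of such an anomaly check, assuming a hypothetical transactions table fed by the stream, with card and country columns:

```sql
-- Flag cards with an unusual burst of activity in the last five minutes
SELECT card_id,
       COUNT(*)                AS txn_count,
       COUNT(DISTINCT country) AS countries_seen
FROM transactions
WHERE txn_time >= CURRENT_TIMESTAMP - INTERVAL '5' MINUTE
GROUP BY card_id
HAVING COUNT(*) > 20
    OR COUNT(DISTINCT country) > 1;
```

A query like this only pays off if it returns in milliseconds on data that arrived seconds ago, which is precisely the gap real-time analytics databases aim to fill.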
To learn more about real-time analytics, read the Imply use case page. For more information on how one bank used Apache Druid to prevent money laundering, read this blog post.
Summary
Ultimately, OLAP and OLTP databases are two sides of the same coin, playing separate but equally important roles in an application architecture. Transactional databases execute large numbers of small, lightweight CRUD operations on massive datasets, serving a high rate of queries per second and many concurrent users. These transactions comprise the daily traffic of an application, such as purchases, confirmations, shipping, or other user interactions.
In contrast, analytical databases run more complex analysis on even larger datasets, which can span months or even years of historical data. These databases are generally used to generate insights for reporting, visualizations, and strategy. While some OLAP databases are intended for a fairly large, concurrent user base and a higher volume of queries (such as for customer-facing data analytics), other products are optimized for a lower number of users (such as business reporting).
Real-time analytics is an emerging niche that merges both, requiring the fast responses and high concurrency of transactional databases while preserving the analytical complexity and large data sizes of analytical databases. In sectors like security or digital advertising, end users need insights from current data as rapidly as possible.
What is Imply and Apache Druid?
Apache Druid is an open source database that ingests streaming data, rapidly executes (and returns) complex analytical queries at scale, and accommodates massively concurrent users and operations. Apache Druid (and Imply, its associated product family) sits at the intersection of OLTP and OLAP, combining the best of both worlds into a data ecosystem for the fast-paced, high-volume demands of real-time analytics.
Imply products are compatible with both Apache Kafka and Amazon Kinesis, the two top streaming platforms today, and provide native support for SQL, with no workarounds or connectors necessary. In addition, Druid provides useful capabilities for working with streaming data, such as exactly-once ingestion, which guarantees data delivery and prevents duplication, and query-on-arrival, which makes data available for querying as soon as it is ingested.
Druid also includes features to improve the developer experience, such as schema autodetection, which automatically discovers the schema of incoming data and updates tables accordingly, removing the need to do so manually. This is particularly helpful for use cases where data source fields change frequently, such as IoT sensors, which may have inconsistent firmware updates and thus missing or inconsistent values across their readings.
Users can also directly query data from deep storage without having to first load data onto Druid’s data servers—enabling cost savings, a simplified data architecture, and faster, more flexible reporting and data analysis.
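As an illustration only, a Druid SQL query against a hypothetical clicks datasource might roll up the last day of streaming events by hour; the datasource and column names here are assumptions:

```sql
-- Hourly event counts over the most recent day of streaming data
SELECT TIME_FLOOR(__time, 'PT1H') AS event_hour,
       COUNT(*)                   AS events
FROM clicks
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY 1
ORDER BY 1;
```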
To learn more about Druid, read the architecture guide.
Imply Polaris, a fully managed database-as-a-service, is the easiest way to get started with Druid. Register for a free trial of Polaris today.