Real-time data is information that flows directly from the source to end users or applications. In contrast to other types of data, such as batch data, real-time data typically does not undergo transformation, storage, or other intermediate steps before it is delivered to its consumers. While some types of real-time data are processed before arrival, that processing must happen nearly instantaneously to preserve delivery speed and minimize latency.
The defining characteristic of real-time data is time sensitivity. Real-time data and its associated insights expire incredibly quickly, and so must be analyzed and capitalized on without delay.
One example is nautical navigation software, which must gather hundreds of thousands of data points per second to provide weather, wave, and wind data that is accurate to the minute. Anything slower endangers the end users whose lives depend on this data, such as ship crews.
The opposite of real-time data is batch data, which is gathered, processed, and analyzed in groups (batches). Because batch processing operates on a slower timeline, it is not as resource intensive or as high stakes as real-time processing, and it is well suited to unhurried analysis of historical data.
The classic example of batch processing would be business analysts preparing quarterly reports for executives. Because these reports have long deadlines and are not as urgent as real-time use cases, analysts can run batch processing jobs overnight or over several days.
This explainer will discuss the mechanics, challenges, benefits, and use cases of real-time data. Read on to understand how real-time data is analyzed and acted on, which sectors and industries use real-time data, and how to take advantage of this emerging field.
What are the two types of real-time data?
Real-time data comes in two related, but distinct types.
Event data
This data type captures specific incidents at a single point in time. Events are timestamped to record the time of occurrence, and may be continuously generated, such as a train’s speed reading, or created in response to a specific action, such as a login attempt or a credit card transaction. Other examples include a heart rate reading from a fitness tracker, a shipment reaching a waypoint, or inventory falling below a certain threshold.
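As an illustration, a single event might be represented as a small timestamped record. The sketch below is a hypothetical login-attempt event in Python; the field names are invented for this example rather than drawn from any standard schema.

```python
from datetime import datetime, timezone

# A hypothetical login-attempt event: a single incident captured at one point
# in time, with a timestamp recording when it occurred. Field names here are
# illustrative, not a standard schema.
login_event = {
    "event_type": "login_attempt",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user_id": "user-1234",
    "source_ip": "203.0.113.42",
    "success": False,
}
```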
Streaming data
This data type is a constant, continually updated flow of data. While streaming data can carry timestamps, they are not a requirement as they are for event data. Examples include the routes of taxis or delivery trucks; monitoring data from air quality sensors; and the transponder locations of airplanes.
To complicate matters, streaming data is not just a type of data, but also the best way to deliver real-time data to applications. The most popular technologies for streaming data ingestion are Apache Kafka and Amazon Kinesis.
Event data can also be streamed directly into applications—even if it’s not directly a form of streaming data. One way to think of this is that streaming is a larger umbrella which can include events alongside other data types. However, event data is only streamed into an application if it is required for a real-time use case; otherwise, it can be ingested via batch processing.
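In practice, event and streaming data are often published to a streaming platform for downstream consumers. The sketch below shows a producer sending a timestamped event to an assumed Kafka topic using the third-party kafka-python client; the topic name and broker address are placeholders for illustration.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # third-party client: pip install kafka-python

# Connect to a hypothetical local Kafka broker and serialize events as JSON.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

# Publish a timestamped event to an assumed "vehicle-positions" topic.
producer.send("vehicle-positions", {
    "truck_id": "truck-42",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "lat": 40.7128,
    "lon": -74.0060,
})
producer.flush()  # block until the event has actually been delivered to the broker
```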
What is real-time analytics?
Real-time analytics provides fast insights on fresh data. Its defining characteristic is that time to value is very short, because insights are very time sensitive. In contrast, a traditional use case for analytics, such as business intelligence, is much less urgent and has a longer time frame.
Without real-time analytics, real-time data is of limited value. In fact, users need real-time analytics in order to take action on their real-time data, whether it’s an energy provider programming their solar panels to follow the angle of the sun or a bank preventing fraudulent credit card transactions.
At the same time, real-time data and real-time analytics are two parts of the same cycle. By ensuring a steady stream of raw events and information, real-time data is the foundation of any instantaneous analytics pipeline or automatic process.
To learn more about real-time analytics, read this page.
What are the benefits of real-time data?
For an organization that’s seeking to maximize the value of its real-time data, implementing the proper pipeline and procedures offers benefits such as:
Enhanced decision-making
By providing information that is accurate to the minute, real-time data can transform how organizations act upon data. Because batch processing relies on historical data and operates slowly, it risks introducing outdated information into the decision-making process.
One factor is timeliness. In some sectors, conditions change very quickly, and organizations have to respond equally rapidly in order to capitalize on the situation—or even just to provide a safe, acceptable level of service to end users.
One example is patient monitoring at a major hospital. Devices transmit patient data, such as heart rate, respiratory rate, blood pressure, or oxygen saturation, to cloud-based software. If any of these vital indicators drops below a certain threshold, alerts must go out to hospital staff, who can then respond quickly to the issue and decide how to proceed.
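A minimal sketch of this kind of threshold-based alerting logic might look like the following. The vital signs, thresholds, and alerting behavior here are illustrative assumptions, not clinical guidance.

```python
# Hypothetical lower bounds for a few vital signs; a real system would use
# clinically validated thresholds and also check upper bounds.
VITAL_MINIMUMS = {
    "heart_rate_bpm": 50,
    "respiratory_rate_bpm": 10,
    "oxygen_saturation_pct": 92,
}

def check_vitals(patient_id: str, reading: dict) -> list[str]:
    """Return an alert message for any vital sign below its threshold."""
    alerts = []
    for vital, minimum in VITAL_MINIMUMS.items():
        value = reading.get(vital)
        if value is not None and value < minimum:
            alerts.append(f"Patient {patient_id}: {vital}={value} below {minimum}")
    return alerts

# Example: a single incoming reading triggers an oxygen-saturation alert.
for alert in check_vitals("patient-007", {"heart_rate_bpm": 72,
                                          "respiratory_rate_bpm": 14,
                                          "oxygen_saturation_pct": 89}):
    print(alert)  # in practice this would page hospital staff
```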
By providing more actionable insights, real-time data and analytics empower organizations to make better decisions more quickly. Is a stock trading algorithm mistiming the market and selling too late or purchasing too early? With batch processing, this issue would only be detected and resolved long after it occurred. With real-time data and analytics, however, teams can more quickly find and fix the problem.
Improved operational efficiency
Real-time data also enables a finer level of control over daily operations. A delivery truck fleet streams data to a cloud-based application. If several trucks in the fleet are caught in a traffic jam, the application can alter the routes of other drivers to avoid this slowdown and continue deliveries at a steady pace.
Another possibility is longer equipment lifespans due to improved maintenance schedules. Geothermal power plants tap into underground wells of hot water to generate steam for electricity. Pipes, pumps, and sensors are located deep underground and operate at high pressures and temperatures. Operators have to monitor and analyze real-time sensor data for predictive maintenance, preventing expensive outages or disruptions.
Real-time data and analytics can also enable automation, which removes a lot of rote, routine work from teams. One example is asset management: if the occupancy level of a building falls below a certain level, then the application can be programmed to turn off the lights or reduce heating or cooling.
Better customer experiences
Customer loyalty is heavily affected by the quality of the customer experience, whether customers are using a digital service or purchasing a product. Instant personalization can ensure that their preferences are met and their behavior accommodated. This could be discounts for their favorite hotels on a travel application, shortcuts for frequently used commands, or effective product suggestions.
Customer support can also benefit from real-time data. By accessing fresh information on a customer’s history and issues, teams can more easily find and fix the root causes of a problem. Is a graphic design user having difficulty rendering objects on their cloud-based platform? The help team can analyze their usage data and quickly determine that the latency is due to a misconfigured setting.
Competitive advantage
Ultimately, real-time data analytics provide organizations with an edge over their rivals. In particular, a company or team can now respond quickly to changes in economic conditions, shifts in user preferences, and emerging competition.
Innovation can also be strengthened with real-time data. By shortening the time to insight, companies can more quickly identify patterns, needs, or market gaps, and begin building new products (or improving existing ones) to capitalize on these new opportunities.
Cost efficiency is another possible benefit. Real-time data can enable automation in areas like inventory management or resource planning, allowing companies to operate more efficiently. Other examples include predictive analytics to improve maintenance and lengthen equipment lives, better personalization for ad audiences, and more precise environmental monitoring to avoid penalties.
How does a real-time data pipeline work?
Because of its unique requirements, real-time data requires a specialized data architecture in order to extract and provide insights to end users and applications in a timely manner. A sample real-time data pipeline would likely include:
Ingestion. The pipeline ingests data from various sources, such as IoT sensors, social media feeds, user access logs, and more. For this stage, the method of choice is either a messaging system or streaming platform such as Apache Kafka or Amazon Kinesis.
Processing. After data enters the pipeline, it has to be processed, which could include being cleaned of errors or noise, transformed into a format better suited to analytics, or enriched through aggregations and joins. The most common stream processors are technologies like Apache Flink, Apache Spark Streaming, or Apache Kafka Streams. (A minimal sketch of ingestion, processing, and loading follows this list.)
Loading. The next step is for data to be loaded into a database optimized for real-time data and analytics. While some organizations may prefer one database for all their data needs, most databases are specialized for specific niches: transactional databases, for example, run routine business operations well but are not ideal for deep analysis. At this stage, teams may also decide to make data available to other analytics tools or applications via APIs or messaging services.
Data analytics and visualization. Before any decisions can be made, the data must be analyzed and insights extracted. Here, teams will use either an off-the-shelf or custom application to rapidly query data, draw conclusions, or perform actions. An additional option is visualization through dashboards or other graphics, which provide an intuitive, easily understood medium for executives or other decision makers.
Monitoring and maintenance. As it operates, the pipeline has to be assessed for accuracy and reliability. Any issues, such as bottlenecks, system failures, or data issues, have to be identified and resolved promptly so as not to compromise the flow of data and insights.
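To make the early stages more concrete, here is a minimal sketch that consumes events from an assumed Kafka topic, applies a light cleaning step, and hands the results to a placeholder loading function. It uses the third-party kafka-python client; the topic, broker address, field names, and load_into_database function are assumptions for illustration, not a reference implementation.

```python
import json

from kafka import KafkaConsumer  # third-party client: pip install kafka-python

def clean(event: dict) -> dict | None:
    """Drop malformed events and normalize field names (illustrative rules)."""
    if "timestamp" not in event or event.get("value") is None:
        return None  # incomplete event: filter it out rather than pass it on
    return {"ts": event["timestamp"],
            "sensor": event.get("sensor_id", "unknown"),
            "value": float(event["value"])}

def load_into_database(event: dict) -> None:
    """Placeholder for loading into a real-time analytics database."""
    print("loading", event)

# Subscribe to an assumed "sensor-readings" topic on a local broker.
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:           # blocks, processing events as they arrive
    cleaned = clean(message.value)
    if cleaned is not None:
        load_into_database(cleaned)
```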
Alongside these traditional stages, data pipelines may also incorporate emerging technologies, such as:
Artificial intelligence. Given the advances in machine learning, from chatbots to specialized models, many real-time data pipelines now incorporate such models to speed up traditional steps. Algorithms may perform upstream tasks such as filtering data values, removing noise, or pre-aggregating data in preparation for advanced operations. Downstream, AI may also perform tasks such as anomaly detection, sentiment analysis, or pattern recognition (a small anomaly-detection sketch follows this list).
Multimedia, multi-modal insights. Rather than just lines of text or two-dimensional charts, real-time data pipelines could also display results in more varied, interesting formats for human audiences with different learning styles. This includes push notifications or emails for automatic alerts, or interactive dashboards and visualizations for flexible data exploration.
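As a toy example of such a downstream task, the sketch below flags values that deviate sharply from a rolling average (a simple z-score test). The window size and threshold are arbitrary assumptions, and production systems would typically use far more sophisticated models.

```python
import statistics
from collections import deque

class RollingAnomalyDetector:
    """Flag values that deviate sharply from the recent rolling average."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # keep only recent values
        self.threshold = threshold           # how many std devs counts as anomalous

    def is_anomaly(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:  # wait for enough history to be meaningful
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = RollingAnomalyDetector()
readings = [20.1, 20.3, 19.8, 20.0] * 5 + [35.7]   # last reading is a spike
flags = [detector.is_anomaly(r) for r in readings]
print(flags[-1])  # True: the spike is flagged as an anomaly
```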
Who uses real-time data—and how?
Today, real-time data is everywhere. Fields as diverse as manufacturing, DevOps, and cybersecurity rely on real-time data and analytics to extract insights and hone their competitive advantages in many varied ways. Here are some examples:
Observability
From outages to slowdowns, application observability helps teams optimize performance, maximize the impact of maintenance, and ultimately minimize the length and severity of any issues that arise.
Most modern applications run on microservices architectures, which decompose different functions (such as checkouts or searches for an ecommerce portal) into separate services that are added or removed as needed. While this provides flexibility and scalability, it also drastically increases operational complexity and introduces confusion into how traffic and data move between components.
When problems occur, debugging can also be complex because teams may not necessarily know what to look for—the classic “unknown unknown.” As a result, the best observability platforms will ingest significant amounts of real-time data in the form of metrics, logs, traces, user interactions, and other events, before performing pinpoint calculations at speed to surface time-sensitive insights.
For instance, an engineering team could track a disruption in network traffic by working backwards, isolating the region that failed first and then analyzing the event logs to find the malfunctioning device or digital endpoint that triggered the problem. To do so, they need to explore data flexibly, rapidly browsing interactive dashboards that convert torrents of real-time data into a format that humans can understand and analyze. Ideally, these dashboards should let users perform core OLAP operations: drilling down, slicing and dicing, pivoting, and rolling up data.
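To illustrate what these operations look like on a small scale, the sketch below approximates roll-up, slicing, and pivoting on an invented sample of metrics using pandas; real observability platforms perform the equivalent interactively over far larger volumes.

```python
import pandas as pd

# A tiny, invented sample of request metrics from two regions.
df = pd.DataFrame({
    "region":   ["us-east", "us-east", "eu-west", "eu-west"],
    "service":  ["checkout", "search",  "checkout", "search"],
    "errors":   [12, 3, 48, 5],
    "requests": [1000, 2500, 900, 2100],
})

# Roll up: total errors and requests per region.
rollup = df.groupby("region")[["errors", "requests"]].sum()

# Slice: focus on the checkout service only, then drill into per-region errors.
checkout_errors = df[df["service"] == "checkout"].set_index("region")["errors"]

# Pivot: errors broken out by region and service in one view.
pivot = df.pivot_table(index="region", columns="service", values="errors")

print(rollup, checkout_errors, pivot, sep="\n\n")
```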
To learn more about the role of real-time data in application observability, please visit the dedicated page.
Security and fraud prevention
In a sense, security and fraud analytics are related to observability. Both utilize similar methods (open-ended exploration of real-time data through dashboards or other interactive means) to detect and fix issues (in this case, a malicious party or harmful interaction). Some observability platforms even include security information and event management (SIEM) within their product offerings.
However, security and fraud analytics do require specialized toolsets, and they examine a smaller, distinct subset of real-time data, often logs and traces: the events that users (and attackers) generate in the course of their actions. For instance, an unauthorized system entry will create a log as a record, and further actions that the attacker takes can be recorded as a trace, showing the path of their actions through an application environment. With this information, a cybersecurity team can lock them out and prevent further damage.
Banks and financial institutions are another industry that places a heavy emphasis on security and fraud prevention. Whether it’s money laundering or stolen credit cards, malicious parties can ruin the lives of bank customers, destroy bank reputations, and cause banks to incur fines or other heavy penalties. As a result, banks are incentivized to use real-time data to prevent fraud, rather than resolve it after the fact. This includes the use of dashboards and other interactive graphics, as well as automated approaches like machine learning.
To learn more about the role of real-time data in security and fraud prevention, please visit this page.
External analytics
Today, analytics is for everyone, not just internal teams. Whether it’s a game creation system showing performance data to third-party developers, a digital advertising platform measuring audience engagement, or a fitness tracking application surfacing health metrics to its users, external-facing analytics is both a value-added component and a core product offering.
But serving paying users presents different challenges. A paid service must be highly available and performant, especially if customers are using it to monitor or optimize their own paid product offerings. In addition, any external analytics service has to handle a vast volume of real-time data, support rapid analytics on this continuous stream, and accommodate high query and user traffic. Failure to accomplish any of these goals can result in unhappy clients, churn, lost revenue, or brand damage.
Speed is the first necessity. No customer wants to wait around or deal with error messages or the spinning pinwheel—particularly if they need insights for their own environment. That requires data ingestion and analytics to occur in real time, to ensure rapid response times.
Scale is another key requirement. As an example, if a fitness application has thousands of customers, each with a device that generates multiple events per second for metrics like heart rate, stress, or REM sleep, that could equate to tens of thousands of events per second. Any fitness tracker has to be able to ingest, process, and analyze all of these events, and serve the resulting trends back to users, nearly instantaneously.
To learn more about the role of real-time data in external, customer-facing analytics, please visit this page.
IoT and telemetry
The greatest strength of the Internet of Things (IoT) is the ability to bridge the digital and the physical. By connecting physical devices, such as sensors or actuators, with the power of digital analytics software, organizations can optimize operations, provide a picture of real-time performance, identify and correct inefficiencies, and even improve safety conditions.
Still, real-time analytics for IoT and sensor telemetry can be uniquely challenging. Not only is data streamed quickly and at scale, but such real-time data may also need to be queried on arrival because it is especially time sensitive. Data flows can also fluctuate wildly depending on factors such as the time of day, the season, or special events. A point of sale (POS) device could emit millions of events per hour during a holiday sale, and far fewer during off-peak times.
Most IoT data is timestamped event data generated by sensors. Any database has to accommodate this data type and include features for handling it, such as densification or gap filling, so that datasets are complete and suitable for analytics or visualization.
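As a rough illustration of gap filling, the sketch below densifies a short sensor series that is missing one reading, using pandas; the timestamps, frequency, and interpolation strategy are arbitrary assumptions.

```python
import pandas as pd

# A sensor that should report every minute, but the 12:02 reading is missing.
readings = pd.Series(
    [21.4, 21.6, 22.1],
    index=pd.to_datetime(["2024-01-01 12:00", "2024-01-01 12:01", "2024-01-01 12:03"]),
)

# Densify to a complete one-minute grid, then fill the gap by time-based interpolation.
dense = readings.resample("1min").mean().interpolate(method="time")
print(dense)
# 12:02 now holds an interpolated value (about 21.85) instead of being absent.
```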
Flexible data exploration, usually through dashboards, is also important. Teams may need to drill down, slice and dice, pivot, and roll up their data to find insights. Which devices are performing at suboptimal levels? Which devices are showing anomalous temperatures? All these questions, and more, must be answered.
Resilience and availability are other key factors. Data loss can create serious problems for compliance, record-keeping, or safety. As an example, the loss of all GPS data for an airplane in mid-flight can be dangerous in certain conditions, forcing pilots to rely on less advanced methods of navigation such as radio signals, dead reckoning, or overworked air traffic controllers.
To learn more about the role of real-time data for IoT and telemetry, please visit this page.
Product analytics
Customer feedback drives many processes: refining existing features (or releasing new ones), growing the user base, and ultimately, turning a profit. Still, assessing user opinions can be tricky, given the competing demands on users’ attention and their reluctance to fill out long surveys.
Instead, organizations can ingest and analyze user behavior data, in the form of direct interactions such as clicks or swipes, as well as contextual metrics like bounce rates. Using this information, an organization can better understand the trends driving adoption or churn, and thus create a better user experience.
As with the other use cases, speed, scale, and streaming are critical. After all, competition is intense, and the application that can be iterated and improved fastest will attract new users, retain existing ones, and have a better chance to thrive. Therefore, whoever can ingest, process, analyze, and act upon real-time data has a distinct advantage.
One example could be a digital travel platform. Faced with a huge range of options for booking hotels and flights, a traveler might not know where to start, and may simply pick the cheapest or most familiar application. One platform could tailor this user’s experience, providing discounted fares, offering them free loyalty programs, alerting them to price drops, or gently prompting them to finish their booking via push notifications.
To learn more about the role of product analytics, please visit this page.
What are some challenges associated with real-time data?
By nature, real-time data cannot be managed or accommodated in the same manner as other data types, such as batch data. In fact, there are a variety of important considerations for real-time data, including:
Handling data volume and velocity
As mentioned above, real-time data use cases often involve large volumes of data moving at speed, which is usually ingested into applications and environments through streaming technologies like Apache Kafka or Amazon Kinesis. As a result, all resources, such as storage, compute, and network, have to be optimized for both speed and scalability.
Speed is necessary because real-time data and its insights expire rapidly. Therefore, any organization that leaves real-time data on the table can miss the information it needs to optimize its performance or products. This is also why any real-time data processing has to be minimal or rapid, executed by specialized stream processors such as Apache Flink.
One example is communication-based train control (CBTC), a digital system that monitors, routes, and manages train traffic. CBTC systems require a constant, rapid flow of data from sensors in order to determine the positions of trains; identify and respond to data disruptions or equipment malfunctions; and run trains closer together without compromising safety. Without fast-arriving, real-time data, CBTC systems cannot accomplish these tasks, and the entire transit system grinds to a halt.
Scale is another key requirement for real-time data infrastructure. For example, CBTC systems have to ingest a high volume of events and deliver this data to applications and end users. A single train line could have roughly 100 sensors to detect indicators such as track temperatures, speeds, or proximity between trains, each generating several events per second. Compounded across multiple lines in a larger transit network, this could equate to thousands of events per second, or millions of events per hour.
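A back-of-the-envelope estimate, using assumed figures in the spirit of the example above, shows how quickly these rates add up:

```python
# Assumed figures for illustration only; real networks vary widely.
sensors_per_line = 100        # track temperature, speed, proximity, etc.
events_per_sensor_per_sec = 3
lines_in_network = 12

events_per_sec = sensors_per_line * events_per_sensor_per_sec * lines_in_network
events_per_hour = events_per_sec * 3600

print(f"{events_per_sec:,} events/second")   # 3,600 events/second
print(f"{events_per_hour:,} events/hour")    # 12,960,000 events/hour
```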
The velocity and volume dynamic is present throughout real-time data use cases, and isn’t limited to IoT examples. Other sectors, such as streaming media, digital advertising, and online gaming, also have to address it when designing their application architectures. For instance, a game studio running a massively multiplayer online role-playing game (MMORPG) needs to ingest, process, and analyze countless events per second for player actions such as selling items, PvP battles, crafting objects, and more.
Minimizing latency
Real-time data pipelines are especially vulnerable to delays or breakdowns in ingestion, processing, or analytics, which can have cascading, catastrophic effects on end users. Just as a single delayed train can force passengers to miss their connections or slow down other trains, so too can late-arriving data throw off an entire series of processes.
Upstream issues are especially damaging. If a streaming platform or a stream processor has an outage, that creates a bottleneck, leaving real-time data to pile up until an SRE or DevOps team can find and fix the issue. This also slows down downstream components, such as databases, applications, analytics, visualizations, and ultimately, decision-making.
For instance, a delayed speed sensor reading can affect a CBTC system’s automatic braking protections, which in turn raises the possibility of train collisions, derailments, or other dangerous, undesirable results.
But data latency can also cause issues beyond safety concerns. The vast majority of stock trading is done by algorithms, which can gather, analyze, and act on real-time data much faster than humans. To ensure that algorithms are selling or buying the right stocks at the optimal price and moment, they require an uninterrupted flow of the latest data, such as market fluctuations, competitor actions, and even geopolitical events. Any slowdown or disruption in that flow could throw off trading, causing algorithms to mistime transactions and lose revenue by holding unprofitable stocks for too long.
Ensuring data quality and reliability
In the real world, data may be noisy: fields may be incomplete or missing due to issues such as inconsistent firmware updates across devices; values may be massive outliers that deviate far from statistical norms; and records may be duplicated due to errors in generation or delivery.
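A minimal cleanup pass over such data might deduplicate records and drop implausible values, as in the sketch below; the field names, deduplication key, and plausibility bounds are assumptions for illustration.

```python
def clean_batch(events: list[dict], low: float = -40.0, high: float = 85.0) -> list[dict]:
    """Drop duplicate events (same device + timestamp) and out-of-range readings."""
    seen = set()
    cleaned = []
    for event in events:
        key = (event.get("device_id"), event.get("timestamp"))
        if None in key or key in seen:
            continue                      # missing identifiers, or a duplicate record
        value = event.get("temperature_c")
        if value is None or not (low <= value <= high):
            continue                      # missing reading or implausible outlier
        seen.add(key)
        cleaned.append(event)
    return cleaned

sample = [
    {"device_id": "s1", "timestamp": "2024-01-01T12:00:00Z", "temperature_c": 21.5},
    {"device_id": "s1", "timestamp": "2024-01-01T12:00:00Z", "temperature_c": 21.5},  # duplicate
    {"device_id": "s2", "timestamp": "2024-01-01T12:00:00Z", "temperature_c": 999.0}, # outlier
]
print(clean_batch(sample))  # keeps only the first, valid reading
```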
Using this unreliable, low-quality data could skew any analysis: flawed data will result in flawed insights and flawed decisions. As an example, a brick-and-mortar retailer might utilize an automatic inventory management system to handle products within stores, replenishing low stock, forecasting future demand, or pinpointing popular and unpopular goods. For large chain stores with high foot traffic and rapid inventory turnover, quickly cleaning their real-time data before exposing it to analytics is crucial.
If the management system ingests faulty sales and revenue data for analysis, it can create inaccurate forecasts. In turn, this could result in prioritizing the creation and sale of unpopular items, undervaluing customer preferences, and ultimately, hurting revenue and store reputation.
Data integration and governance
Just as data is not always clean and accurate, so too is it rarely homogeneous. It can come in many different formats, from many different sources, and through many different applications. All of this can create compatibility issues with real-time analytics software.
Another dimension is security. Many governments regulate sensitive data such as healthcare information or personal contacts, requiring encryption and a right to removal. As such, organizations have to standardize processes around consistency, security, and compliance.
One example is a large chain of hospitals. Within this network, physicians, administrators, nurses, and other employees use many different programs and applications—some for monitoring patient health indicators, others for assisting in complex procedures such as surgeries, and some to assist severely impaired patients with functions such as dialysis or respiration.
All of these applications generate a flow of data, which has to be streamed into separate electronic health record (EHR) software for safekeeping, or into other platforms for analysis and alerting. This data likely comes in different formats, such as timestamped events for patient monitoring or video records of surgeries. All of it has to be cleaned and standardized before it can be processed for analysis.
All of these disparate programs also have to encrypt data or regulate user access, ensuring that only authorized individuals can see or manipulate data. The consequences of leaked or misused healthcare data are significant, including fines, lawsuits, and in the worst cases, even criminal charges or facility shutdowns.
How can I unlock real-time insights?
Apache Druid is an open source database for real-time data and analytics. Built to address the speed, scale, and streams of this data type, organizations use Druid as a data foundation for applications across a wide range of use cases, from IoT to customer-facing analytics to cybersecurity. Druid also forms the backbone of Imply, a database company that provides tools to build and share visualizations, a Druid-as-a-service offering, and a range of other products.
From the beginning, Druid was designed for the challenges of streaming data. Built to be natively compatible with Amazon Kinesis and Apache Kafka, Druid can directly ingest streaming data without additional software. In addition, Druid offers exactly-once ingestion to guarantee data consistency and guard against duplication, while also providing query-on-arrival, so that users and applications can instantly access data for analysis. This removes concerns about delays, unclean data, or duplication.
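As a rough sketch of what this looks like in practice, streaming ingestion from Kafka is configured by submitting a supervisor spec to Druid's supervisor API. The spec below is heavily abridged, and the data source name, topic, broker address, and endpoint are assumptions; consult the Druid documentation for the full format for your version.

```python
import requests  # third-party: pip install requests

# Abridged Kafka ingestion supervisor spec; many optional fields are omitted.
supervisor_spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "sensor_readings",                      # assumed table name
            "timestampSpec": {"column": "timestamp", "format": "iso"},
            "dimensionsSpec": {"useSchemaDiscovery": True},       # let Druid detect columns
            "granularitySpec": {"segmentGranularity": "hour"},
        },
        "ioConfig": {
            "topic": "sensor-readings",                           # assumed Kafka topic
            "inputFormat": {"type": "json"},
            "consumerProperties": {"bootstrap.servers": "localhost:9092"},
            "useEarliestOffset": True,
        },
        "tuningConfig": {"type": "kafka"},
    },
}

# Submit the spec to an assumed local Druid Overlord endpoint.
response = requests.post(
    "http://localhost:8081/druid/indexer/v1/supervisor",
    json=supervisor_spec,
)
response.raise_for_status()
```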
Because real-time data and its insights expire so quickly, Druid was also designed for speed. Thanks to its unique design, Druid can return query results in milliseconds, regardless of the volume of data or the number of concurrent queries and users. This ensures that analysts and applications will have fresh data easily available for their analyses and operations, that insights will be generated and acted upon in a timely manner, and that downstream operations will run smoothly and be unaffected by delays.
Druid can also scale elastically. Different functions are split across separate node types, so that they can be added or removed to match demand. Druid’s deep storage layer also serves a secondary purpose, facilitating scaling by acting as a common data store, so that data and workloads can be easily retrieved and rebalanced across nodes as they are spun up or down.
The primary purpose of Druid’s deep storage is to ensure resilience by providing continuous backup for emergencies. This additional durability protects against data loss; should a node fail, its data can be recovered from deep storage and restored across the surviving nodes.
Another useful feature is schema autodetection. Some use cases, such as IoT, may utilize data with different fields, leading to consistency issues. While other databases may require manual intervention and downtime to correct this problem, Druid can instead automatically change tables accordingly, such as adding NULL values to fill in missing fields.
To learn more about Druid, read our architecture guide.
To learn more about real-time analytics, request a free demo of Imply Polaris, the Apache Druid database-as-a-service, or watch this webinar.