
What is Real-Time?

Updated: Dec 19, 2020


Making sense of one of the most malleable and ill-defined terms in business.

One of the more frequently used and widely misunderstood terms in supply chain management, and in business in general, is “real-time.” Consultants and vendors (myself included) have been talking about creating the “real-time enterprise” for at least a couple of decades now. At the same time, there have been very few, if any, formal definitions of what real-time means for a business process, much less an entire enterprise.

The term has taken on additional relevance in the past five years, as many new-age Internet companies have moved their big data architectures from batch-oriented to streaming (aka real-time) designs.

When most people think of real-time, they tend to think of a system in which inputs are processed and responded to in milliseconds, or immediately. This creates a mental model of the “real-time enterprise” as one that can sense the “real world” and respond immediately to changes in it. But this is not a formal definition. Furthermore, it’s unrealistic to think of an entire enterprise, or even many of its business processes, as being able to (or even wanting to) respond to every one of its inputs immediately.

So how should we define and think about real-time?

Formal Definition from Control Engineering

Control engineering is a branch of mathematics and engineering that deals with monitoring and managing the behavior of dynamical systems. A controller is given a goal; a typical goal is to maintain a process variable between upper and lower limits, around a set point. The controller has intelligence about what it takes to achieve the goal. It continuously senses inputs from the process, runs intelligent algorithms, and then produces outputs that manipulate the physical world in some way to achieve the goal.

In a strict sense, the controller must respond in a given timeframe to ensure that the process stays under control. Failure to respond in that timeframe could lead to a runaway or unstable process with potentially significant negative repercussions. The software in this situation is given a time window (typically in milliseconds) in which it has to run and return a new answer to the controller, which then makes a corresponding change to the physical process (if needed).

When the controller is able to sense, run its algorithms, and then deterministically respond to the process in a given timeframe, it is said to be “real-time.” Here “deterministic” means certainty: no matter the input, the controller will always respond within the defined time window. In supply chains, these types of situations are often found at the intersection of the physical and digital worlds, typically in factories and warehouses, where electrical, mechanical, and chemical processes must be controlled.
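To make the loop concrete, here is a minimal sketch in Python of a sense-and-respond controller holding a process variable between limits. The sensor, actuator, and timing values are invented for illustration, and a true hard real-time system would enforce the deadline in the scheduler itself rather than merely checking it after the fact:

```python
import random
import time

SET_POINT = 100.0           # target for the process variable
LOWER, UPPER = 95.0, 105.0  # control limits around the set point
DEADLINE_MS = 10.0          # the time window the controller must meet

def read_sensor() -> float:
    """Hypothetical sensor: here we just simulate a noisy process."""
    return SET_POINT + random.uniform(-8.0, 8.0)

def apply_output(adjustment: float) -> None:
    """Hypothetical actuator: would manipulate the physical process."""
    pass

for _ in range(100):                      # a real controller loops forever
    start = time.perf_counter()

    value = read_sensor()                 # 1. sense
    if value > UPPER:                     # 2. decide (trivial on/off logic;
        adjustment = -1.0                 #    a real controller might run PID)
    elif value < LOWER:
        adjustment = +1.0
    else:
        adjustment = 0.0                  # inside limits: no action needed
    apply_output(adjustment)              # 3. respond

    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > DEADLINE_MS:
        # Missing the window breaks the deterministic guarantee; a hard
        # real-time system treats this as a fault, not a log line.
        print(f"Deadline miss: {elapsed_ms:.2f} ms > {DEADLINE_MS} ms")
```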

While formalized in the process control world, the idea of a sense-and-respond control loop can be generalized to broad supply chain management processes. I applied this thinking to the sales and operations planning (S&OP) process in my 2009 paper, “A Rudder for Course Correction.” This is nothing new. In fact, it is a central theme behind Jay Forrester’s seminal 1961 work, “Industrial Dynamics,” which describes many of the multi-enterprise demand-supply dynamics associated with supply chain management.

Real-Time Simulation

Early in my career I worked as a software engineer in the field of “real-time simulation.” The software we developed had to adhere to the formal control engineering definition of real-time: each program was given a processor time slice, and its code had to execute fully within that slice; otherwise, the system would become unstable. Since the simulator produced results just as they would occur in the real world, it was said to operate in “real-time.” In other words, it was “real-time” just like the “real world.”
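The core mechanic can be sketched as a fixed time-step loop. The model function and the 20-millisecond slice below are invented for illustration, but the constraint is the real one: finish each frame within its slice, or fall behind the real world:

```python
import time

TIME_SLICE = 0.020  # 20 ms per frame: 50 simulation steps per real second

def step_model(state: float, dt: float) -> float:
    """Hypothetical model update; stands in for the simulated physics."""
    return state + dt * (1.0 - state)    # e.g., a simple first-order lag

state, sim_time = 0.0, 0.0
for frame in range(250):                 # 5 seconds of simulated time
    start = time.perf_counter()

    state = step_model(state, TIME_SLICE)
    sim_time += TIME_SLICE

    elapsed = time.perf_counter() - start
    if elapsed > TIME_SLICE:
        # Overrunning the slice means simulated time lags wall-clock
        # time: the system is no longer "real-time."
        raise RuntimeError(f"frame {frame} overran its time slice")
    time.sleep(TIME_SLICE - elapsed)     # idle out the rest of the slice,
                                         # keeping sim time locked to real time
```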

What About Higher-Level Business Processes?

For higher-level business processes such as sales and operations planning, there are no such dire consequences. However, there are still time windows, and there are repercussions for missing them. Historically, these time windows have been measured in minutes and hours, not milliseconds. Having said that, there is an increasing trend toward shorter and shorter time windows at higher and higher levels of the business process stack.

This trend is driven by competition and is affecting all business processes. For example, planning cycles increasingly resemble operational or execution cycles. Companies that couple shorter frequencies with higher intelligence win. As discussed earlier, in a process control setting such as a plant or warehouse, real-time is measured in milliseconds. For higher-level business processes such as demand planning, supply planning, and S&OP, real-time may be measured in seconds, minutes, hours, or even days.

This leads to a generalized definition of real-time, with a foundation in the formal control engineering definition. Real-time is defined as the amount of time within which your business process needs to execute in order to maintain control and to be competitive. This “time window” may be different for different processes.
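In code, this generalized definition is little more than a per-process lookup: each process carries its own window, and “real-time” means completing within it. The processes and windows below are hypothetical examples, not prescriptions:

```python
from datetime import timedelta

# Illustrative windows only: each process gets its own definition of
# "real-time," i.e., how fast it must execute to maintain control.
REAL_TIME_WINDOWS = {
    "machine_control": timedelta(milliseconds=10),
    "warehouse_slotting": timedelta(seconds=30),
    "demand_planning": timedelta(hours=4),
    "s_and_op": timedelta(days=1),
}

def is_real_time(process: str, cycle_time: timedelta) -> bool:
    """A process is real-time if it completes within its own window."""
    return cycle_time <= REAL_TIME_WINDOWS[process]

print(is_real_time("demand_planning", timedelta(hours=3)))    # True
print(is_real_time("machine_control", timedelta(seconds=1)))  # False
```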

Planning processes have historically been batch-oriented because of the need to take a snapshot at a given point in time and then work through an organizational consensus process. For example, many S&OP processes in the past have been monthly processes. Some of this was driven by the fact that a lot of corporate reporting is done in monthly buckets. Plans were developed for a month, and then intra-month adjustments were made on a weekly basis. Both the monthly planning and weekly adjustment processes were supported by batch data-gathering processes.

A number of years ago, many planning processes moved to weekly buckets, with daily adjustments. In this case, the batch windows shrank to one week and one day, respectively.

Let’s take the example of a large consumer products company or a large retailer that has to create distribution plans for 100 million SKUs. Historically, the agility of this process has been gated by how fast and efficiently all the data can be gathered and “integrated,” how fast the planning engines can be run, and then how fast the results can be unpacked and sent to all the systems that consume such results. The end-to-end process may take a number of hours, with the front-end and back-end data processing consuming up to 80% of the overall time window.
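A rough way to see where such a pipeline spends its time is to wrap each stage in a timer. The stage functions below are stand-ins (the sleep durations are made up to mirror the 80% figure), but the measurement pattern itself is general:

```python
import time

durations = {}

def timed(stage, fn, *args):
    """Run one pipeline stage and record its wall-clock duration."""
    start = time.perf_counter()
    result = fn(*args)
    durations[stage] = time.perf_counter() - start
    return result

# Hypothetical stages; each would be a substantial system in practice.
def gather_and_integrate():        time.sleep(0.4); return "snapshot"
def run_planning_engine(snapshot): time.sleep(0.2); return "plan"
def unpack_and_publish(plan):      time.sleep(0.4)

snapshot = timed("gather/integrate", gather_and_integrate)
plan = timed("run engines", run_planning_engine, snapshot)
timed("unpack/publish", unpack_and_publish, plan)

total = sum(durations.values())
for stage, secs in durations.items():
    print(f"{stage:16s} {secs:5.2f}s  {100 * secs / total:5.1f}%")
# With these made-up numbers, the data handling on either side of the
# planning engine consumes roughly 80% of the end-to-end window.
```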

Now, things have moved towards sub-daily processes, with continuous updates. What is meant by continuous updates? Does this mean every minute, every second, or every sub-second? Let’s remember the definition of real-time. If you think you need it every minute in order to maintain control, then the definition of real-time for that situation is one minute; if you need it every tenth of a second, then the definition of real-time is a tenth of a second. Designing business process time windows is very important for your competitive position and has large consequences for the underlying technical infrastructure required. And be advised that time window requirements will evolve, and they are only headed in one direction: down.

For data, this underlying infrastructure is commonly called a “data pipeline.” Data pipeline design has long been important for supply chain processes in large organizations.

Here Come the Streams

In the past six or so years, streaming architecture has evolved from a few use cases to one of the hottest topics in data engineering. The underlying technology approach for streaming is the lowly log: an immutable, append-only data structure that is decades old. Like many such breakthroughs, it took some inventive engineers thinking differently (in this case, almost the opposite of conventional thinking) to bring about change. And, importantly, as researcher Martin Kleppmann points out in his book, "Designing Data-Intensive Applications," sometimes it is simple approaches with very defined use cases that yield the biggest breakthroughs. To make his point, Kleppmann quotes John Gall from the 1978 book "Systemantics":

A complex system that works is invariably found to have evolved from a simple system that works. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.

This is sage advice for some of the technologies that are widely promoted today. Streaming has taken off because it first solved very specific use cases; then engineers discovered that it was actually a general-purpose technology that could support the delivery of real-time data across a wide variety of use cases.
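As a toy illustration (a sketch of the concept, not Kafka's or any other system's actual API), an append-only log fits in a few lines of Python. Writers only ever add to the end; readers replay from any offset at their own pace, which is what lets the same log serve both batch and streaming consumers:

```python
class Log:
    """A minimal in-memory append-only log: records are immutable
    once written, and nothing is ever updated in place."""

    def __init__(self):
        self._records = []

    def append(self, record) -> int:
        self._records.append(record)
        return len(self._records) - 1     # the new record's offset

    def read(self, offset: int) -> list:
        return self._records[offset:]     # replay everything from offset

log = Log()
log.append({"sku": "A123", "demand": 40})
log.append({"sku": "A123", "demand": 55})
print(log.read(0))  # a new consumer replays history from the start
print(log.read(1))  # a caught-up consumer sees only the latest record
```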

I broach the streaming topic here in the context of the real-time discussion, but its applicability for supply chain management use cases is a lengthier discussion best handled in another post. For the time being, let’s just say that the time has come for “real-time,” as long as we know how to define it for our business processes.
