Understanding Discrete-Event Simulation, Part 4: Operations Research

From the series: Understanding Discrete-Event Simulation

Now, let’s talk about discrete-event simulation in the context of operations research. Operations research is a broad topic that involves examining man-made processes with the goal of improving their performance. Every process requires resources such as time, money, materials, equipment, and staffing. You conduct operations research so that you can make smart decisions about how best to use these resources in order to satisfy your objectives, things like maximizing productivity and revenue, while minimizing defects and cost. And, of course, there are many ways to analyze an operation, but because many processes can be readily abstracted as event-driven systems, discrete-event simulation is often used. Let’s take a closer look.

One of the numerous disciplines within operations research is manufacturing. An assembly line is a great candidate for a discrete-event simulation because it can be broken down into a series of finite steps. If our car assembly line involved five steps, one after the other, we could represent the cars as entities going through five server blocks in series. Since our goal is to understand high-level objectives like the rate of production, the details of each of these steps aren’t particularly relevant to us. It doesn’t matter how the car gets painted. We only need to model how long that and every other step in the process takes.
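To make this concrete, here is a minimal sketch of that five-server-in-series idea, written in plain Python rather than as a block diagram. The five station durations are invented numbers purely for illustration; each station handles one car at a time, and a car waits until the next station is free.

```python
# A minimal discrete-event sketch of a five-step assembly line in series.
# Station names and service times (minutes per car) are assumptions.
SERVICE_TIMES = [30, 45, 60, 45, 30]  # e.g. stamping, welding, painting, interior, inspection

def simulate_line(num_cars, service_times):
    """Each car (entity) passes through every step in order; a step serves
    one car at a time, so a car queues until the station is free."""
    free_at = [0.0] * len(service_times)   # time each station next becomes idle
    finish_times = []
    for _ in range(num_cars):
        t = 0.0                            # all cars are queued at the door at time zero
        for s, dur in enumerate(service_times):
            start = max(t, free_at[s])     # wait if the station is still busy
            t = start + dur
            free_at[s] = t
        finish_times.append(t)
    return finish_times

done = simulate_line(10, SERVICE_TIMES)
# The first car emerges after 30+45+60+45+30 = 210 minutes; after that,
# the slowest step (painting, 60 min) gates the line, so one car
# finishes every 60 minutes.
```

Notice that we never modeled *how* a car gets painted, only that painting takes 60 minutes, and yet the model already tells us the line's rate of production.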

Now, as a car gets assembled, everything’s fine if it can proceed immediately to the next step. But if there’s a backlog in a manufacturing process, that can be modeled with queues. Of course, queuing is wasteful downtime, and so a common operations research task is to perform a cost-benefit analysis on increasing throughput for a particular step. Perhaps we could decrease the time of interior assembly with better machinery, which is modeled by adjusting service time. Or instead, we could buy more machines, which could be modeled as a server with increased entity capacity or as separate servers in parallel. Incidentally, this last modeling pattern is also how you would model routing if different automobiles took different manufacturing pathways.
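The "buy more machines" option can be sketched by giving a station a capacity greater than one. In this hedged example, the stations and their durations are again invented: a 60-minute step sits between two 30-minute steps, and we compare one machine at that step against two in parallel.

```python
import heapq

# Sketch: modeling "buy more machines" as a station with capacity > 1.
# Each station keeps a heap of the times its machines next become free.

def simulate(num_cars, stations):
    """stations: list of (service_time, num_machines) tuples."""
    free = [[0.0] * n for _, n in stations]
    for f in free:
        heapq.heapify(f)
    finish = []
    for _ in range(num_cars):
        t = 0.0  # all cars queued at the door at time zero
        for (dur, _), machines in zip(stations, free):
            start = max(t, heapq.heappop(machines))  # take the earliest-free machine
            t = start + dur
            heapq.heappush(machines, t)
        finish.append(t)
    return finish

one_painter  = simulate(10, [(30, 1), (60, 1), (30, 1)])
two_painters = simulate(10, [(30, 1), (60, 2), (30, 1)])
# With two paint machines in parallel, the 60-minute step no longer gates
# the line: cars leave every 30 minutes instead of every 60.
```

This is the cost-benefit question in miniature: the simulation tells you the throughput gain, and you weigh it against the price of the second machine.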

But even with this nuance, this is about as simple a model as you can get for a car factory. Model fidelity could be increased in a number of different ways, each detail improving our understanding of the system and enabling us to make better decisions. For instance, we could break our tasks down into all the sub-steps that comprise them. We could also account for volatility in the timing of these tasks by including probabilistic terms in certain steps. If staff or other resources are required at a particular point, they could be modeled as a component that merges with the automobile before the process proceeds. Faults could be inserted and their impact evaluated by pausing or delaying an action in the model. And if the assembly line adapts to a changing situation, if it can alter course in an attempt to improve performance on the fly, a discrete-event simulation will need to include a model of that intelligence. Perhaps you’ll model the adaptation logic through an algorithm written in code or as a finite state machine.
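The probabilistic-timing idea can be sketched by replacing each fixed service time with a random draw. The lognormal spread used here is an assumption, not a fitted distribution; in a real study you would fit whatever distribution the shop-floor data supports. Replicating the run shows how timing volatility moves the completion time around.

```python
import random
import statistics

# Sketch: volatility in task timing, modeled as lognormally-distributed
# service times around each step's nominal duration (an illustrative
# assumption; substitute any distribution fitted to real data).
NOMINAL = [30, 45, 60, 45, 30]

def noisy_run(num_cars, rng):
    free_at = [0.0] * len(NOMINAL)
    finish = []
    for _ in range(num_cars):
        t = 0.0
        for s, nominal in enumerate(NOMINAL):
            dur = nominal * rng.lognormvariate(0, 0.2)  # roughly +/-20% spread
            start = max(t, free_at[s])
            t = start + dur
            free_at[s] = t
        finish.append(t)
    return finish

# Replicate many times to estimate the distribution of the completion time.
rng = random.Random(1)
last_car = [noisy_run(20, rng)[-1] for _ in range(200)]
print(statistics.mean(last_car), statistics.stdev(last_car))
```

A deterministic run of 20 cars would finish in exactly 1350 minutes; with randomness, the mean drifts higher, because variability makes queues grow more than it lets them shrink. That asymmetry is exactly the kind of insight a purely static analysis misses.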

Now, when you include all these details in a discrete-event simulation, you can begin to conduct powerful analyses of a system that you’d otherwise lack any intuition about. Once the framework is in place, you can run thousands of different scenarios and examine how output varies with things like production schedule, work manifests, staffing allocation…whatever you like. The results of the simulation enable you to make informed decisions about how best to improve the operation against your performance objectives. And when tied to a numerical optimization scheme, the computer can help you out by converging towards the best possible outcome. Such techniques are invaluable not only to manufacturing, but for any of the domains of operations research.
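A scenario sweep of this kind might be sketched as follows. The throughput target and machine cost are invented numbers, and the search is a simple exhaustive sweep rather than a true numerical optimizer, but the shape is the same: run the model under each candidate configuration and pick the cheapest one that meets the objective.

```python
import heapq

# Sketch of a scenario sweep: vary how many paint machines we buy and take
# the cheapest option that meets a throughput target. Target and cost are
# assumptions for illustration only.

def cycle_time(stations, warmup=20):
    """Steady-state minutes between consecutive finished cars.
    stations: list of (service_time, num_machines)."""
    free = [[0.0] * n for _, n in stations]
    for f in free:
        heapq.heapify(f)
    finish = []
    for _ in range(warmup + 2):
        t = 0.0
        for (dur, _), machines in zip(stations, free):
            start = max(t, heapq.heappop(machines))
            t = start + dur
            heapq.heappush(machines, t)
        finish.append(t)
    return finish[-1] - finish[-2]

TARGET = 30                # want a finished car at least every 30 minutes
COST_PER_MACHINE = 250_000

best = None
for n_paint in range(1, 5):
    stations = [(30, 1), (60, n_paint), (30, 1)]
    if cycle_time(stations) <= TARGET:
        best = n_paint
        break
print(best, best * COST_PER_MACHINE)  # prints: 2 500000
```

A real optimization scheme would search a much larger space (schedules, staffing, routing) and could use gradient-free methods instead of brute force, but each evaluation is still just one run of the discrete-event model.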