Optimization, the OR universe, and everything: In conversation with Karla Hoffman

Are AI, ML, and OR the same or different? Is operations research a solved problem? How do you know if you’re ready for optimization? We asked industry veteran Karla Hoffman these questions and more.

Nextmv cofounders Carolyn Mooney (CEO) and Ryan O’Neil (CTO) interviewed Karla Hoffman, professor emeritus at George Mason University and former president of INFORMS, about the evolution of the optimization space, misconceptions about it, how it impacts everyday operations, and the opportunities ahead. Karla has had a long career teaching and consulting in the field of operations research and optimization. The following captures snippets of their conversation.

The content below has been edited for length and clarity. A version of this conversation is available for on-demand viewing.

Carolyn Mooney: How would you describe operations research (OR) to the everyday person?

Karla Hoffman: I'm the kind of person who walks into a store and thinks, “Really, couldn't you organize this better? Why do I have to go from this aisle to another aisle halfway across the store just to get the two items I need that are related to each other?” If you like math, if you like solving puzzles, and if you think the world could do things better, then you belong in the field of operations research. 

So what is OR? I take a very broad definition of it. OR is using modeling to determine decision alternatives and choosing among those alternatives to find the best among what is possible. Or an even shorter definition is: Using math and science to improve decision making.

Carolyn: We often hear about AI and ML as being in the same realm as OR. How do you see the interplay among these techniques?

Karla: Since my definition of OR is very broad, I could argue that it includes data analysis, statistics, and machine learning, as well as the classical OR tools of optimization and simulation. 

But what makes optimization somewhat unique when compared with statistics and machine learning is that the latter two use data from the past to determine good decisions for the future. Optimization considers the constraints on the problem and the objectives of (preferably) all of the stakeholders, and then chooses good or best decisions among the actionable alternatives. So optimization may come up with alternatives that nobody has ever considered, because it considers the billions of possible alternatives rather than only what has been seen in the past.

Optimization will, of course, throw out many silly feasible alternatives because they are not “good” decisions based on the objectives specified, but it may provide alternatives that were not previously considered. It is impossible for humans to think about the billions of alternatives that exist, and so we settle on approaches that build biases into our thinking. We tend to think only about the things that have worked well in the past. But the decision-support tools of OR are capable of providing new, useful alternatives that have never been employed.
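To make the contrast concrete, here is a minimal sketch of an optimization model in Python; the products, coefficients, and limits are invented for illustration. Rather than fitting past decisions, the solver searches the entire feasible region for the best alternative.

```python
# A minimal linear program: choose production quantities that maximize
# profit subject to resource limits. All numbers are hypothetical.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2; linprog minimizes, so negate the objective.
objective = [-40, -30]

# Resource constraints:
#   2*x1 + 1*x2 <= 100   (machine hours)
#   1*x1 + 3*x2 <=  90   (labor hours)
A_ub = [[2, 1], [1, 3]]
b_ub = [100, 90]

result = linprog(objective, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print(result.x, -result.fun)  # best feasible plan and its profit
```

The solver does not care whether anyone has tried the plan before; it only cares that the plan is feasible and scores best on the stated objective.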

Carolyn: How have you seen the field of operations research change?

Ryan O’Neil: Operations research and optimization seem to be going down a path similar to the one statistics went down 10 to 15 years ago with data science, where we started to see it integrated into production software stacks in a way it wasn't before.

I remember when I first started studying the field, it was very much geared toward a human operator, and decision support meant providing some sort of modeling capabilities to a person who would then execute on those things, rather than a software service that would be making automated decisions. So you see more people coming into this space who are not from traditional OR backgrounds, trying to pick up those tools and apply them to their problems with varying levels of success.

Karla: I would agree with that. We started with large corporations using mainframes to help make important decisions. But computing was expensive, and a problem needed to be important enough that it warranted the process of data collection, modeling, and then translating the solution into something that the decision maker could understand and the IT team could maintain. It was not until the past decade that OR ventured away from only thinking about planning problems and moved toward real-time decision problems. And planning problems are very different from real-time problems.

Today, people are used to getting instantaneous solutions to almost anything they want or expect. Examples include being able to call a ride or a shopping service, get alternative routes to locations close and far, or even make hotel or food reservations on your phone for tomorrow or next month. Day-to-day decisions have changed, and the world is more comfortable with decision support systems where some or all of the decisions have an automatic component to them. These tools allow overrides, but the default is an automated decision.

Carolyn: We’ve sometimes heard that optimization is a “solved problem.” Is that true?

Karla: It may be a solved problem if you think about the problems that we tried to solve 25 years ago. It is not solved if you think about today's problems. 

There are two things that I think have really changed. One is the recognition that you can't use completely deterministic data and expect the suggested decisions to work in the real world. There is simply too much randomness within the system. In truth, decision makers should probably not want the most efficient solution if it’s fragile, which is to say almost any change in the environment could make it a bad solution. So even within the planning stage, we want to incorporate randomness and make sure that our solutions are resilient; that is, there is sufficient slack in the system to adjust to changes. And the randomness might not be capable of being modeled with a specified distribution.
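One way to read “resilient” here: before committing to a plan, stress-test it against sampled randomness rather than trusting a single deterministic input. Here is a hedged sketch in which the schedule, slack, and travel-time distribution are all invented:

```python
# A minimal sketch of stress-testing a plan against randomness.
# The schedule, slack, and travel-time distribution are hypothetical.
import random

random.seed(7)

planned_legs = [30, 45, 25]   # planned minutes per delivery leg
slack_per_leg = 5             # built-in buffer per leg
deadline = sum(planned_legs) + len(planned_legs) * slack_per_leg

def simulate_day():
    # Travel times vary; a deterministic model assumes the plan holds exactly.
    return sum(random.gauss(mu=leg, sigma=0.2 * leg) for leg in planned_legs)

trials = 10_000
late = sum(simulate_day() > deadline for _ in range(trials))
print(f"late on {late / trials:.1%} of simulated days")
```

Increasing the slack per leg trades efficiency for a lower late rate, which is exactly the tradeoff between the most efficient plan and a resilient one.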

In real-time optimization, we need decisions that can be determined in seconds and be able to provide alternatives quickly when the data/situation changes. So, we need alternatives that take into account all the idiosyncrasies that occur on a second-by-second basis. These might include traffic congestion, people not showing up because of illness, and bad weather. There's just an enormous number of issues that are important in the real world. 

Happily, our algorithms are continuously changing to accommodate a changing world. We now use a collection of algorithms, or “hybrid algorithms”. We've learned how to put them together to solve what were once considered unsolved problems. As we get more successful with these techniques, we hear about new problems, and new problems require new algorithms, and so on.

Ryan: It's interesting to look at some of the applications that have been developed recently. When optimization is covered in the news, the story is often that the traveling salesman problem is this big unsolved problem, and if only we had a magical quantum computer that could solve it for us.

But from an optimization perspective, we think of it as more or less solved. We can solve very large TSPs very quickly. It's when they get realistic constraints or aspects to them that they become difficult. 
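For a rough sense of why the plain TSP feels solved in practice, even a naive construction heuristic scales comfortably, and dedicated solvers such as Concorde and LKH go far beyond it. A minimal sketch, using random coordinates and nearest-neighbor only, with no improvement step:

```python
# A minimal nearest-neighbor sketch for the unconstrained TSP.
# Real solvers do far better; this only shows that a plain TSP
# is easy to attack at scale.
import math
import random

random.seed(42)
cities = [(random.random(), random.random()) for _ in range(2000)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unvisited = set(range(1, len(cities)))
tour = [0]
while unvisited:
    last = cities[tour[-1]]
    nearest = min(unvisited, key=lambda i: dist(last, cities[i]))
    tour.append(nearest)
    unvisited.remove(nearest)

length = sum(dist(cities[tour[i]], cities[tour[i + 1]])
             for i in range(len(tour) - 1))
print(f"nearest-neighbor tour over {len(cities)} cities: length {length:.2f}")
```

It is the realistic side constraints, not the raw size, that reintroduce the difficulty.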

Now, we see routing problems where there are drones involved, one vehicle can contain another, or there's synchronization between two different types of vehicles, like food trucks and their replenishment vehicles; or there are three different pickup spots; or a driver can take a break in one of five different locations. As these smaller decisions become incorporated into these problems, they become harder and harder.

Carolyn: What is a favorite real-world OR/optimization problem you’ve worked on?

Karla: I’ll tell you about my first real-time application. I was working with a concrete company in Northern Virginia. A consultant at the company noticed that there were very long lines at certain plants where the trucks get the concrete to deliver to customers. He recognized it as a queuing problem. What they needed was more servers, or truck bays, but they couldn’t add them because the plant’s footprint couldn't expand. They could, however, route the trucks to different plants; the difficulty was determining which trucks should be assigned to which plant.
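Karla's read of the situation, that the plants form a multi-server queue, can be illustrated with the standard M/M/c formulas. In this sketch the arrival rate, service rate, and bay counts are invented; the point is how quickly average waits fall as capacity is added or load is rerouted away from a congested plant:

```python
# A minimal M/M/c sketch of the truck-loading queue.
# The arrival rate, service rate, and bay counts are hypothetical.
from math import factorial

def erlang_c(servers, offered_load):
    # Probability that an arriving truck has to wait (Erlang C formula).
    a, c = offered_load, servers
    top = (a**c / factorial(c)) * (c / (c - a))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

lam = 10.0  # trucks arriving per hour
mu = 4.0    # trucks loaded per bay per hour
for bays in (3, 4, 5):
    wait = erlang_c(bays, lam / mu) / (bays * mu - lam)  # mean wait, hours
    print(f"{bays} bays: average wait {60 * wait:.1f} min")
```

Since the plants could not add bays, rerouting trucks to less-loaded plants was the available lever, and the same queuing logic is what makes it pay off.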

It wasn’t an easy problem. At first the optimizations did not include all of the real-world issues that occur on a day-to-day basis. We had people go on break and not tell the system. We had people take routes that didn't seem rational. We had drivers who weren't allowed to go to certain locations: this business is in the Washington, DC area, and some of its customers were located at secure sites that required certifications for any driver entering the facility. There were weather issues and traffic congestion that needed to be considered.

We also needed to think about sequencing and synchronization. In the concrete industry, once an order begins (which may require six or more trucks to deliver the needed concrete), the order has to be completed and the trucks have to arrive with a specific spacing. Concrete creates the foundations of most buildings, and once you start pouring, you can’t stop. This is also an industry that works with the construction industry, which is well known for being optimistic about the completion of various tasks. In our case, that meant a foreman on a construction job would schedule a delivery and then cancel or postpone that delivery. On top of that, we had to be mindful that if concrete stays on a truck for longer than two hours in hot weather, you might have to remove the concrete using a jackhammer or explosives. I did meet the “dynamite man” who was the local expert on how to use TNT to remove concrete from a truck without destroying the truck.

It was a complicated problem, but it was also a great company to work with because their CEO wanted the industry to change as opposed to keeping the technology as a trade secret. 

Carolyn: What should companies be thinking about in order to benefit from optimization?

Karla: One of the things I applaud Nextmv for is creating technology that does not separate the modeling effort from the IT/production issues. The data is collected and stored in the language of the application. For example: I have a vehicle routing problem with a given number of trucks, depots, and drivers; I have these rules regarding driver scheduling; or deliveries have these time windows and precedence constraints. The software then takes this information and automatically translates it into an optimization problem.

The idea is to keep optimization systems running not only when data changes, but also when managers change. Twenty years ago, every optimization project needed a data collection effort. Today, data is everywhere, but the process of translating this data into linear-integer models has remained the same. We need the languages that do these transformations to be automatic, keeping the data and the modeling in a language that a manager understands, as opposed to something only an optimizer understands.

What I am advocating for is languages that the IT team and the client can understand and maintain over time. We should not be using languages that require a new modeling effort for routine vehicle routing problems. Virtually every industry has vehicle routing problems; they have scheduling problems. So there should be mechanisms in place to tell the system the characteristics of the company’s vehicle routing problem or scheduling problem. Clearly, there may be unique characteristics that will need to be added to the general approach. But these are side constraints. They can be added as needed. And languages are needed to make the addition of, say, a budget constraint or a collection of clustering constraints “easy” and intelligible to the manager.
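As a hedged illustration of what such a declarative description might look like (this schema is invented, not Nextmv's actual input format), the whole model can live in the application's own vocabulary:

```python
# A hypothetical declarative routing description. The field names are
# invented for illustration; the point is that the model is expressed
# in the manager's vocabulary, not as rows of a constraint matrix.
routing_problem = {
    "vehicles": [
        {"id": "truck-1", "capacity": 12, "shift": ["08:00", "16:00"]},
        {"id": "truck-2", "capacity": 12, "shift": ["09:00", "17:00"]},
    ],
    "stops": [
        {"id": "site-a", "quantity": 4, "time_window": ["09:00", "11:00"]},
        {"id": "site-b", "quantity": 6, "precedes": "site-c"},
        {"id": "site-c", "quantity": 2},
    ],
    "objective": "minimize_total_travel_time",
}
# A software layer would translate this into the underlying optimization
# model, so adding a side constraint (say, a budget) means adding a
# field rather than rebuilding the formulation.
```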

Ryan: I think the data collection part is very interesting. It wasn't very long ago that every value you fed into a solver was sort of precious. You had to massage it or research it to figure out “oh, I have the value 3.” Now everybody has a phone that's constantly sending their location data, and you have this clickstream data of people going through your website. It's much more of an extraction problem now to construct the belief state of a system to give to a solver.

Carolyn: This transitions nicely to your work on decision diagrams. How does this optimization technique differ from more classic techniques, and how do you think it changes the way people are modeling optimization problems?

Karla: I think there are two things. First of all, decision diagrams (like constraint programming) treat the logic constraints very differently, and this is an important consideration.

Ryan probably remembers the pain of trying to perform “tricks” to get the optimization to understand the logic of something really simple like an either/or constraint or a precedence constraint. Rather than using these tricks, one can take a more direct computer science approach to these constraints. One keeps the constraints in their logic form, and the software decides how to handle them within the optimization framework.
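The “trick” Karla alludes to is typically a big-M reformulation. Here is a sketch of the two styles side by side, using OR-Tools CP-SAT as a stand-in for any logic-aware solver; the variables and bounds are invented:

```python
# An either/or constraint two ways. Variables and bounds are
# hypothetical; CP-SAT stands in for any logic-aware solver.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
b = model.NewBoolVar("either")

# Classic MIP "trick": pick a big constant M and write
#   x + y <= 5 + M * (1 - b)   and   x - y >= 3 - M * b
# which hides the logic and is numerically fragile if M is sloppy.
# The direct, logic-first form keeps the either/or readable:
model.Add(x + y <= 5).OnlyEnforceIf(b)
model.Add(x - y >= 3).OnlyEnforceIf(b.Not())

solver = cp_model.CpSolver()
solver.Solve(model)
print(solver.Value(x), solver.Value(y), solver.Value(b))
```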

But I am also worried about the fact that in OR courses we stress the importance of optimal solutions. There are two issues with thinking this way. First, many applications require an answer in less time than is available to prove optimality. In these cases, we should stop worrying about optimality and focus on getting the real problem solved as well as we can in the time we have. In the background, we should always measure the quality of the solutions the solver is providing.

The second is that, as I discussed earlier, you really might not want the most efficient solution if it’s a fragile solution. So even in the planning stage, we want to incorporate randomness and make sure that our solutions are resilient. When planning, what you want is a system flexible enough to handle the anomalies. We know something is going to go wrong every single day. We might not know where, we might not know when, we might not even know how, but we really need flexibility in our operations — from airline scheduling to concrete transportation, to vehicle routing and scheduling.

Carolyn: What opportunities do you see in operations research today? What can we do to be better practitioners as the space grows and develops?

Karla: One challenge with OR in the past has often been that we build prototypes, throw them over to the IT people, expect them to appreciate what we're doing and get it right, and then walk away and hope it will be correct forever. I think we need systems where the models being used are understandable to the decision makers, and this is more likely to occur if the model is directly generated from the data. If something changes in the company, the data dictates that the model will change, rather than people dictating that the data be transformed into a particular form just so the optimization software can continue running and providing usable solutions.

Whether people know it or not, there's a lot of optimization built into our everyday lives. I think it's a very exciting time for optimization. For example, in the world of supply chain we didn’t always have the right objective function. We wanted the most efficient solution instead of a resilient solution. Clearly we are now sensitive to such issues as the number of suppliers, the quality of suppliers, and cost. What are the tradeoffs between these needs? One cannot blame the optimization for providing the cheapest solution if that was what was specified as the goal. The questions we need to ask are: What should the goals have been, and what is the cost of adding additional flexibility, reliability, and resiliency to the system?

To learn more, watch Karla in conversation during a recent interview or attend Ryan’s presentation on decision diagrams and optimization.
