Explore the applications of mathematical optimization alongside the AI/ML landscape for predictive analytics, LLMs, and beyond.
Aimpoint Digital is a market-leading analytics, data engineering, operations research, and AI solution engineering firm that partners with Nextmv for consultative services. Carolyn Mooney, Nextmv founder and CEO, recently sat down with Yash Puranik, Director of Decision Sciences at Aimpoint Digital, to discuss the strategy, application, and implementation of mathematical optimization in relation to the broader AI/ML landscape.
The following interview is a companion to a longer, related conversation and has been edited for length and clarity.
Carolyn Mooney: A lot of technologies fit into the AI bucket — machine learning, LLMs, GenAI. Today we’re focusing on one of them: mathematical optimization. Can you give a brief background on what mathematical optimization is?
Yash Puranik: Of course! And it may be easier to start with what mathematical optimization isn’t. The word “optimization” is overloaded in many contexts; we are not talking about search engine optimization, LLM optimization, business process optimization, or database optimization. The main things we are referring to when we say optimization are tools, models, and techniques that help you make better decisions.
The key thing about mathematical optimization is that it applies when you have decisions to make and those decisions are limited by physical realities, business rules, or other constraints. With mathematical optimization, you can evaluate a number of trade-offs before deciding the best course of action. It's also worth calling out that it goes by different names in different communities: math programming, operations research, decision science, etc.
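To make that concrete, here is a minimal sketch of what "decisions limited by constraints" looks like in code. The scenario and numbers are hypothetical: choose which projects to fund, subject to a budget constraint, to maximize total value. A real solver would search this space far more cleverly, but brute force shows the shape of the problem.

```python
from itertools import product

# Hypothetical decision problem: fund or skip each project (yes/no decisions),
# subject to a budget (the constraint), maximizing total value (the objective).
projects = {"A": (4, 10), "B": (3, 7), "C": (5, 12)}  # name: (cost, value)
budget = 8

best_choice, best_value = None, -1
for choice in product([0, 1], repeat=len(projects)):
    cost = sum(c * p[0] for c, p in zip(choice, projects.values()))
    value = sum(c * p[1] for c, p in zip(choice, projects.values()))
    if cost <= budget and value > best_value:  # feasible and better than before
        best_choice, best_value = choice, value

picked = [name for c, name in zip(best_choice, projects) if c]
print(picked, best_value)  # -> ['B', 'C'] 19
```

Note the trade-off the constraint forces: project A has good value, but funding B and C together fits the budget and yields more.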
Carolyn: What are some common optimization use cases, and what are some exciting ones that you're seeing in the field?
Yash: Once you start learning how to recognize mathematical optimization use cases, they exist pretty much everywhere: in finance, logistics, supply chain, manufacturing, energy, and even government — the possibilities are endless.
One of the more fun use cases is NFL scheduling. It's a classic optimization problem: deciding which teams play and when, while accounting for a lot of different factors like travel time, TV schedules, and TV rights. There's a webinar on YouTube about how that problem is solved with optimization.
I'm sure many of us have heard the story of how UPS trucks avoid left turns to minimize travel time. That's just a small part of the larger solution. This INFORMS article shows how, even back in 2016, the pilot-scale project had saved UPS hundreds of millions of dollars. You can only imagine the savings once it was deployed at full scale.
What are some examples you see most often, Carolyn?
Carolyn: I hear a lot about multi-vehicle routing, like the traveling salesperson problem. That's the most common use case people think of when they think of optimization. It's good to highlight that optimization isn't just for routing. It's for workforce scheduling, setting prices, and even product placement on grocery shelves. Optimization has a significant impact on our daily lives; it's just sometimes a little hard to see.
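The traveling salesperson problem mentioned above is small enough to sketch directly. This toy instance uses made-up distances and a brute-force search; real routing engines handle thousands of stops, multiple vehicles, time windows, and capacities with specialized algorithms rather than enumeration.

```python
from itertools import permutations

# Toy traveling salesperson instance with made-up symmetric distances:
# find the cheapest tour that leaves depot "D", visits every stop once,
# and returns to "D".
dist = {
    ("D", "A"): 4, ("D", "B"): 7, ("D", "C"): 3,
    ("A", "B"): 2, ("A", "C"): 6, ("B", "C"): 5,
}

def d(x, y):
    # Look up the distance in either direction.
    return dist.get((x, y)) or dist[(y, x)]

stops = ["A", "B", "C"]
best_tour, best_cost = None, float("inf")
for order in permutations(stops):
    tour = ("D",) + order + ("D",)
    cost = sum(d(a, b) for a, b in zip(tour, tour[1:]))
    if cost < best_cost:
        best_tour, best_cost = tour, cost

print(best_tour, best_cost)  # -> ('D', 'A', 'B', 'C', 'D') 14
```

Even with three stops there are six candidate tours; the count grows factorially, which is why practical routing relies on smarter search than this.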
I'm excited about how optimization can play a role in sustainability. Reducing time on the road or reducing fuel usage is great, but there are also a lot of applications in how you select between different energy resources, or how you allocate energy to different parts of the grid.
Large companies could be saving millions, if not billions, of dollars a year using these systems. That's because the small decisions they're making in everyday or real-time plans can have outsized impacts when you roll them up.
Which leads me to my next question for you, Yash: Why do you think mathematical optimization isn’t more popular? Why aren't more people talking about it?
Yash: The main thing we've been waiting for is compute power to catch up. Many of these algorithms were developed as early as World War II; we just didn't have fast enough computers or good implementations. We are also slightly hampered by the textbook understanding that many mathematical optimization problems are NP-hard and don't scale. The reality is that you can solve many practical problems efficiently, even if they are NP-hard, and even if you don't always have guarantees of global optimality.
Additionally, optimization solutions are often counterintuitive. If you're used to running your operations in a specific way, and your optimization model now suddenly suggests you do something different, that raises a lot of questions. It requires a lot of work to build trust in the solution.
Carolyn: I agree. When I worked at Lockheed Martin, we had our own server stacks running simulation and optimization work because we needed really high-powered machines. Now, with cloud services and infrastructure, it makes it a lot easier to apply these different technologies.
There's also a combined effect. Many of the solvers in the optimization space have gotten faster and more compute-efficient. Together with cloud infrastructure, that has made it far more feasible to use optimization in tactical and real-time scenarios. That's why we're starting to see a proliferation of optimization use cases.
Aimpoint is helping different organizations build more efficient decisions into their operations and into their services. How do you think about the process of getting an optimization project live?
Yash: We have a standard process that we typically follow. We spend a lot of time in the discovery and problem-framing phases. There are also iterations as we do model development, testing, and sharing results with stakeholders. We may need to make changes and rerun the model. We all go through iterations on every project, not just optimization projects.
We spend a lot of time specifically on stakeholder buy-in. It's easy to disagree with predictions. You can just write them off as bad predictions. But it’s harder to work with decisions you don’t agree with. If we're explicitly recommending a course of action, we need to be able to justify it and stand behind it. We invest a lot of time and effort in trying to make sure the solution is valid and captures everything that stakeholders need.
And then there’s the performance optimization phase, specifically around deployment. What do you think are some of the important aspects around deployment and maintenance?
Carolyn: There's been less focus on the software engineering aspects within the optimization space, and a lot of that is what builds trust within the organization. You have multiple parties that care about optimization. With data science, the focus is on the data you're trying to collect, represent, and utilize in a variety of different algorithms or models, like regression models, for making those predictions. In the decision science world, you're trying to build the model itself.
There's also the business criticality of these models. When decisions feed an operational plan that says whether a driver is going to a particular stop, or whether a person is working next week, those are business-critical systems. If forecasts go down, you can still operate; it's not ideal, but your systems can keep moving. If the decision system goes down, your operation can grind to a halt, so it's important to get the software pieces right.
When you think about the three teams that are involved, there's the engineering team, focusing on integrating these decisions (or models) as services. They're focused on the API layer, making them scalable, making sure they’re running and not failing, observability, etc. There’s also the team building the models and working with the business side saying, “this is what we want to optimize for, and this is the KPI we want to minimize or maximize.” And then, finally, there's the operation side leveraging these models to make a difference in their business. Getting these teams to work smoothly together is crucial for getting these systems live and building trust over time in the automated systems that people have.
Yash, what should business leaders be asking of their decision scientists and data scientists that are starting to build models with mathematical optimization or technologies like AI?
Yash: My biggest pet peeve is how we put the cart before the horse. Sometimes we get asked to implement GenAI in supply chain. The key focus should always be on what are you trying to solve and what are you trying to achieve, then figure out the right method. The question should be, “can we optimize inventory and find a way to reduce working capital costs?” instead of explicitly asking to use GenAI for something. There is value in business leaders learning a little more about some of these methods. We don’t want to scare them with jargon and terminology, so things like visualization, transparency, and testing matter. Business leaders should feel empowered to ask more questions, and it is our responsibility to deliver those answers to them.
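The inventory question above ("can we reduce working capital costs?") is a textbook case of framing the business problem before picking a method. One classical answer is the economic order quantity (EOQ) model, sketched below with entirely hypothetical numbers; real inventory optimization adds demand uncertainty, lead times, and service-level constraints.

```python
from math import sqrt

# Classic economic order quantity (EOQ): choose an order size that
# minimizes total ordering + holding cost. All numbers are hypothetical.
annual_demand = 1200   # units per year
order_cost = 50.0      # fixed cost per order placed
holding_cost = 2.0     # cost to hold one unit for a year

# The cost-minimizing order size is sqrt(2 * D * S / H).
eoq = sqrt(2 * annual_demand * order_cost / holding_cost)
total_cost = (annual_demand / eoq) * order_cost + (eoq / 2) * holding_cost
print(round(eoq), round(total_cost, 2))  # -> 245 489.9
```

The point isn't this particular formula; it's that once the objective (working capital) and constraints are stated, the right method, whether a closed-form model, a solver, or ML, follows from the problem rather than from the buzzword.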
You have more experience with rollout strategies. What do you think we should think about in terms of rollout strategy?
Carolyn: From the business side, I occasionally see a desire to represent everything at once. What the software, and the modeler, are trying to do is extract that data and make the decision process more predictable. That provides a higher degree of accountability: you can go back, look at the decisions, and understand why they were made. The ability to be iterative, and not necessarily capture everything about the decision from the get-go, is really important. It's also crucial for building trust in the solution. You start simple and represent only the base-layer rules in the decision model, because you can have better intuition about a simple model than a complex one. If you build trust that way, and maybe the model only covers 80% of the cases while your team still manually works the other 20%, that's okay.
We should be able to crawl, walk, run. You have to increment your way there, and along the way you're going to save your team a lot of time and be able to scale further.
Check out the tech talk recording for the full interview to hear more discussion around when to apply OR vs. ML to a problem, how to determine if you need mathematical optimization, and resources for getting started with optimization.