

Solvers

Overview of solvers in the Nextmv decision stack.

Hop is Nextmv's Hybrid Optimization Platform. Hop solvers search the space of feasible decisions and recommend a solution. They can maximize or minimize a user-defined objective, or satisfy a set of business rules.

The default Hop solver is a Decision Diagram (DD) optimizer. It works well on simple models defined as nothing more than states and transitions. For advanced usage, Hop provides interfaces for most of its components, including restrictors, queues, and searchers. Solver components can be swapped for canned or custom implementations, which lets modelers start simply and scale up their efforts when that becomes important. Additional Hop solvers, such as Adaptive Large Neighborhood Search (ALNS), are also available. Solvers can be used on their own or in combination with other solvers.

This section provides a basic introduction to Hop search mechanics. It is not intended to be exhaustive. It merely imparts a working knowledge of some of the concepts underlying Hop's algorithms. For the theoretical foundations of optimization using DDs, see the excellent book on the subject.

David Bergman, Andre A. Cire, Willem-Jan Van Hoeve, and John Hooker. Decision Diagrams for Optimization. Springer, 2016.

Decision diagrams

Hop provides solvers for maximizing or minimizing an objective, and satisfying a set of constraints. At a high level, Hop reads JSON input describing the state of the system and constructs actors based on that state. It then uses DD techniques to explore future states for improving solutions.

In order to leverage these DD techniques, Hop requires the modeler to define a few things:

  • State definition
  • Transition function
  • Feasibility test

The state definition must implement two methods: Feasible (the feasibility test) and Next (the transition function). Feasible should return true if a state is actionable, meaning it can be implemented in the real world. What this means depends on the model. For example, in a routing problem, feasibility may imply that each of a set of customer requests has been planned for. In the knapsack problem discussed below, it means that the knapsack's capacity is not exceeded.

Next should return an Expander. An Expander is able to return states that we're interested in reaching from a given state. For a routing problem, Next might attempt to insert a request into an existing set of routes anywhere it could make sense to do so. For the knapsack problem, Next merely considers whether to include or exclude another item.

If we are maximizing or minimizing some objective, our model also needs a Value method to tell Hop the value of each state. An example should make this all clear.
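Before the full walkthrough below, here is a minimal sketch of what a knapsack state could look like in Go. The State and Expander interfaces shown are simplified stand-ins for illustration; the actual interfaces in the Hop packages may use different names and signatures.

```go
package knapsack

// State and Expander are simplified stand-ins for Hop's interfaces; the real
// signatures live in the Hop packages and may differ from this sketch.
type State interface {
	Feasible() bool // can this state be acted on in the real world?
	Next() Expander // how do we reach new states from here?
	Value() int     // objective value of this state (for max/min models)
}

type Expander interface {
	Expand() []State // child states reachable from the parent
}

// item is something we may pack into the knapsack.
type item struct {
	Value  int
	Weight int
}

// state records which of the candidate items have been packed so far.
type state struct {
	items    []item // all candidate items
	packed   []bool // packed[i] is true if item i is in the knapsack
	next     int    // index of the next item to decide on
	capacity int    // weight limit
}

// Feasible returns true while the packed weight stays within capacity.
func (s state) Feasible() bool {
	weight := 0
	for i, p := range s.packed {
		if p {
			weight += s.items[i].Weight
		}
	}
	return weight <= s.capacity
}

// Value sums the value of every packed item.
func (s state) Value() int {
	value := 0
	for i, p := range s.packed {
		if p {
			value += s.items[i].Value
		}
	}
	return value
}

// Next considers the next undecided item: leave it out or pack it.
func (s state) Next() Expander {
	return children{parent: s}
}

type children struct{ parent state }

func (c children) Expand() []State {
	s := c.parent
	if s.next >= len(s.items) {
		return nil // every item has been decided
	}
	exclude := s
	exclude.packed = append(append([]bool{}, s.packed...), false)
	exclude.next++

	include := s
	include.packed = append(append([]bool{}, s.packed...), true)
	include.next++

	return []State{exclude, include}
}
```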

How Hop fits in

Seamless integration is arguably one of the most valuable features of both Hop and Dash, and it guides many of our design decisions. Hop models often make decisions using real-time data. A Hop runner loads input data from JSON, along with options that impact its execution. The runner provides these data and options to a Hop solver. Hop outputs a recommended future state, such as a set of routes for couriers. A separate service (machine or human) will typically receive the output JSON and use it to make decisions.

(Figure: how Hop fits in)
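As a rough sketch of that flow, not using Hop's actual runner packages (every name below is illustrative), such a decision service has this shape:

```go
package main

import (
	"encoding/json"
	"os"
)

// options holds settings that affect execution; the fields are illustrative.
type options struct {
	Limit int `json:"limit"`
}

// main mimics a runner: read the current state as JSON from stdin, hand it to
// a solve function, and write the recommended future state as JSON to stdout
// for a downstream service (or person) to act on.
func main() {
	var input map[string]any
	if err := json.NewDecoder(os.Stdin).Decode(&input); err != nil {
		panic(err)
	}

	output := solve(input, options{Limit: 10})

	if err := json.NewEncoder(os.Stdout).Encode(output); err != nil {
		panic(err)
	}
}

// solve stands in for a Hop solver: it turns the current state into a
// recommended future state, such as a set of routes.
func solve(input map[string]any, opts options) map[string]any {
	return map[string]any{"state": input, "options": opts}
}
```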

Example: knapsack problems

Consider the 0-1 knapsack problem. We have a set of items 1 through n. Each item has a value and a weight. We can carry a total weight of 50. We would like to carry the maximum possible value within our weight limit.

Let's say our input data looks like this.

Item  Value  Weight
1     200    25
2     150    10
3     225    20
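In JSON form, that input might look like the following (the field names are illustrative, not a schema Hop requires), and it maps naturally onto a pair of Go structs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// input mirrors the table above; field names are illustrative, not a schema
// that Hop requires.
type input struct {
	Capacity int    `json:"capacity"`
	Items    []item `json:"items"`
}

type item struct {
	ID     int `json:"id"`
	Value  int `json:"value"`
	Weight int `json:"weight"`
}

func main() {
	// The same data as the table, expressed as JSON a runner could read.
	data := []byte(`{
		"capacity": 50,
		"items": [
			{"id": 1, "value": 200, "weight": 25},
			{"id": 2, "value": 150, "weight": 10},
			{"id": 3, "value": 225, "weight": 20}
		]
	}`)

	var in input
	if err := json.Unmarshal(data, &in); err != nil {
		panic(err)
	}
	fmt.Printf("%d items, capacity %d\n", len(in.Items), in.Capacity)
}
```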

To get started, we need a root state. We aren't carrying anything to begin with, so our root state R is an empty set of items. Its weight and value are both 0. We also call this the root "layer" and say it has a depth of 0. It is a feasible knapsack because we haven't exceeded the capacity, but it isn't a very useful one either.

State  Depth  Items  Value  Weight  Feasible
R      0      -      0      0       Yes

Once we construct this layer, we examine R. R is our first feasible solution to the knapsack, so we mark it as the "incumbent solution."

Let's consider packing each item in order 1 through n. For each item, we have two options: leave it out or pack it. States R0 and R1 correspond to those two options for item 1. Together they form layer 1 of our diagram.

State  Depth  Items  Value  Weight  Feasible
R0     1      -      0      0       Yes
R1     1      1      200    25      Yes

Both R0 and R1 have remaining capacity in the knapsack, so they are feasible. We order these by descending value and determine that R1 is a better solution than our current incumbent, R. We save R1 as our new incumbent and skip R0 as it is worse than R1.

Now we decide what to do about item 2 in the same manner. Isn't it nice having computers to do this sort of work for us?

State  Depth  Items  Value  Weight  Feasible
R00    2      -      0      0       Yes
R01    2      2      150    10      Yes
R10    2      1      200    25      Yes
R11    2      1, 2   350    35      Yes

Again, we examine the feasible solutions in this layer. We find that R11 improves on the value of our current incumbent, R1. We save R11 as our new incumbent and skip the other solutions.

Deciding whether or not to include item 3 generates a layer with 8 states in it. Each layer has twice the number of states (the "width") of the layer before it. This is called "expanding" the next layer. If you're thinking that our layers are going to get hopelessly big with a small number of items, you're right!

State  Depth  Items    Value  Weight  Feasible
R000   3      -        0      0       Yes
R001   3      3        225    20      Yes
R010   3      2        150    10      Yes
R011   3      2, 3     375    30      Yes
R100   3      1        200    25      Yes
R101   3      1, 3     425    45      Yes
R110   3      1, 2     350    35      Yes
R111   3      1, 2, 3  575    55      No

Something interesting happens when constructing layer 3. State R111 exceeds our weight limit, so we remove it from consideration as an infeasible state. Of the remaining states, R101 is the best and improves upon our incumbent, R11.

There aren't any more states to generate, so we choose R101 as our optimal solution. It provides the most value without exceeding our weight limit.
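Since the instance is tiny, we can double-check that result by brute force. The sketch below enumerates all eight subsets of the three items and confirms that packing items 1 and 3 (value 425, weight 45) is the best choice within the 50-unit limit:

```go
package main

import "fmt"

// Brute-force check of the walkthrough above: enumerate every subset of the
// three items and keep the best one that fits within the 50-unit capacity.
func main() {
	values := []int{200, 150, 225}
	weights := []int{25, 10, 20}
	capacity := 50

	bestValue, bestWeight := 0, 0
	var bestItems []int
	for mask := 0; mask < 1<<len(values); mask++ {
		value, weight := 0, 0
		var items []int
		for i := range values {
			if mask&(1<<i) != 0 {
				value += values[i]
				weight += weights[i]
				items = append(items, i+1)
			}
		}
		if weight <= capacity && value > bestValue {
			bestValue, bestWeight, bestItems = value, weight, items
		}
	}
	// Prints: items [1 3] value 425 weight 45, matching state R101.
	fmt.Printf("items %v value %d weight %d\n", bestItems, bestValue, bestWeight)
}
```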

Diagram expansion

The exercise we just went through constructed what is called an "exact" decision diagram. That means it represents the entire state space of our problem, as shown in the following graph. Note that we leave R111 off the graph since it isn't feasible.

                 ___________ R __________
                /                        \
         ___ R0 ___                     _ R1 _
        /          \                   /      \
     R00            R01             R10        R11
    /   \          /   \           /   \        |
R000     R001  R010     R011   R100     R101   R110

Obviously, generating exact diagrams becomes impossible for problems of any reasonable size. Luckily, we have techniques to deal with this issue. We mentioned that Next returns an Expander. We have two different types of Expanders: Eager and Lazy.

Eager generates all of its child states at once, while Lazy can be configured to generate only a subset of them, up to an expansion limit. States that could generate more children than the limit allows are deferred, so they can be revisited later to generate more states until they are exhausted. If our expansion limit is set to 2, then even if a node could generate 10 child nodes, we only consider the first 2 and defer the rest for a subsequent pass.
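A sketch of the difference, using illustrative types rather than Hop's actual Expander implementations:

```go
package expand

// State is a placeholder for a model state in this sketch.
type State interface{}

// eager returns every child state at once; there is nothing to come back for.
type eager struct{ children []State }

func (e eager) Expand(limit int) (next []State, deferred bool) {
	return e.children, false
}

// lazy hands out at most limit children per call and remembers where it left
// off, so a deferred node can keep producing children on later passes.
type lazy struct {
	generate func() (State, bool) // yields the next child; false when exhausted
}

func (l *lazy) Expand(limit int) (next []State, deferred bool) {
	for len(next) < limit {
		child, ok := l.generate()
		if !ok {
			return next, false // exhausted: nothing left to defer
		}
		next = append(next, child)
	}
	return next, true // more children remain; revisit this expander later
}
```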

Diagram restriction

The other technique Hop uses is called diagram restriction. Say we use a "value" restrictor that keeps the best states of each layer and defers the rest. Let's set our maximum diagram width to 2. We want to maximize total value, so the value restrictor keeps the two states from each layer with the highest values. Our restricted decision diagram looks like this.

   R
  / \
R0   R1
    /  \
 R10    R11
  |      |
R101    R110

This diagram is much smaller, and in this case it manages to find the optimal state for our problem. The other states aren't actually gone; we're just going to look at them later. We focused our search on a promising section of the state space, and it paid off.

In constructing this restricted diagram, we find a series of improving, incumbent (best known) solutions to consider: R, R1, R11, and R101. Each improves on the last solution. We then continue searching the rest of the state space. If we find a better feasible state, we'll use that as a new incumbent.
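A value restrictor of the kind described above can be sketched as a function that keeps the width highest-value states of a layer and defers the rest (again, a simplified illustration rather than Hop's implementation):

```go
package restrict

import "sort"

// state pairs a name with its objective value; a stand-in for a model state.
type state struct {
	name  string
	value int
}

// byValue keeps the width highest-value states of a layer and returns the
// remainder as deferred states to revisit later.
func byValue(layer []state, width int) (kept, deferred []state) {
	sorted := append([]state{}, layer...) // avoid reordering the caller's slice
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].value > sorted[j].value })
	if len(sorted) <= width {
		return sorted, nil
	}
	return sorted[:width], sorted[width:]
}
```

Applied to layer 2 of the knapsack diagram with a width of 2, it keeps R11 (350) and R10 (200) and defers R01 (150) and R00 (0), which is why only R1's children appear in the restricted diagram above.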

Hop search mechanics

We've covered two of the techniques Hop uses to search the state space of a decision model: expansion and restriction. There are more parts to Hop's search algorithm, which you can read about in the package docs. Not all of these are required, and Hop makes them pluggable so the modeler can exploit domain knowledge about a problem's structure. The more Hop understands about a problem, the more efficient it can be at finding good solutions and reducing the size of the state space.

Briefly, Hop uses the following procedure to explore any state.

  • Expand a deeper layer using each state's Next method.
  • Reduce states that are dominated by other states.
  • Restrict the states for further exploration and defer the rest.

Hop repeats this procedure for each layer. By default, Hop uses a small restriction width to find feasible solutions quickly. Settings like width can be easily overridden using command line options or environment variables. Search components like restrictors can be plugged directly into the solver options. Hop provides interfaces to follow for custom implementations, if so desired.
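Putting those steps together, a single layer of the search can be sketched like this (illustrative code, not Hop's solver internals; the dominance test is supplied by the modeler):

```go
package search

import "sort"

// State is a simplified stand-in for a Hop model state in this sketch.
type State interface {
	Value() int
	Children() []State // stands in for Next().Expand()
}

// step builds the next layer from the current one: expand every state's
// children, drop states dominated by another state, then keep the best width
// states and defer the rest for later exploration.
func step(layer []State, width int, dominates func(a, b State) bool) (kept, deferred []State) {
	// Expand a deeper layer using each state's children.
	var next []State
	for _, s := range layer {
		next = append(next, s.Children()...)
	}

	// Reduce: remove states that are dominated by another state in the layer.
	var reduced []State
	for i, s := range next {
		dominated := false
		for j, t := range next {
			if i != j && dominates(t, s) {
				dominated = true
				break
			}
		}
		if !dominated {
			reduced = append(reduced, s)
		}
	}

	// Restrict: keep the width best states by value, defer the remainder.
	sort.Slice(reduced, func(i, j int) bool { return reduced[i].Value() > reduced[j].Value() })
	if len(reduced) <= width {
		return reduced, nil
	}
	return reduced[:width], reduced[width:]
}
```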
