Our mission at Nextmv is to accelerate decision automation far and wide. We believe decision models are the secret sauce that lets you scale your business, make customers happy, and maximize your revenue growth.
It’s a beautiful dream: a chorus of models humming away in serverless functions, contentedly planning all the logistics that make our lives simpler and more predictable. Modelers test and swap in new ones with ease, progressively improving their operations and seamlessly collecting reporting data along the way.
We’ve put a lot of decision models into production, and it hasn’t always looked like this. In our experience, building and managing decision infrastructure has often been more like a journey into dependency hell.
Local versions of languages are rarely the same. Linux is different from Mac. Linux is different from Linux. Code doesn’t compile. Code does compile but doesn’t link. Code compiles and links but nothing imports. And so on. The same deployment challenges common to other software pop up here, only with a twist of computational intractability.
With Nextmv, we really wanted to give developers a slick, easy path to take models from development to testing to production. To do that, we made Hop and Dash opinionated. They guide you into good patterns for building and managing decision infrastructure. Then we borrowed a bunch of ideas from service-oriented architecture.
The key piece of this puzzle is that all our models are binaries that read and write JSON data. Both Hop and Dash accomplish this with what we call “runners”. A runner reads input, writes output, and configures the solver in Hop or the simulator in Dash. Runners make it easy to experiment with a model in one environment and then ship it off as a binary artifact somewhere completely different. They provide consistency, so a model behaves exactly how you expect, wherever it runs.
For example, say I’m building a model that helps me decide what to eat for lunch. Locally, it’s easiest to test from the command line, so I use the CLI runner, which reads and writes JSON over standard input and output, or through local files on my dev box.
Once I test my lunch solver and find it has enough propensity for Korean BBQ, I recompile using Hop’s Lambda runner and simply upload the resulting binary to AWS.
Going from development on my box to production in a serverless environment is a one-line code change and a fast recompile. Nextmv supports a number of deployment contexts, and they all work pretty much interchangeably: each runner behaves the same as much as possible while doing the right thing for the environment it runs in.
Building decision models into binaries is a beautiful thing. It eliminates a lot of sticky deployment processes and gets you to production faster.
Banner image credit: NASA Chandra X-Ray Observatory
Editor’s Note (March 9, 2021): This post was updated to reflect the updated import paths for the Hop 0.7.0 release.