Test and compare decision models

Should you launch the new decision model to production? Can your current algorithm scale to meet demand? Perform tests and drive development in a repeatable way with Nextmv's testing tools built into a unified platform.

Start free | Read docs

Make go/no-go calls with more confidence

Testing in production — or not at all — is risky and expensive. Building out and maintaining bespoke testing tools is time-intensive. Nextmv provides the testing tools and infrastructure to analyze KPIs, assign acceptable limits, and make sure your decision model checks every box.
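The idea behind an acceptance test can be sketched in a few lines: each KPI gets an acceptable limit relative to a baseline, and the candidate model passes only if every limit holds. This is a minimal illustration of the concept, not the Nextmv API; all KPI names and numbers below are invented for the example.

```python
# Minimal sketch of an acceptance test: compare a candidate model's KPIs
# against a baseline, with one acceptance rule per KPI. Illustrative only.

def acceptance_test(baseline, candidate, rules):
    """Return (passed, per_kpi_results) for candidate vs. baseline KPIs.

    rules maps a KPI name to a predicate taking (baseline_value,
    candidate_value) and returning True when the candidate is acceptable.
    """
    results = {kpi: rule(baseline[kpi], candidate[kpi]) for kpi, rule in rules.items()}
    return all(results.values()), results

# Hypothetical KPIs for a routing model.
baseline = {"total_distance_km": 812.4, "unassigned_stops": 3, "solve_seconds": 28.0}
candidate = {"total_distance_km": 790.1, "unassigned_stops": 2, "solve_seconds": 31.5}

rules = {
    "total_distance_km": lambda b, c: c <= b,        # must not drive farther
    "unassigned_stops": lambda b, c: c <= b,         # must not strand more stops
    "solve_seconds": lambda b, c: c <= b * 1.25,     # at most 25% slower is OK
}

passed, results = acceptance_test(baseline, candidate, rules)
print(passed, results)
```

The go/no-go call then reduces to a single boolean plus a per-KPI breakdown you can share with stakeholders.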

Read more about acceptance testing →

Shadow and A/B test models with online data

Build confidence in deploying model changes to production environments with shadow tests and switchback (A/B) tests. Run a candidate model alongside a baseline model in a shadow test to see how it performs under production conditions without production impact. Or run a switchback test to measure how a new model performs in the real world.
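The randomization at the heart of a switchback test can be sketched simply: split time into fixed windows and randomly assign each window to the baseline or the candidate model, so both serve real traffic under comparable conditions. This is a conceptual sketch, not the Nextmv API; the window length and seed are arbitrary.

```python
# Sketch of switchback randomization: assign alternating time windows to
# the baseline or candidate model at random. Illustrative only.
import random
from datetime import datetime, timedelta

def switchback_plan(start, total_hours, window_hours, seed=7):
    """Assign each time window to 'baseline' or 'candidate' at random."""
    rng = random.Random(seed)  # seeded so the plan is reproducible
    plan = []
    t = start
    end = start + timedelta(hours=total_hours)
    while t < end:
        plan.append((t, rng.choice(["baseline", "candidate"])))
        t += timedelta(hours=window_hours)
    return plan

# One day split into 4-hour windows -> 6 randomized treatment windows.
plan = switchback_plan(datetime(2024, 1, 1), total_hours=24, window_hours=4)
for window_start, arm in plan:
    print(window_start.isoformat(), arm)
```

Comparing KPIs across the two groups of windows then estimates the candidate's real-world effect while averaging out time-of-day variation.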

Learn more about online testing →

Derisk decision model rollout to production environments

The Nextmv testing tools are built by decision science experts specifically for iterating, troubleshooting, and improving decision optimization algorithms. Create an experiment in a few clicks in the Nextmv console and find the answers you need to move forward with batch acceptance tests, shadow tests, and more.

Batch experiments

Run exploratory experiments on one or more decision models.

Acceptance tests

Determine whether a new model meets business KPIs compared to a baseline model.

Scenario tests

Identify outcomes for different inputs, models, or decisions.

Shadow tests

Run a candidate model alongside a production model under production conditions.

Switchback tests

Randomize the candidate treatment across units of time and/or location.


Compare the scale, speed, and performance of models and solvers.

Create a clear, repeatable path to production

Incorporate testing into your model development workflows with our standardized framework. Perform experiments within the context of your apps in the Nextmv console – so details like version history, run history, and experiment results are in one place. 

When the next sprint kicks off, you’ll know exactly where and how testing should happen each step of the way.

Share results, collaboratively investigate

Experimentation goes beyond answering a single question in a void. When test results are transparent and stakeholders can dig into the details, troubleshooting becomes more productive. Bridge the gaps between product managers, data scientists, and data analysts by inviting them to collaborate in a shared workspace in the Nextmv console.

See collaboration in action →

Get started | Talk with us