Tools to Deepen Your Understanding of a Complex World
Imagine what you could accomplish if you could predict the future. You could make a fortune that dwarfs Warren Buffett’s. You would know when that Kickstarter you backed eighteen months ago will actually deliver. You could, in Wayne Gretzky’s immortal words, skate to where the puck is going.
That’s why leaders have been petitioning oracles for millennia. Ancient Greeks made pilgrimages to Delphi to solicit enigmatic one-liners from priestesses. Roman generals commissioned augurs to read entrails before major campaigns. Modern corporate executives prefer to hire McKinsey.
One of the greatest accomplishments of modern technology is that we can use it to predict certain kinds of very specific futures with astonishing accuracy. Your phone can tell you what the weather will be like tomorrow. Google Maps will notify you when you need to leave to make it to the airport on time during rush hour. Predictive text finishes your sentences when you’re drafting an email. Hedge funds surf opaque trends to arbitrage high-frequency trades. Utilities anticipate peaks and troughs in energy demand to balance load across the grid. These miraculous feats make good on Arthur C. Clarke’s observation that any sufficiently advanced technology is indistinguishable from magic.
But even magic has its limits.
Much of humanity’s newfound predictive power stems from advances in machine learning. Dramatic growth in raw computational capacity made it possible to use machine learning to identify and project subtle patterns in vast datasets. If you have sufficient data—i.e. mind-boggling quantities—you can train algorithms on it, and then play those algorithms forward to see what happens next.
That approach works exceptionally well when you’re predicting things like whether a financial transaction is fraudulent, but it fails entirely when you’re trying to predict the growth of, say, a novel epidemic.
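To see what that first kind of prediction looks like in practice, here’s a minimal sketch of the “train on the past, project forward” pattern, using scikit-learn on invented transaction features. It’s illustrative only, not code from any production fraud system, and the features, labels, and threshold are all made up:

```python
# A minimal sketch of "train on the past, project forward" using made-up data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for historical transactions: amount, hour of day, merchant risk score.
X_history = rng.random((10_000, 3))
# Past labels: fraud tended to occur when amount and merchant risk were jointly high (plus noise).
y_history = (X_history[:, 0] + X_history[:, 2] + rng.normal(0, 0.1, 10_000) > 1.4).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)              # learn patterns from what has already happened

new_transaction = [[0.9, 0.2, 0.8]]          # a transaction the model has never seen
print(model.predict_proba(new_transaction))  # probabilities of [legitimate, fraud]
```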
Why? Two reasons.
First, when something is truly new—be it a product or a virus—by definition it hasn’t produced any data to train your algorithms on. There’s no grist for the silicon mill. Machine learning can tell you a lot about how something that’s happened before may happen again, but it can’t tell you anything about de novo phenomena.
The second reason is that humans have agency. When COVID took off in early 2020, epidemiologists modeled how they expected the virus to spread. But when policymakers saw the dire projections, they intervened with stay-at-home orders and social distancing policies. These interventions ensured that reality ultimately didn’t match the original models. The results of the models influenced the outcome. In fact, the whole point of making epidemiological models in the first place is to influence outcomes.
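To make the point about agency concrete, here’s a deliberately simplified toy SIR model in plain Python. The parameters are invented, and this is not the model any epidemiologist actually used in 2020; it just shows how the same outbreak looks radically different once the projection itself prompts people to cut their contacts:

```python
# A toy SIR epidemic model with invented parameters. Illustrative only.

def sir_peak_infected(beta, gamma=0.1, days=365, population=1_000_000, initial_infected=10):
    """Run a simple daily-step SIR simulation and return the peak number infected at once."""
    s, i, r = population - initial_infected, initial_infected, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population   # transmission scales with contact rate beta
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# The original projection, with no change in behavior...
print(f"No intervention:   peak ≈ {sir_peak_infected(beta=0.30):,.0f} infected at once")
# ...versus what happens if that projection prompts distancing that halves the contact rate.
print(f"With intervention: peak ≈ {sir_peak_infected(beta=0.15):,.0f} infected at once")
```

Neither number is a prediction of what will happen; each is a projection of what would happen under a given set of choices, and the choices are up to us.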
So machine learning and related approaches excel at projecting established patterns from existing data, but struggle with emergent phenomena and with complex, adaptive systems shaped by human agency.
The problem is that if you’re a leader, your job is to come up with genuine innovations and make strategic interventions in complex, adaptive systems. If a decision won’t make a meaningful impact, then why waste your time with it? But if a decision can make a meaningful impact, it will throw off all your predictive analytics. To succeed, you need to find points of leverage and try genuinely new things. You need to maximize your agency in a complex world.
Here at Epistemix, we build data science tools to help you do just that.
Imagine that you stand at a crossroads. You’re facing a tough decision on which the future of your organization depends. The stakes are high, and while you’d love to fall back on what worked last time around, this time it’s different. What might help you make this fateful call?
We start by spinning up a virtual world that represents the people you care about—your customers, citizens, partners, stakeholders, and so on—drawn from our synthetic population of the entire United States, which is statistically accurate down to the census block level and lets you import whatever existing data you have. These virtual people live virtual lives in the simulation: they work, study, gather, travel, and age just as we do in our daily lives. Then you sketch out the ground rules for the scenarios you want to examine—like these scenarios for the future of COVID that my cofounder shared in a widely read STAT op-ed, or what might happen if you introduce a new product to a large market, or how two competing policies would play out differently in a given region. Finally, you run the scenarios in thousands of parallel simulations, collating results according to probability and iterating to update assumptions and reduce uncertainty. The process provides the context you need to choose the best path forward from the crossroads, or even to blaze a new trail.
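For readers who want to see the shape of that workflow, here’s a deliberately tiny sketch in plain Python and NumPy. It is not the Epistemix platform, its synthetic population, or its models; the scenario, the agents’ behavior, and every parameter are invented for illustration. It runs one toy new-product scenario across thousands of simulated worlds and collates the outcomes into a distribution:

```python
# A toy version of "run a scenario in many parallel worlds and collate the results."
# Invented parameters and behavior; not the Epistemix platform or its synthetic population.
import numpy as np

def run_scenario(rng, num_agents=10_000, days=45,
                 base_adopt_prob=0.001, peer_boost=0.00001):
    """One simulated world: agents adopt partly on their own, partly because peers already have."""
    adopters = 0
    for _ in range(days):
        # Adoption pressure grows as more of an agent's peers have already adopted.
        prob = min(1.0, base_adopt_prob + peer_boost * adopters)
        adopters += rng.binomial(num_agents - adopters, prob)
    return adopters

rng = np.random.default_rng(42)
results = np.array([run_scenario(rng) for _ in range(2_000)])  # thousands of parallel runs

low, median, high = np.percentile(results, [10, 50, 90])
print(f"45-day adopters: 10th pct {low:,.0f} | median {median:,.0f} | 90th pct {high:,.0f}")
```

Swap the toy adoption rule for behavior grounded in the synthetic population and your own data, and the same loop (run many worlds, collate the outcomes, tighten the assumptions) becomes a way to compare decisions before you make them.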
This approach doesn’t replace machine learning’s miraculous but limited predictions; it complements them: another tool in your data science toolkit to deploy when circumstances require deeper understanding and direct action. Unlike machine learning, our platform lets you explore the dangers and possibilities of complex systems, identify inflection points, stress-test new strategies, and compare the specific impacts of particular decisions. Also unlike machine learning, our agent-based modeling is explainable: you can dissect each individual simulation to see why it evolved the way it did.
That explainability is crucial because it deepens your understanding of the interacting factors your future depends on. And the better you understand the underlying dynamics influencing the choice you’re facing, the less you’ll need to resort to unreliable oracles, and the better prepared you’ll be to navigate a world in constant flux.
John Cordier is the CEO of Epistemix.