By Jack Homer, VP of Professional Practice
Many words have been written about how SD modeling should be scientific and rigorous. An initial dynamic hypothesis will always have shortcomings, which model testing can reveal. We must go through an iterative process, revising a model repeatedly so that we get the dynamic story right and can project the impacts of interventions more reliably.
Without this scientific approach, it is all too easy to get stuck on a model that differs from reality and leads to wrong conclusions. We have a responsibility to our clients and audiences to avoid the twin hazards of preconceived notions and groupthink. We should ideally bring to bear the full array of modern techniques of data collection, parameter estimation, and model testing (see John Sterman’s “SD at 60: The Path Forward”; SDR 2018); and, at a minimum, we should follow a systematic, evidence-rich process that leads toward greater model reliability (see my “Best Practices in SD Modeling: A Practitioner’s View”; SDR 2019).
But what will such rigor buy us? Recognized superiority over other analytic methods? Respect and influence in policy circles? An ability to change the world?
Here I would like to consider the words of Dana Meadows and Jenny Robinson in their classic book, “The Electronic Oracle: Computer Models and Social Decisions” (Wiley 1985). In the book’s final chapter, they describe what would be necessary to make systems modelers much more effective agents of social transformation. Their answer, in brief, is (1) greater adherence to the scientific method (including “painstaking documentation” and “community self-policing with stringent rules for replicability, criticism, testing, and evidence”); (2) more emphasis on design and experimentation (as opposed to narrow prediction and analysis); and (3) more collaboration with others addressing the same problem (while knowing one’s own limits and biases).
In other words, Meadows and Robinson urge rigor in modeling, but also boldness tempered by humility. Our conclusions should reflect uncertainty—not only about parameter values but about the model’s very structure. Incisive thinking and evidence should be brought to bear, but still one should not expect to be proclaimed the best or most definitive.
One should, rather, just aim to be taken seriously and brought into the discussion with other analysts (who have their own worthy approaches) and decision-makers (and their diversity of perspectives). One should seek improvement in the world, which occurs gradually and through the contribution of many voices—rather than seeking a personal rise to the top.
I would like to see us, as a field, pursue all three parts of the Meadows/Robinson prescription for more effective systems modeling: rigor, creativity, and collaboration. If we can do that, we will buy ourselves a recognized voice in strategy and policy conversations. We will also cast off our unhelpful insularity and join forces as equals with colleagues in other analytic disciplines.