By Jack Homer, VP of Professional Practice
I once developed a relatively compact SD model to help a major chemical company restructure their plastics business. The model was well received by senior management but was not considered granular enough for accurately projecting the financial consequences of a strategic shift. The company had recently developed a rich dataset on the detailed costs of their operations, and they wanted these data incorporated into the model.
I spent several months expanding and deepening the model so that it could take these operational data into account—reflecting all the additional elements of structure the new data logically implied. I presented the expanded model again to the management team, who seemed overwhelmed by all its detail and clearly did not trust it as much as they did the original smaller model. This outcome surprised me: weren’t they the ones who had asked for the greater granularity in the first place?
In SD practice, we always face the question of how large and detailed a model should be. We cannot depend on clients to answer this question, because, as in the case above, they typically do not understand beforehand how excessive model detail can interfere with clarity and comprehensibility.
Moreover, a bigger and more complete model is not necessarily a more reliable and trustworthy model, unless much more time and effort are invested to develop the necessary supporting evidence and do adequate sensitivity testing. Doubling the number of variables in a model may lead, for example, to tripling or quadrupling the number of parameter values that need to be estimated and the amount of sensitivity testing that needs to be done.
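To see why the effort can outpace the size, here is a back-of-envelope sketch (the link density and variable counts are invented for illustration, not taken from any actual model): if the density of causal links among variables stays roughly constant as a model grows, then the number of links, and hence of parameters to estimate, grows with the number of pairs of variables, which is quadratic in model size.

```python
# Illustrative only: assume parameters scale with causal links, and links
# are a fixed fraction of all possible directed pairs of variables.
# (The 6% density and the 50/100 variable counts are invented numbers.)

def estimated_parameters(n_variables, link_density=0.06):
    """Parameters ~ links ~ a fixed fraction of variable pairs."""
    return round(link_density * n_variables * (n_variables - 1))

small = estimated_parameters(50)    # a 50-variable model: 147 parameters
large = estimated_parameters(100)   # the same model doubled: 594 parameters
ratio = large / small               # roughly a 4x estimation burden
```

Under this (admittedly stylized) assumption, doubling the variable count roughly quadruples the parameters to estimate, and the sensitivity-testing workload grows at least as fast, which is consistent with the tripling or quadrupling described above.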
How, then, can we “right-size” our models? After 40 years of doing SD, I still sometimes wrestle with this question, an indication that there is no simple or uniform answer. But I’ll try to explain my usual approach.
Let’s first recognize that size has two components: model breadth or boundary, and level of aggregation. I find that model boundary setting is usually not so hard. When first developing a model, I set the initial model boundary based mostly on what the clients have to say about potential interventions as well as outcome measures of interest. The boundary may change naturally over the course of a modeling project, growing larger as more issues are addressed, or sometimes shrinking as some variables are dropped for lack of relevance or significance. Such changes in boundary are made with little trouble in most cases.
Level of aggregation, or the degree of “lumping”, on the other hand, is a more permanent decision that typically touches more parts of the model, and it always causes me more concern. How many demographic categories should I consider in modeling population health and disease? How many sectors in modeling a national economy? How many product or service lines in modeling a company?
Clients are often used to seeing data broken out by multiple categories (as on a spreadsheet) and may push the modeler in the direction of greater disaggregation. But SD modelers need to think for themselves. They need to ask whether there is any strategic distinction among the multiple categories; that is, any way in which the interventions under consideration may cause the various categories to change in their relative proportions or move in different directions. If the answer is yes, then disaggregation may be warranted; if no, then you can safely lump the categories together and deal in terms of weighted averages.
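The lumping test above can be made concrete with a small sketch (the category names and numbers are hypothetical): when an intervention moves every category by the same relative amount, the mix stays fixed, so a single weighted average responds exactly as the disaggregated categories would, and nothing strategic is lost by lumping.

```python
# Hypothetical example: two product categories with assumed sizes and margins.
sizes = {"retail": 800.0, "industrial": 200.0}
margins = {"retail": 0.10, "industrial": 0.25}

def weighted_average(values, weights):
    """Aggregate per-category values into one lumped figure."""
    total = sum(weights.values())
    return sum(values[k] * weights[k] for k in values) / total

baseline = weighted_average(margins, sizes)   # 0.13

# An intervention that shifts every category by the same proportion (+20%)
# leaves the relative mix unchanged, so the lumped aggregate shifts by
# exactly the same proportion: no strategic distinction is hidden.
shifted = {k: m * 1.2 for k, m in margins.items()}
assert abs(weighted_average(shifted, sizes) - baseline * 1.2) < 1e-12
```

If, by contrast, the intervention moved retail and industrial margins in different directions, the aggregate alone could not show it, and that is precisely the case where disaggregation earns its keep.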
Data availability should also play a role in the aggregation decision. I often find that data are available in disaggregated form for a few of the key variables but not most of them. In this case, a decision to disaggregate will introduce more uncertainty about parameter values and could cast doubt on model results.
In sum, you should disaggregate only when it serves the model’s strategic purpose and when the disaggregation is broadly supported by evidence. If you get the level of aggregation right, the right-sized model will usually follow.