There are times when we publicly accept an approximation to move forward with life. Take, for instance, π (pi), the irrational number 3.14159 (and a lot of change) starring as the ratio of a circle’s circumference to its diameter. Yes, many of us recall our friend Archimedes and π from trigonometry class. We plug π in to make sense of some spatial relationship. There is no riotous call for a proof unless it is on the exam or the SAT. Do students even write proofs anymore? π is rationally accepted as a means to an end, a way to finish that darn homework on distributed systems. I will argue that the allegory of π can help us avoid clouded, myopic policy decisions.
A mental model is a cognitive tool. Success comes from altering those mental models; in other words, a mental model is not the dénouement. Formal simulations and mental models need each other. Mental models fall asleep on the job when the complexity of a policy is dynamic (Sterman 2002). But it must be remembered that policies are not spit out by modelling programs, nor should they be. So why am I calling upon modelling when policymakers are already bombarded with so much? I present the “policy π”: the application of model testing to policymaking. “Policy π” focuses on making decisions that account for both the systemic issues uncovered through the approximations garnered from formal modelling and the practical issues posed in implementing the policy in the real world. Included in this “policy π” is the common issue of policy ratification.
Regardless of the policy model used in a policy system,
when a new policy is written or committed in the system,
the administrator must consider [policy ratification as] how the new policy interacts with those
already existing in the system. (Agrawal et al. 2005)
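The quoted idea can be made concrete with a minimal sketch. This is not the authors’ algorithm from the cited paper; the `Policy` structure, the conflict criterion (same subject, contradictory action), and all names here are illustrative assumptions, just to show what “checking how a new policy interacts with existing ones” might look like before a commit.

```python
# A minimal, hypothetical sketch of policy ratification: before a new
# policy is committed, check it against the policies already in force.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    subject: str   # what the policy governs, e.g. "bandwidth" (illustrative)
    action: str    # e.g. "allow" or "deny" (illustrative)

def find_conflicts(new_policy, existing):
    """Return existing policies that govern the same subject as the new
    policy but prescribe a different action."""
    return [p for p in existing
            if p.subject == new_policy.subject and p.action != new_policy.action]

def ratify(new_policy, existing):
    """Admit the new policy only if no existing policy contradicts it."""
    conflicts = find_conflicts(new_policy, existing)
    return (len(conflicts) == 0, conflicts)

# Usage: a system with two policies, and a proposed third that clashes.
system = [Policy("p1", "bandwidth", "allow"), Policy("p2", "storage", "deny")]
ok, clashes = ratify(Policy("p3", "bandwidth", "deny"), system)
# ok is False: p3 contradicts p1 on "bandwidth"
```

Real ratification engines weigh far subtler interactions (priorities, partial overlaps, delegation), but even this toy check shows why an administrator cannot evaluate a new policy in isolation.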
The model does NOT offer the “answer.” But the model is downright necessary to understand what is going on. When people brainstorm, we battle with mental (qualitative) models pieced together from divergent points (inputs). We have all been there. Mentally, we approximate all the time. It is a natural part of the policy process. Amid the miasmic stench of permanent markers littering a flip chart, the decision makers, and perhaps a silent minority, are left to make sense of that complex data in a process with no obvious end in sight. There needs to be a “reflective conversation with the situation” (the policy) about the simulated results (Schön 1992). The model does NOT forecast. The model may not capture all of the connections to related policies. It is a simulation. It is a model based on decision rules. It is a network with a priori boundaries.
Even with the best minds around the table, there still must be a mechanism to unleash the mental models in the room. Just as health policy would not move forward without epidemiological evidence, so should systems thinking have its place in the policy process. Systems thinking offers math to support or refute initial reactions to the early conditions viewed under a policy. But the model will not be a predictive crystal ball. Much chatter has circulated about the overreach of models into the world of prediction in policy. Agreed. But the meat of policy is not the model, and in my opinion it should not be viewed that way.
The policy π supported by the model may give you sweet potato when the table (the policymakers) ordered and expected the boysenberry that is now out of season. There may still be that flaky crust and reasonable price. But the substitution of a tuber for a berry is a notable change. Some of the ingredients may change. The taste will differ. Cooking time may change. You may not like sweet potato. The change to sweet potato could require a new produce distributor, who must then recalibrate (with excitement) to accommodate the needs of a new customer. Your fellow diners may try to convince you to hold out for boysenberry, though there is a chance this sweet potato pie may remind you of Grandma’s. Sometimes a model changes that policy “pie” expectation. The model will not cover every possible scenario. Can you accept the presentation of the sweet potato? And perhaps next week, once again upend your policy gastronomy? Yes, a pie is a pie. Both sweet potato and boysenberry are dicotyledonous. Few would confuse the two. Each presents its own unique set of growth habits and susceptibilities. But foodstuffs present different phenotypes and culinary experiences. Policy satiates, leaving some bellies full while others push away from the table wanting. Each policy π must be approached and respected as a necessary prompt for prudent, systemic debate, not as a letdown of the expected π wished to grace the plate. How filling was that policy π?
Agrawal, D., et al. (2005). Policy ratification. Sixth IEEE International Workshop on Policies for Distributed Systems and Networks.
Schön, D. (1992). The theory of inquiry: Dewey’s legacy to education. Curriculum Inquiry, 22(2), 119–139.
Sterman, J. (2002). All models are wrong: Reflections on becoming a systems scientist. System Dynamics Review, 18, 501–531.