Policy decisions are influenced by many factors, from the ideology of the policymaker and their advisors to political expediency. Most would also agree that key political decisions should be evidence-based. However, this is easier said than done. Understanding what evidence policymakers need, and how they should evaluate it, is key to more robust decision-making.
Politicians often make policy decisions based on many factors that are not evidence-based, such as ideology and political expediency. There is also often pressure to reach political decisions very quickly.
It is common for experts who advise policymakers to disagree, due to their own values, ideologies, institutional roles, or support for particular demographic groups. This can lead them to place different weights on certain costs and benefits and come to different conclusions. It is also difficult to understand the merits and drawbacks of different forms of evidence. Further, it can be even more challenging to apply academic, theoretical arguments to real-world cases.
In her recent work, Professor Ann Nevile at the Australian National University asks: What is the nature of the evidence needed by policymakers? And how should they go about evaluating it, especially given the differing conclusions drawn by various experts?
Disagreement amongst experts is key to this discussion. When experts disagree, decision-making is often delayed and politicians retreat to less radical alternatives in order to reduce risk.
For example, the decision to float the Australian dollar was delayed by opposition from the Head of Treasury. Although the Treasurer, the Prime Minister, the Reserve Bank and other relevant departments all believed the exchange rate should be deregulated, the Head of the Treasury believed that the costs of deregulation outweighed the benefits. The Treasurer knew that the potential consequences were unpredictable, and didn’t want to make such a historic decision without the Treasury on board. Officials shared a common goal, but disagreed on the methods of achieving it.
In 1983, a capital inflow crisis in Australia urgently raised the question of deregulating foreign exchange markets. As part of discussions around this decision, Prime Minister Bob Hawke asked the Reserve Bank Governor about removing all exchange controls, not just those necessary to float the dollar. As a result, Australia went further than most other countries at the time and removed almost all foreign exchange controls.
This example illustrates the effects of disagreement amongst technical experts, and the importance of personal attributes of key decision makers – an unpredictable factor that is often overlooked.
So, what can policy advisors do to ensure the information they are providing to politicians is useful?
Professor Nevile argues that it is vital to draw on a range of policy instruments when advising government. Policy instruments are simply tools that governments use to achieve specific objectives, such as laws, regulations, economic incentives, campaigns and investments. She also stresses the importance of connecting theory to the real world: combining factual information with a number of theoretical perspectives gives policymakers the breadth of information they need to make an informed decision.
Professor Nevile looks at this in more detail in relation to a specific type of policy advice: the evaluation of government programs.
An effective way of identifying the best policy interventions is to deploy small-scale pilot programs, which can then be evaluated to identify the specific factors that underpin success or failure. However, for this to work, the evaluation methodology needs to be robust.
Traditionally, policy evaluations have sought to emulate the natural sciences by isolating causal factors through experimental research methods. In instances where a group undergoing a policy intervention and a comparable control group can be found, randomised control trials can provide an answer to the question of whether a particular program has achieved its stated, formal objectives.
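In essence, a randomised control trial compares the average outcome of a randomly assigned treatment group against that of a control group. The following is a minimal illustrative sketch using simulated data (the outcome scale, group sizes, and effect size are all invented for illustration, not drawn from any real program):

```python
import random
import statistics

random.seed(0)

# Hypothetical simulated data: an outcome score for each participant.
# Random assignment means the two groups differ, on average, only in
# whether they received the program, so the difference in mean outcomes
# estimates the program's average effect on its formal objective.
control = [random.gauss(50, 10) for _ in range(200)]    # did not receive the program
treatment = [random.gauss(55, 10) for _ in range(200)]  # received the program

effect = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated average program effect: {effect:.1f}")
```

Note that such a comparison only answers whether the stated objective was met on average; as the article goes on to argue, it says nothing about implementation, framing, or unanticipated consequences.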
However, in the Australian context and around the world, opportunities for this are rare. It must also be recognised that policies may have multiple, even conflicting, objectives as a result of political compromise.
The traditional view of evaluation also assumes a linear relationship between policy and implementation. From this perspective, policy is static and objectives will be achieved as long as certain conditions are present. However, in reality, this is not the case. Bureaucracies often have limited control over practices ‘on the ground’.
Finally, traditional methods of evaluation do not take into account the importance of policy framing. Policy framing refers to the way in which policy problems are defined. For example, unemployment might be identified as an issue to be addressed through policy. However, framing determines whether unemployment is perceived and tackled as a problem of a workforce skills deficit or a lack of aggregate demand in the economy. These two different framings of unemployment would lead to very different policy interventions.
It is therefore vital that policymakers reject standardised approaches to evaluation that ignore the nuances of competing political interests, complex implementation, and the relationships between formal and informal policy objectives.
Professor Nevile advocates for a pluralist approach to evaluation.
For pluralist evaluators, there is no single, universal logic of evaluation that can be applied to all projects or programs. Rather, they believe that combining the strengths of the standard experimental method with the strengths of other methods allows for a more comprehensive assessment.
A pluralist evaluation utilises a range of research methods and types of data. By using both qualitative and quantitative data alongside theoretical perspectives, evaluators can build a complex but realistic picture of policy success. This method can explain failure more accurately, because it examines process as well as outcomes. It can also identify the unanticipated consequences of policies and facilitate the implementation of research results.
While technical experts often disagree over the interpretation of various forms of evidence, governments are attracted to the idea of evidence-based policy because of the additional authority this provides. For this to be successful, policy advisors need to understand the best ways to communicate evidence.
For Professor Nevile, the combination of technical statistical techniques and qualitative descriptions of processes provides a rich, balanced and accurate assessment of policy interventions.