Growing up as teenagers back in the 1960s and 70s, my generation believed that the world after the year 2001 would be totally different. In a way this turned out to be true: along with technological advancements we could not have dreamt of at the time, the political, security, economic, environmental, and societal problems our planet faces today are not only unprecedented but defy comprehension in their nature and severity.
The fact that over the last ten years a series of ‘second versions’ has been introduced and commonly accepted—such as Web 2.0, Enterprise 2.0, Governance 2.0, Globalisation 2.0—suggests that we have come to realise that the world around us and its global processes have changed in categorical, qualitative terms rather than merely quantitative ones. And this ‘version two’ trend is not limited to the development of the worldwide web and the use of social software platforms; it reflects fundamental changes in the way we interact with one another as individuals, groups, states and societies, and in how we cope, collectively, with the complex and unpredictable world of the twenty-first century.
Still, the problems themselves are not as surprising as our inability to cope with them. The Financial Times editor, Lionel Barber, stressed this point when describing global trends under the ‘Globalisation 2.0’ banner: ‘[n]ational governments are desperate to regain a measure of control’ over the mounting problems posed by global processes. It looks as though our political and economic institutions were not prepared for this change and are now struggling in hesitant attempts to adjust. Francis Fukuyama identifies the problem (with regard to democracies) as one of ‘political decay’ and puts his diagnosis as follows: ‘The failure of modern democracies come in many flavours, but the dominant one in the early twenty-first century is probably state weakness: contemporary democracies become too easily gridlocked and rigid, and thus unable to make difficult decisions to ensure their long-term economic and political survival.’
The fundamental insight offered by prominent thinkers of the day has been that we (as humankind, at all levels, from individuals to institutions) must embrace uncertainty, adapt to it, and evolve and grow stronger with it, instead of pretending that we can predict, measure and even manage the risks (let alone do so without fundamental changes to our no-longer-effective practices).
When it comes to the state, the focus on building its evolutionary capabilities implies that public policy making, as one of its overarching functions, should be the first to adapt. Over the past century, especially its second half, public policy relied heavily on rational choice theory and related models, which use cause-and-effect and pattern-recognition methods (backed by sophisticated statistical computation) to explain operational environments with stable settings, known variables and an abundance of historical data. Nowadays, however, this approach, though still relevant and useful, coupled with a hierarchical model of decision making and a rigid, goal-fixated implementation design, is showing its limitations in offering viable policy solutions in complex, dynamic settings and in effectively addressing emerging problems across various domains. Against this background, vast evidence produced by social scientists and practitioners from various fields of expertise over the last three decades has convincingly demonstrated the benefits of experimentation, evidence-based policy, and flexible, adaptive approaches to decision making.
On a positive note, there is growing recognition and use, across the world, of new methods of policy analysis and design. As the behavioural economist Richard Thaler reports with reference to the Economic and Social Research Council’s 2014 survey, more than 130 countries have utilised behavioural science insights in their policies, while over 50 have developed policies influenced by the behavioural sciences. In other words, new (and at times distinctively different but compatible) approaches are already in use, but they need to be recognised by governments as a legitimate choice and, in certain situations, even as a default option instead of Public Policy 1.0 methods.
In this post, therefore, I attempt to outline the key features of Public Policy 2.0, drawing on insights from various fields of knowledge grouped under the umbrella of effective policy making in the twenty-first century. It is an initial sketch, produced with an eye towards starting a discussion and, ideally, collaboration around this project. I should mention here that attempts have already been made to define the ‘version two’ of policy analysis and evaluation.
As a working definition I would suggest the following: Public Policy 2.0 is a proactive, experimental approach to policy making which derives from an appreciation of the complexity and unpredictability of the world, and which is deployed with the aim of enabling states and societies to adapt to a rapidly changing environment, and to evolve and thrive with and within it.
To specify, Public Policy 2.0 rests on the following principles:
— builds bottom-up, where greater authority and responsibility at the tactical and even ‘limited task’ level are married with a strong coordinating and supporting role at the centre of government;
— employs a trial-and-error approach, testing many ideas simultaneously through small pilots and making sense of the findings (including those from inevitable failures), in order to collect and analyse evidence, learn, and keep pace with the changing environment in a timely manner and on the go;
— exercises management methods that rely on experimentation and feedback, support creativity, and encourage unorthodox approaches;
— maintains an ongoing dialogue between multidisciplinary teams of researchers and practitioners and relies, for implementation, on broad-based in-country and international collaborative networks of partners; and
— treats structures as non-rigid and adjustable to the evolving context, and both strategic and tactical goals as subject to constant revision.
At this point I see the benefits coming in various ways, namely:
— sense-making: deploys decision-making mechanisms that, while relying on less data and less complicated computation, allow insights from the social and behavioural sciences to inform reasonable and effective policy interventions, given the limited time and information available;
— decentralisation: puts decision making in the hands of those who deal with problems in real time and space; helps build a cadre of experienced, tested managers who are ready to assume responsibility along with authority; and encourages initiative and reasonable risk-taking;
— analysis: opens opportunities for generating more evidence, from traditional experimental and quasi-experimental designs as well as from qualitative and deliberative methods, making policy assessment and evaluation more attuned to the values and aspirations of stakeholders and more relevant to delivering the expected impacts and benefits;
— design: allows the design of strategies and policy programmes that combine the traditional integrated approach with a modular architecture, enabling potentially high-risk components to be decoupled from the rest of the programme and creating more opportunities for synergistic effects during implementation;
— monitoring: by making the objectives of policy programmes subject to continuous examination, revision and adjustment, enables the use of simple but informative methods to quickly and meaningfully assess the real progress made along the way.
With that said, this proposal does not call for ‘policy reform’, for the dismissal of the present Public Policy 1.0 approach, or for a wholesale shift to Public Policy 2.0, whether immediately or in the foreseeable future. Instead, I would advocate their complementarity: a mutually reinforcing, parallel application. The major task at the initial stage will be to demonstrate the benefits (as ever) and to ensure that Public Policy 2.0 methods take their place as equals to those of Public Policy 1.0 when the analytical, implementation-design and managerial aspects of any policy issue are being considered.
This is especially relevant to countries engaged in state-building efforts in the transition from authoritarian regimes towards democracy, where experimentation is imperative if they are to find their own way: to tailor the practices tested in liberal democracies (though, as we have seen, still in need of further enhancement, along with those yet untested) to their political tradition, culture, and present-day circumstances. I believe that international organisations, development agencies and the broader donor community should place the strengthening of the policy-making capacity of recipient governments at the centre of assistance, and should do so while encouraging creativity, innovation and experimentation, so as to enable the most effective and harmonious combination of both Public Policy methodological versions.
 From the speech delivered at the FT-Nikkei symposium: Lionel Barber, ‘Globalisation 2.0 – an optimistic outlook,’ Financial Times, 14 January 2016
 The quote is from: Francis Fukuyama, The Origins of Political Order: From Prehuman Times to the French Revolution (London: Profile Books, 2011). He elaborates on this topic in more detail in his recent book, the second instalment of the series: Francis Fukuyama, Political Order and Political Decay: From the Industrial Revolution to the Globalisation of Democracy (London: Profile Books, 2014). These two volumes are essential reading for anyone who wants to understand the nature of political processes and to make sense of current developments.
 Among those seminal works: Eric D. Beinhocker, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics (Boston, MA: Harvard Business School Press, 2006); Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (London: Allen Lane, 2007); and Nassim Nicholas Taleb, Antifragile: How to Live in a World We Don’t Understand (London: Allen Lane, 2012)
 There is a vast literature—books, articles in academic journals, reports by think tanks—on the benefits of experimentation, evidence-based policy, and flexible management and decision-making methods. These are some of my favourite books: Daniel Kahneman, Thinking, Fast and Slow (London: Penguin Books, 2012); Gary Klein, Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making (Cambridge, MA: The MIT Press, 2009); Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group, Simple Heuristics That Make Us Smart (New York and Oxford: Oxford University Press, 1999); Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (New Haven, CT and London: Yale University Press, 2008); Tim Harford, Adapt: Why Success Always Starts with Failure (London: Little, Brown, 2011)
 Richard H. Thaler, Misbehaving: The Making of Behavioural Economics (London: Allen Lane, 2015), p. 344
 Agrell and Treverton, in their discussion of policy analysis, draw on an unpublished paper by the economist and public policy scholar Robert Klitgaard, Policy Analysis and Evaluation 2.0 (2012): Wilhelm Agrell and Gregory F. Treverton, National Intelligence and Science: Beyond the Great Divide in Analysis and Policy (New York: Oxford University Press, 2015), pp. 115-135. See also, for an elaborate account of post-positivist policy analysis: Ya Li, ‘Think tank 2.0 for deliberative policy analysis,’ Policy Sciences, 48/1 (2015), pp. 25-50
 It is no accident that, in defining Policy Analysis version 2.0, Klitgaard builds on the characteristics of evaluation suited to a world of uncertainty, borrowing from the leading authority on qualitative analysis methods, Michael Quinn Patton: M. Q. Patton, ‘Use as a Criterion of Quality in Evaluation,’ in A. Benson, C. Lloyd, and D. M. Hinn (eds.), Visions of Quality: How Evaluators Define, Understand, and Represent Program Quality: Advances in Program Evaluation (Kidlington, UK: Elsevier Science, 2001), pp. 23-26. In a more recent publication, Patton points to the advantages of qualitative analysis methods (which are highly relevant to policy analysis and evaluation version 2.0, as advocated in this post): ‘Indeed, qualitative evaluation and in-depth case studies were utilization-focused methodological responses to the kinds of evaluation questions stakeholders were asking and the criteria they applied to judge quality of finding: contextual understanding, in-depth analysis, and cross-case comparisons.’ [Michael Q. Patton, ‘The Sociological Roots of Utilization-Focused Evaluation,’ The American Sociologist, 46/4 (2015), pp. 457-462 at 461.]