Ten Most Important Nudges, for All Walks of Life


Nudge as Choice Architecture

Knowing how people make decisions is important for many practical reasons: making better policies, delivering products and services of higher value, managing teams and companies, and winning campaigns effectively (think of elections, for example). Not surprisingly, the way we make decisions as individuals and groups interests scientists—psychologists, sociologists, behavioural economists, and scholars of political science and communications. They have been studying human decision-making for a long time, but the real boom started with the publication of a seminal work by Richard Thaler and Cass Sunstein, who coined the term ‘nudge’ and made the concept comprehensible to broad audiences of non-experts (in contrast to jargon-filled academic journals).

This is how they define it: ‘A nudge, as we will use the term, is any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.’

What made the notion so popular was the efficiency of its practical application—relatively unsophisticated and inexpensive methods that can deliver tangible benefits. Moreover, as its proponents claim, it is beneficial both ways: to governments and citizens in policy implementation and service delivery, and to businesses and customers alike in their interactions. The logic is pretty simple: by helping people make better choices about your products and services, you help yourself (through customer satisfaction, for instance).


Nudge and Free Will

The use of insights from behavioural science in public administration and business management has been growing for years. There are many followers of ‘nudging’ methods for communicating public policies (in the US and the UK, dedicated public offices have even been established), but also critics who claim that ‘nudging’ is paternalistic, even unethical.

The main concern of opponents is that by using behavioural science, governments and businesses (and in fact anyone else) can manipulate our decision-making and ‘softly’ push us toward choices that are not necessarily in our best interest. There is also the notion of free will in individual choice. For us, the very fact that a choice is made voluntarily matters. Indeed, we value it so much that we are ready to make sacrifices and accept its cost (at times stubbornly pursuing our course—remember sunk cost?), thus exercising our ‘right to be wrong’.

This is what I recall from Dostoyevsky: ‘What human being wants is just an independent choice, whatever the cost of this independence and whatever it may bring about.’ Therefore ‘rationality’ (strictly self-interest-based behaviour, as defined by mainstream economists in their models) is not necessarily the driving or determining factor of any given decision we make as individual human beings, and I assume this makes it impossible to fully model an individual’s behaviour (for good or bad). A model which does not incorporate our intuitive cognition is of little practical use.

Nudging Tips

So what are the most important nudges? According to Cass Sunstein, the co-author of Nudge, there are ten of them:

Default rules: Setting the option most beneficial to customers (as perceived by the initiative’s owners) as the default—most of us accept defaults automatically, without giving them a second thought. Applications may include automatic enrolment in programmes, including education, health, or savings;

Simplification: Making programmes easily navigable, even intuitive. This may include simplifying (at times numerous and lengthy) forms and regulations (which only experts dare to ‘decode’);

Uses of social norms: For example, emphasizing what most people do—putting phrases like ‘most people plan to vote’, ‘most people pay their taxes on time’, or ‘nine out of ten hotel guests reuse their towels’ in communications with customers. In the UK, this kind of nudge has proven effective in targeted interventions that support families with long-standing problems, turning their lives around and improving the life chances of children;

Increases in ease and convenience: We frequently reject offers, especially those requiring a change of habit, because we find them complicated (this is not the only reason, of course). The benefit should be presented up-front, so that it attracts attention immediately and without much effort. Applications may include, for example, making low-cost options eye-catching in a list, or healthy foods visible on the store shelf;


Disclosure: For example, the economic or environmental costs associated with energy use, or the full cost of certain credit cards, or making large amounts of data easily available online (as in the cases of data.gov and the Open Government Partnership);

Warnings: We know they work—anti-smoking campaigns, with their emotional appeal through graphic imagery on cigarette packaging, are an example. Other visual effects, such as large fonts, bold letters, and bright colours, can be effective in capturing people’s attention. Generally, visual effects can be used not only to warn but also to encourage certain behaviour: for example, the use of flags can affect tension between communities, feeding into reconciliation strategies (as shown in the Northern Ireland Government’s Shared Future policy);

Precommitment strategies: Nudges by which people commit to a certain course of action at a precise future moment in time—this is thought to better motivate action and to reduce procrastination;

Reminders: For example, by email or text message. The purposes vary broadly—from paying bills, to taking medicines, to making a banker’s or doctor’s appointment. Reminders also have a nice touch (‘we care’) which is good for building consumer trust and confidence. A closely related approach is ‘prompted choice’, by which people are not required to choose but are asked whether they want to choose (for example, clean energy or a new energy provider, a privacy setting on their computer, or whether to be organ donors);

Eliciting implementation intentions: Asking questions like ‘do you plan to vote?’ or ‘do you plan to vaccinate your child?’. Emphasizing people’s identity can also be effective (‘you are a voter, as your past practices suggest’). This nudge has produced some interesting outcomes, for example in encouraging people toward more sustainable transport habits—leaving their cars at home and using public transport;

Informing people of the nature and consequences of their own past choices: Private and public institutions often have a great deal of information about people’s past choices – for example, their expenditures on health care or their electric bills. The problem is that individuals often lack that information. If people obtain it, their behaviour can shift, often making markets work better (and saving a lot of money). Take, for example, initiatives like ‘smart disclosure’ in the US and the ‘midata’ project in the UK.

*                      *                      *

In the following article I will share examples from government experience as well as business practice of nudging people to take certain actions—practical applications of the above nudges (or stated intentions to apply them).

For other articles in this series see How Humans Think: Five Mental Shortcuts and How to Make Right Decisions in the Age of Uncertainty

How to Make Right Decisions in the Age of Uncertainty


The threats and opportunities in the Age of Uncertainty

We live in the Age of Uncertainty. Individuals, families, communities, social groups, governments, businesses, international organisations—we all live in an environment much different from the one we were used to, and increasingly so. Its main feature is complexity—defined by constant changes that occur rapidly and simultaneously in various dimensions, abundant but fragmented information, and the unpredictability of future (even immediate) developments.

The Age of Uncertainty challenges us. It poses multiple and unforeseeable threats but also offers unprecedented opportunities. Just look around: the rate and scale of the demise of big companies, transnational corporations thought to be ‘well-established’ in the market for decades, is increasing exponentially. But so are the rate and scale of new, incredibly successful entrants. Consider Facebook, Google, eBay. And it is not only tech: think of companies like Uber—at a $50 billion valuation, the on-demand taxi company is more valuable than at least 70 percent of the Fortune 500. Opportunities are everywhere, in any sector, new or old.

“In complex situations, an opportunity often avails itself in totally unexpected places, directions and forms that are not possible to discover or predict by logical computation (whatever artificial intelligence or big data employed), historic cases or trend extrapolation.”

The task, therefore, is making the best of the opportunities the Age of Uncertainty offers. In complex situations, an opportunity often avails itself in totally unexpected places, directions, and forms that cannot be discovered or predicted by logical computation (whatever artificial intelligence or big data is employed), historic cases and experiences, or trend extrapolation. This is where curiosity, equipped with unorthodox, flexible approaches and experimentation, can do much better than ‘old school’ ways—after all, we do live in totally different times.


Is it enough to answer the tried-and-tested questions?

All businesses, whether start-ups or established corporations, use the same set of questions for strategic planning: What business are you in? What is your purpose? Who is your customer? What is your value proposition? Who are your competitors? What is the bargaining power of suppliers and customers? What are the risks?

Anyone who has written a business plan or participated in their firm’s strategic planning exercise is familiar with these questions—all strategy frameworks start and end with them. They all remain relevant today.

True, posing and answering the right kind of questions helps. You will also do well by asking the right questions regularly. My question, however, is this: is it enough to keep asking the same questions, however tested and proven useful, in the Age of Uncertainty?

I think a few very important questions are missing from this list. They concern your decision-making. Because at the end of the day, it is how you answer those questions that matters, not the mere fact that you pose the right ones.

In a series of posts I will share my favourite decision-making processes and methods, which I think are best suited to complex situations where decisions have to be made under time pressure and information constraints. I have tried them, in various modifications and combinations, over many years of working in academia, the private and public sectors, in headquarters and in post-conflict field settings, in well-ordered and highly volatile situations.

And note that these methods are not ‘fixed’ as in a textbook—I keep working on them, reviewing, re-applying, and testing all the time. They combine old-school, logic-based approaches with experimental, recursive processes and intuition-based decision rules. Moreover, the experience of those working in emergency situations has shown that this kind of decision-making strengthens an individual’s and an organisation’s resilience and adaptive capacity and thus increases their chances of succeeding in the Age of Uncertainty, complexity, and wicked problems.


How do you make decisions?

The way we make decisions matters. It matters greatly. We all know this too well, from our own experiences and those of others—individuals, companies, governments. And we have learned from both successes and failures (although, to be honest, failures are bitter but better advisers).

So, how do you make decisions? Below is a list of questions I think you will find useful to start with. It is not exhaustive, nor is it intended to be—my role here is to give you a hint, to inspire you to think creatively. After all, this is the essence of the very approach I stand for. Here you go:

  • How do you strategize: Do you fit your plans to existing data or make data work for your plans? Do you set long-term plans that are set in stone, or are they subject to regular revision?
  • How do you analyse: which formal and informal processes do you follow? Which methods do you apply at different steps?
  • What data do you use: external or generated by your own activities; only statistical, or also qualitative narratives? How do you store it and make it readily available (i.e. is it useful for daily decision-making)? How expensive is it to collect, process, store, or share?
  • What are your rules (or thresholds) for stopping the search for a decision, making the decision, and moving into action upon it?
  • Do you always look for perfect decisions? What does the ‘best decision’ mean for you (or your company)? How do you review your decisions, and what are the checks and balances?
  • How flexible are you/your business in adopting a test-and-trial approach, in employing adaptive, tactical approaches to resolving big and small issues?
  • Are creativity, risk-taking, and learning from failures part of your company’s culture? Are these qualities encouraged by the management/stakeholders, or punished (in real life, not in the company’s policies and formal statements)? What incentives does the company use (either way)?

Think about it. Whether as an individual, a family, a team, or a company—you will find the very process beneficial. This I can guarantee. It is the starting point of a never-ending and highly entertaining journey. I will try to be of use, especially in your initial steps, by sharing my experience and knowledge.

For other articles in this series see Five Simple but Powerful Mental Shortcuts

Five Simple but Powerful Mental Shortcuts

In this series of articles I share ideas on how we make decisions as individuals, groups, and organisations and offer my advice on how to design the decision-making processes that combine traditional methods of analysis with more flexible, adaptive approaches—to be better suited to the problems we face in the Age of Uncertainty. Applications are numerous—from individual choices, to business management, public policies, security and conflict management, and international development assistance. In the posts to follow I will share some practical solutions I have developed for various situations or practice domains and issues. I hope that you find them useful—they are intended to be.


Decisions We Make

In everyday life we make most of our decisions intuitively (some say automatically), without involving much computation. For instance, if the situation seems familiar we will tend to draw on our past experiences. We usually use simple techniques, such as rules of thumb (or heuristics) to make judgements in such situations.

Moreover, we employ the same techniques when making judgements under uncertainty—when we know little about an object or have limited past experience, if any. This allows us to save time when navigating situations that are not too complex, or when we are under time constraints with a limited amount of information available. In most cases it works well.

However, behavioural scientists have found that our intuitive decision-making suffers from a number of biases (systematic errors) which hinder our ability to choose an optimal option. They also claim that these biases are predictable (and thus manageable). Drawing on years of experimentation, psychologists Amos Tversky and Daniel Kahneman developed prospect theory back in the 1970s to explain how we make certain biased (or statistically flawed) assumptions, especially when weighing the probability of something in the face of uncertainty (years later, Kahneman was awarded the Nobel Prize in economics for their research).

Their method is frequently referred to as ‘heuristics-and-biases’. I am sure many of these are familiar to you, and not only from individual experience but also from that of groups, business organisations, and government bodies (we remain humans at work, don’t we?):


Representativeness

We usually rely on assumptions based on stereotypes when judging the probability of an object or event belonging to a certain category. For example, we judge someone’s behaviour by the degree to which their actions are representative of a particular category. Apparently, so does a jury in a court of law when categorising the alleged crime of a defendant. How many of us, as customers or in an entrepreneurial capacity, have been fooled by someone’s behaviour or by the appearance of a firm’s business premises—simply because they ‘represent’ our idea of what a successful businessman or business should look like? Richard Thaler, behavioural economist and co-author of ‘Nudge’, gives an example: ‘People can nudge you for their own purposes. Bernie Madoff [the Ponzi scheme fraudster] was a master in the art of winning people’s confidence and taking advantage of it. I don’t think he needed to read my book. I think he could have written a better version of it himself.’

Stereotypes are powerful; just look around to see how they influence our inter-personal and inter-group relations in society. Understanding how, through the representativeness heuristic, they influence our judgements is very important – for preventing crime (especially hate crime, which is on the rise both in Europe and in the US), and for improving relations in the workplace and interactions in various public spaces and undertakings.


Anchoring and Adjustment

We adjust from known things (using them as anchors) to estimate unknown things or make predictions. Sometimes the initial value is automatically suggested by the problem’s formulation. What if you defined the problem inaccurately? Or the value of the similar case you rely upon is too specific to serve as an adjustable reference for your case? As experiments have shown, we also tend to use whatever information is available (often what comes to mind first) without critically examining it for relevance. Whatever adjustments you make afterwards won’t help, because the baseline is already incorrect.

Have you noticed this when negotiating the price, terms, and conditions of a deal (be it a salary, loan conditions, or a business contract)? Whoever first suggests a value (numerical, percentile, or monetary, as relevant to the topic) sets an anchor, and the rest of the discussion will in fact be a bargaining exercise around this very value, however distant it is from the other side’s initial idea. Try it, if you haven’t done so before, and you will see that it works. I see it as a narrow application of the saying that ‘who sets the agenda controls the outcome’: in this case you control the outcome by establishing a reference value that is favourable to you.



Availability

We use mental shortcuts to recent events or facts, as they first come to mind, when making inferences. This applies equally to facts evoking positive and negative emotions. We perceive them as more familiar and common (or rare, depending on perspective). And the more we are occupied with these facts, the more convinced we become. You see a car crash—you start driving cautiously (at least for some time). You read a shocking story (with pictures) about food poisoning from certain products—you will avoid consuming them. A couple of your former colleagues lose their jobs, and you immediately think unemployment in your sector is high.

Another well-known example is the lottery. Do you know how many people are encouraged to buy lottery tickets, or to spend far beyond their normal limit, immediately after they hear of X winning forty million ‘just like that’? And this effect occurs contrary to the logic of rationality: after an unexpectedly large jackpot is won, the prize resets to a lower level, so the chance of winning that much actually decreases.
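The ‘contrary to rationality’ point can be illustrated with a toy expected-value calculation. The ticket price, odds, and jackpot figures below are entirely hypothetical:

```python
# Toy expected-value comparison: the same ticket bought before a record
# jackpot is won versus right after it resets. All figures are hypothetical.
TICKET = 2.0
P_JACKPOT = 1 / 10_000_000  # assumed odds of hitting the jackpot

def expected_value(jackpot, p=P_JACKPOT, ticket=TICKET):
    """Expected profit of one ticket: probability-weighted prize minus price."""
    return p * jackpot - ticket

print(round(expected_value(40_000_000), 2))  # before the big win: 2.0
print(round(expected_value(5_000_000), 2))   # after the reset: -1.5
```

The availability of the recent win drives ticket sales up exactly when the expected value of playing has dropped.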

Entrepreneurs know it well—the news is everywhere that Y got very rich, and quickly so, by producing/serving/selling something; we immediately think of it as the most promising opportunity for our business growth, and thus we overestimate the likelihood of success and overspend. Similarly, the availability bias affects investment decisions: for example, in the years immediately following the financial crash of 2008, investors’ persistent perceptions of a dire market environment caused them to view investment opportunities through an overly negative lens and thus avoid risks in favour of ‘safe’ investments, no matter how small their returns.


Overconfidence and Optimism

We tend to overestimate our own strengths and capacities while underestimating potential barriers. This is partly explained by our memory leaning more toward positive experiences than failures (we work hard to forget, block, or at least portray in rosier tones past unpleasant experiences, don’t we? This is no good for learning, though). Therefore, when planning or estimating scenarios, we refer to best-ever achievements from our internal archive. It is quite natural for us—as psychologists claim, we assess ourselves by our best intentions, while others judge us by our worst deeds.

This selective reference to positive examples makes us prone to be overly optimistic in situations where a more cautious approach is advisable. It frequently results in unrealistic plans (in terms of time, effort, and cost) which then have to be revised, sometimes repeatedly. Been there? Building a vacation house, developing or implementing a project, weighing plans to enter a new market? It applies to all endeavours, big and small. Take, for example, the Sydney Opera House. Budgeted at an initial cost of $7 million, it ended up costing more than $100 million and took more than a decade to construct (which makes it one of the most expensive cost blowouts in the history of mega-projects around the world).


One useful method of tackling the optimism bias is to review the initial plan equipped with the findings of a risk assessment. Scrutinize the timeline, budget, supplies, etc., item by item, against quantified operational, political, technological, customer, and other relevant risks. You will be greatly surprised to see how the ‘shining’ numbers shrink and the plan immediately drops from ‘highly advantageous’ to some lower category, if it is not abandoned altogether. I have seen projects that, as a result, went from a confidently positive Net Present Value (the NPV of a cost-benefit analysis) into negative, in fact prohibitive, territory.

Well, I think it is better than being fooled by your (own or your team’s) overconfidence. You can still go ahead with implementation, but with eyes open this time around. The UK Treasury even uses software to mitigate optimism bias (especially in the government’s infrastructure and capital investment projects).
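As a sketch of this risk-adjusted review, consider a toy NPV calculation. The cash flows, discount rate, and risk adjustments below are hypothetical, purely for illustration:

```python
# Toy illustration of reviewing an over-optimistic plan against quantified
# risks. All figures (cash flows, discount rate, adjustments) are hypothetical.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows, starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

RATE = 0.08  # assumed discount rate

# Optimistic plan: invest 100 now, earn 30 per year for five years.
optimistic = [-100, 30, 30, 30, 30, 30]

# After the risk review: costs overrun by 30%, revenues cut 20% and delayed a year.
adjusted = [-130, 0, 24, 24, 24, 24]

print(round(npv(RATE, optimistic), 1))  # 19.8  -> 'highly advantageous'
print(round(npv(RATE, adjusted), 1))    # -56.4 -> prohibitive, in fact
```

The same plan flips from a confidently positive NPV to a clearly negative one once each line item is discounted for its quantified risks.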

Loss Aversion

Or take the phenomenon known as loss aversion. When evaluating options and assessing their potential benefits and risks, we weigh losses more heavily than gains. It appears that losing things we possess hurts about twice as much as equivalent gains make us feel good. Therefore, if the values of a potential win and loss are in the same range and the probabilities of losing and winning are equal, chances are high that we will opt not to take the risk. Psychologists like this mental bias because it is easy to demonstrate in controlled laboratory environments: simply toss a coin. Try it yourself. Imagine you were offered a gamble on the toss of a coin (that is, a 50/50 chance) in which you might lose $50. What would be your required payoff? I bet you would demand much more than the potential loss—most probably something around $100.
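Loss aversion is often modelled with a value function that weights losses roughly twice as heavily as gains. Here is a minimal sketch of the coin-toss example, assuming a simple piecewise-linear value function with a loss-aversion coefficient of 2 (both the functional form and the coefficient are illustrative simplifications):

```python
# Minimal sketch of loss aversion: v(x) = x for gains, v(x) = LAM * x for
# losses. The coefficient and linear form are simplifying assumptions.
LAM = 2.0  # losses weigh roughly twice as much as equivalent gains

def value(x, lam=LAM):
    """Subjective value of an outcome x (negative x is a loss)."""
    return x if x >= 0 else lam * x

def gamble_is_acceptable(gain, loss, p_win=0.5, lam=LAM):
    """A 50/50 gamble is taken only if its expected subjective value >= 0."""
    return p_win * value(gain, lam) + (1 - p_win) * value(-loss, lam) >= 0

print(gamble_is_acceptable(gain=60, loss=50))   # False: +$60 doesn't offset -$50
print(gamble_is_acceptable(gain=100, loss=50))  # True: roughly 2x payoff needed
```

With a coefficient of 2, the minimum acceptable payoff for a possible $50 loss comes out at $100, matching the intuition above.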

This in part explains another phenomenon, known as the sunk cost fallacy: the more we invest in something, the harder it is to abandon it (and we obviously find many reasons to justify our ‘rational’ decision). Just recall your own experiences. A very simple one: you drive to an outlet mall to buy a certain product but don’t find it there; you buy something else instead. Of course you justify the purchase by saying you needed it, but in reality you just don’t want to go home empty-handed after driving so far.

Or think how many companies you know of that suffered heavy losses by stubbornly refusing to give up a project or product in which they had invested considerable funds. One famous example: the British and French governments continued funding their joint Concorde venture even when it was crystal clear that sales of the airplane would fall short of the returns needed to keep the business going. Or think of overseas wars: ‘We have invested too much in this campaign for too long. Too many lives lost, too much money spent. We simply cannot stop now; it would amount to surrender, so we must fight to the victorious end.’


…and many others

There are many other examples of our mind’s workings that will amaze you. One of my favourites is the halo effect: our tendency to assume that if product X is good for doing Z, it is perfectly suitable for Y and W, too. Or, if product S of a certain company is good, its other products N, M, L, and W (even from a totally different product line) will be equally good. Or, because a certain person is good at doing D, they will be good at doing B, A, and C (or the reverse—because they are bad at doing A, they will be bad at B, C, and D). Does this sound familiar from the workplace?

*                  *                  *

Although everyone seems to agree that we do use various mental shortcuts to make decisions easier, not everyone accepts that our intuitive decision-making is deficient (compared with computed solutions) and erroneous. Many management practitioners rely more on so-called ‘naturalistic’ methods, which make the best of our built-in cognitive capabilities, and rightfully so. With regard to big data and sophisticated software, some argue that an abundance of information is costly and often confusing, while much better decisions (especially under pressing circumstances) are made with less but more relevant information.

Both sides have a point, and I think we should exhibit more flexibility in adopting a variety of analytical methods and using them in a complementary manner. Remember: it is not the quantity or even the quality of data that matters, but the quality of the decisions we make upon it.


How to Identify Lone Wolf Terrorists in Three Decision Steps

Security Brief       

The terrorist threat has many facets and instances. Some methods and actors diminish over time, giving way to others; the previously less-known become prominent. Forms and methods of terrorism once thought insignificant or least dangerous keep evolving to become among the most damaging. This is the nature of terrorists—they are in constant search of system vulnerabilities, move fast, and are highly flexible and adaptive to the changing environment: finding new organisational forms and execution methods, recruiting new members, and inspiring other, independent perpetrators.

This is the case with the terrorists known as lone wolves—individuals, or small groups of individuals, who commit an attack inspired by a terrorist group and its ideology without being directed or materially supported by it.



– Attacks carried out by lone wolves (individuals and small groups) have become a dominant vehicle for terrorism in Western countries.

– Identifying, tracking, and eventually stopping lone wolves is even more difficult than in the case of terrorist organisations.

– Traditional methods of surveillance, data collection, and processing appear ineffective against this category of perpetrators. The same is true of traditional decision-making methods.

– A flexible decision-making process using a small set of formalised methods based on sense-making, professional intuition, and simple but powerful decision rules (known as heuristics) offers an opportunity worth considering and testing.

The threat

Just five years ago, the following statement was quite typical of terrorism experts: ‘But should the American public panic over this shadowy enemy? Is the lone wolf really so scary after all? Not if its record of lethality is any indication. The four lone wolf attacks since Sept. 11 managed to kill just one civilian… And the perpetrators used weapons no more powerful than a gun.’

Today, the situation is very different, especially in Western countries. Lone wolves are identified as a ‘growing threat’, and rightfully so. As stated in the Global Terrorism Index report, ‘the majority of terrorist attacks in the West are not carried out by well-organised international groups. Instead, the terrorist threat in the West largely comes from lone wolf terrorism. … These types of attacks account for 70 per cent of all deaths in the West from 2006 to 2014.’ And as evidenced in the global terrorism databases, the trend keeps expanding geographically, diversifying in terms of weapons and perpetrators, and intensifying in terms of the lethality of attacks. The massacre carried out by a lone wolf terrorist in Nice using a truck was described by specialists as the ‘weaponization of everyday life’ by terrorists, presenting ‘insurmountable challenges for security officials’. Terrorist organisations, first and foremost ISIL, are taking advantage of this, increasingly ‘quick-radicalising’ vulnerable individuals and using lone wolves for their purposes: between October 2015 and August 2016 alone, this category of terrorists carried out 20 attacks in Western countries.

The challenge

To prevent a terrorist attack, security agencies must have enough information, received and processed far enough in advance, to allow them to stop it effectively. Unfortunately, this is rarely the case—the information is always incomplete, not always verifiable or reliable, and there is usually not much time at their disposal. Even more difficult is preventing attacks carried out by lone wolves, because the perpetrators are often unknown, untraceable, and as a result highly unpredictable.

Decision-making in intelligence comprises the same elements as any other decision-making process governed by a mix of search and decision rules. The challenge is that by broadening the search parameters, specialists get an enormous amount of data on millions of people who meet the (numerous) predefined criteria, which prolongs processing and makes the exercise laborious, expensive, and dependent on sophisticated computation. There is no such luxury in security and counter-terrorism (even if funds and other resources allowed it)—things move very fast and may change direction at any moment, with totally new actors and methods employed. Therefore, a lot depends on the efficiency of the methods used for collecting and processing the information, and especially for the final stage—making decisions and acting upon them.

The proposal

Intelligence officers closely follow certain people they believe represent a real terrorist threat (i.e. terrorism suspects). Daily screening of information received from various sources also gives early warning signals of suspicious behaviour by many other individuals. The purpose is to find potential ‘new entrants’ who could be flagged for closer surveillance. But first you have to identify them. The problem is that the vast majority of these random signals are ungrounded or irrelevant, yet all must be assessed—pretty much like seeking a needle in a haystack. Not an easy task even in the case of organisation-affiliated terrorists, let alone a loner.

The approach I propose aims at helping counter-terrorism specialists handle this initial screening and assessment of enormous datasets relatively quickly, using a formalised but adjustable process that is open to experimentation, while arriving at accurate inferences. The approach is based on the notion of ecological rationality—that is, to arrive at more adaptively useful outcomes, decision-making mechanisms should exploit the structure of the environment and the information it offers.

Process requirements

Requirements I set for effective and efficient decision making to identify lone wolves are grouped in terms of input, information processing, and output characteristics:

Input

  • Limited information: ability to perform using limited and reasonably accurate data;
  • Time constraint: fast and computationally easy, so that a large number of candidates can be screened and evaluated;
  • Resource constraint: the scope notwithstanding, can be undertaken by an individual or a small team of professionals.

Processing

  • Flexibility: rules allow using different cues (factors, features, aspects, criteria) interchangeably, assigning them different values, and ordering them in alternative ways;
  • Focus: applicable to evaluating both single candidates and groups of candidates;
  • Compatibility: ability to judge an individual candidate (or candidate group) without comparison to other candidates or reference to baseline (historical) data.

Output

  • Operational usefulness of decision: a deterministic decision at the output (i.e. telling what to do, take-the-best);
  • Certainty of decision: discharged from the ambiguity of input information; minimal interpretation of the decision (i.e. pointing to one selected action);
  • Overall quality of decision: good enough although not optimal (i.e. the accuracy rate is acceptable, enough to act upon).

Decision making in three steps

Step One: Defining the search cues

The objective of the initial step is to define search cues—the key characteristics of an object assessed for the purpose of the decision. This is done by identifying distinct features which a candidate must possess in order to be considered for detailed screening. There are two sequential tasks under this step.

First task: Set key search cues

Key features serve as criteria to help assess the candidates in the decision-making process and shall be grounded in some (generally or locally) accepted definition of the target population—in this case, the definition of lone wolf terrorists. I will use the general definition offered by the National Security Program of the National Security Critical Issue Task Force (NSCITF; 2015): ‘The deliberate creation and exploitation of fear through violence or threat of violence committed by a single actor who pursues political change linked to a formulated ideology, whether his own or that of a larger organization, and who does not receive orders, direction, or material support from outside sources.’

It also offers a clarification very useful for defining the criteria and categorising lone wolves: ‘Absent violence or the threat of violence, the individual may hold extremist or radicalized views, but he or she is not a terrorist. Absent political motivation, an attack would more closely resemble traditional forms of crime, organized violence, or hate crimes. Absent the individual acting alone, the attack would fall under the traditional definition of terrorism that encompasses violence conducted by organized terrorist groups.’

From this definition I draw five key cues/characteristics. Note that a candidate must meet ALL of them in order to qualify for flagging (follow-up close monitoring). The search cues are:

  1. Single, lone actor
  2. Driven by a political aim
  3. Intends to use, or is predisposed to using, violent means
  4. Inspired by an ideology
  5. Not affiliated (by chain of command or supply) with a terrorist organisation
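Operationally, the five cues form a conjunctive rule: fail any one, and the candidate is out. A minimal sketch of that rule follows; the field names are my illustrative assumptions, not part of the NSCITF definition:

```python
# Hypothetical sketch: a candidate qualifies for flagging consideration
# only if ALL five key cues hold. Field names are illustrative.

KEY_CUES = [
    "lone_actor",            # 1. single, lone actor
    "political_aim",         # 2. driven by a political aim
    "violent_intent",        # 3. intends/predisposed to use violent means
    "ideology_inspired",     # 4. inspired by a formulated ideology
    "unaffiliated",          # 5. no command/supply link to a terrorist organisation
]

def meets_all_cues(candidate: dict) -> bool:
    """Return True only if every key cue is satisfied (conjunctive rule)."""
    return all(candidate.get(cue, False) for cue in KEY_CUES)

profile = {"lone_actor": True, "political_aim": True,
           "violent_intent": True, "ideology_inspired": True,
           "unaffiliated": True}
print(meets_all_cues(profile))  # True: all five cues hold
```

An organisation-affiliated attacker, for instance, would fail cue 5 and be routed to conventional counter-terrorism screening rather than this process.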

Second task: Establish indicators

To support the decision making, we have to introduce a set of indicators—the signs that help us make best use of the information available. They may be categorical (yes/no), descriptive or numerical. In any case the decision makers will have to use judgement based on arbitrarily assigned weights and values. Indicators may be formulated as questions and need not be grouped under each of the five core features. Below is a set of indicator groups I suggest as an initial shot; it is illustrative, by no means prescriptive.


Demographic profile

  • Age group;
  • Permanent residence area;
  • Sex;
  • Social/ethnic/religious/sectarian background;
  • Employment status.

*Note: This group of indicators is optional for identifying lone wolves—we don’t know much about them. Most are killed in the attack, and other candidates may serve different ideologies, which makes profiling difficult. For example, most attacks carried out in the UK in recent years were Northern Ireland-related, not al-Qaeda/ISIL-inspired. There might be other, ultra-right-motivated candidates who haven’t surfaced yet. Lone wolves may have different political aims, and each may have more than one supply group, etc. A social background check is applicable only in countries where there is a single category of attackers, as in Israel, where the IDF tracks potential attackers preferentially among one group—young Palestinians living in certain villages. However, this is rather an exception, and most countries face terrorism threats coming from a much broader background.


Personal records

  • Previous security record (been spotted before, evaluated as a terror suspect but dropped);
  • Medical record (has undergone psychiatric treatment);
  • Behavioural record (visits to a psychologist, e.g. a school counsellor);
  • Criminal record (recent conviction, release within the last year);
  • Family/personal problems (divorce, unhappy marriage, debt).

Warning signs

  • Internet interests (search topics; frequent recent visits to terrorist sites);
  • Social media (friends/contacts; posts that glorify terror, express suicidal thoughts, or express intense hatred or intent to attack);
  • Social behaviour (e.g. noticed making hate-inciting statements in public within the last three months; recently spotted attending gatherings where terrorists and violent attacks are praised and implicitly or even explicitly encouraged);
  • Change of pattern (car rental: hasn’t driven a car in a year, but suddenly rents an unregistered vehicle; and/or apartment/house rental: suddenly moved to live in an area where he/she hasn’t been noticed to have any business or personal interest).

Access to weapons

  • Has a connection to gun smugglers (relative, friend, neighbour);
  • Has intensified such contacts (or established them if none existed before); has been seen with them in the last three months.

The output of this step is a set of cues that will be used in the next two steps to aid the decision making. The quality of this output is instrumental for obtaining the best possible results in the end.

Step Two: Categorisation

Create categories tailored to the search goal

Categorisation targets the group of ‘new entrants’ (considering that a person has been spotted on the radar screen in a recent period, say, a three-month slot). The candidates might be total novices (unknown/not in the system) or ‘re-entrants’ (those who have been assessed before as suspects but dropped/not flagged for follow-up). The latter group is included because their characteristics may have changed since the last assessment, or they may simply have been evaluated incorrectly in a previous try or tries by the intelligence analysts (they may or may not be in the system records).

Creating categories is important for two reasons:

First, it structures the decision process and saves time. In our case the task is to create one category that has distinct features (based on the five core criteria)—this helps assign to it those candidates who meet the accession value predefined by decision makers for this category. All others are dismissed right away. This greatly decreases the workload for further analysis. I will illustrate this with an example from social choice practice.

Think of elections in a two-party system. Candidates belong to either Right or Left. The goal is to choose one candidate as the winner, but a number of them are running from both camps. The standpoints of Right and Left are distinct from each other. Because candidates of the same party should share fundamental views about political issues, when a political standpoint is the most important feature of each candidate, plausible preferences would be one of two: (a) each candidate of the Right is preferred to each candidate of the Left; or (b) vice versa. Therefore, opting for one party from the outset narrows the search focus and enables a decision maker to concentrate on individual candidates within the selected subset.

Second, it contributes to the accuracy of judgement. Correctly created categories help make the right choices among otherwise randomly presented individual candidates/options. This is achieved by decomposing the choice problem into smaller problems. The example below illustrates the point.

Suppose that an interview panel screens applications to recommend a shortlist for further consideration (tests, interviews, etc.). The candidates’ résumés have been distributed among the panel members, but there is no agreement on the selection cues, except for a job description which vaguely sets the requirements (with no precise metrics attached) and thus serves for general guidance only.

The panel members send their shortlists of six candidates to an HR representative (also a panel member), who has to calculate the outcome and offer the final list. It appears that there are ten top scorers, but given the limit the HR member selects only the six highest-ranking candidates, leaving the others aside. If we look at the list of all ten candidates, we will see that they were supported by panel members as follows: A – six; B, O – five each; J, L, G – four each; F, W – three each; and Z, X – two each. Therefore candidates A, B, O, J, L and G were selected.

Now, if we define priority selection criteria more precisely (for example, two primary criteria: recent experience in the region of minimum 5 years, and recent work in a similarly senior position dealing with the same problem for minimum 10 years), then we have two subsets—{a} those who meet these two criteria and score well on the others; and {b} those who fail one or both of the primary criteria but score well on the other criteria. The team’s preference is subset {a}, which contains all candidates but L, O, G and Z. If they choose from this preference category, then the initial choices A, B and J will be joined in the shortlist by F, W and X as more suitable candidates, not by L, O and G.
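The arithmetic of this shortlist example can be checked with a short script; the candidate names and vote counts are taken directly from the example above, while the code itself is only an illustrative sketch:

```python
# Sketch of the shortlist example: ten top scorers with their vote counts.
votes = {"A": 6, "B": 5, "O": 5, "J": 4, "L": 4, "G": 4,
         "F": 3, "W": 3, "Z": 2, "X": 2}

# Naive shortlist: the six highest vote counts, ignoring primary criteria.
by_votes = sorted(votes, key=lambda c: -votes[c])  # stable sort keeps ties in order
naive = by_votes[:6]

# With the two primary criteria applied first: L, O, G and Z fall into
# subset {b}; all remaining candidates form the preferred subset {a}.
subset_b = {"L", "O", "G", "Z"}
subset_a = [c for c in by_votes if c not in subset_b]
refined = subset_a[:6]

print(naive)    # ['A', 'B', 'O', 'J', 'L', 'G']
print(refined)  # ['A', 'B', 'J', 'F', 'W', 'X']
```

The categorisation step changes half of the shortlist: F, W and X displace L, O and G, exactly as argued in the text.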


The decision method we use at this step is elimination. Each candidate is quickly assessed against cues ordered in a certain sequence (usually by descending importance). A candidate who doesn’t pass a cue’s cut-off value is dropped. The fast-and-frugal decision tree heuristic is the most appropriate tool for this exercise. It can be designed in various shapes to meet the decision maker’s preferences; two flowcharts applicable to our case are represented in Figure 1.


As is evident from Figure 1, the fast-and-frugal tree allows enough flexibility and room for adjustment and trial-and-error in order to find an optimal pattern and arrive at the best available choice, given all the limitations imposed by the environment. Decades of research on this and other heuristic decision methods, based on the observation of people working in extreme conditions (the military, fire-fighters, nuclear power plant operators, battle planners, etc.), has shown that the method produces robust and accurate results. Moreover, there are examples of applying heuristics and fast-and-frugal methods in security analysis (such as conflict early warning).
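To make the elimination logic concrete, here is a sketch of how such a tree operates: cues are checked in a fixed order of importance, and the first failed cut-off ends the assessment immediately. The cue names, their order and the cut-offs are my illustrative assumptions, not those of Figure 1:

```python
# Illustrative fast-and-frugal tree for Step Two: each question either
# exits with "drop" or passes the candidate to the next, less important cue.

def fast_frugal_screen(candidate: dict) -> str:
    # Cue 1: any recent warning signs at all?
    if candidate.get("warning_signs", 0) == 0:
        return "drop"
    # Cue 2: any evidence of access to weapons?
    if not candidate.get("weapons_access", False):
        return "drop"
    # Cue 3: do the accumulated warning signs cross the cut-off?
    if candidate.get("warning_signs", 0) < 2:
        return "drop"
    return "high-risk"  # survived every cue: assess further in Step Three

print(fast_frugal_screen({"warning_signs": 3, "weapons_access": True}))  # high-risk
print(fast_frugal_screen({"warning_signs": 1, "weapons_access": True}))  # drop
```

Because most candidates exit at the first or second cue, the bulk of the dataset is dismissed after one or two cheap checks, which is what makes the method fast and frugal.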

The output of this stage is a limited number of candidates included in an initial high-risk category for further assessment in Step Three.

Step Three: Assessment and final decision

Now that we have narrowed our search, we can take a closer, final look at the candidates left in the potentially high-risk category. This is done by evaluating each candidate against the full set of cues (criteria), but this time assigning values to each cue and weighting them to distinguish by significance. A cut-off value on the end output allows the agents to conclude the search with an effective decision taken with regard to each candidate considered.

There are various methods which can be used for this exercise. My favourite is the multi-criteria model, for it allows enough flexibility (assigning and changing cues and indicators, experimenting with various values and weights, etc.), which is a necessary condition for decision making in complex situations with limited and imprecise information input.


It can be used to compare and choose among multiple candidates, but it is applicable to assessing an individual candidate as well. Figure 2 depicts the flowchart of the process suggested for Step Three. It is an iterative process, in which decision makers go back and forth, reconsidering and adjusting the model’s search parameters.

For suspected lone wolves, the decision is taken on a case-by-case basis. Once a candidate is assessed, his/her total weighted score is checked against a predetermined threshold. There are only two decisions at this point: either Flag (meaning follow up with closer surveillance) or Drop (cancel further assessment and make a note in the system records). Figure 3 shows what the final matrix may look like.
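A minimal sketch of such a weighted-score decision follows. The cue groups, weights, values and threshold are all illustrative assumptions; in practice they would be set, tested and revised by the analysts themselves:

```python
# Hypothetical Step Three scoring: each cue group gets a value in [0, 1]
# and a weight; the weighted total is compared with a predetermined threshold.

WEIGHTS = {
    "warning_signs":   0.35,
    "weapons_access":  0.30,
    "personal_record": 0.20,
    "demographics":    0.15,
}
THRESHOLD = 0.6  # cut-off set in advance by decision makers

def decide(scores):
    """Return (total weighted score, 'Flag' or 'Drop')."""
    total = sum(WEIGHTS[cue] * scores.get(cue, 0.0) for cue in WEIGHTS)
    return total, ("Flag" if total >= THRESHOLD else "Drop")

total, decision = decide({"warning_signs": 0.9, "weapons_access": 0.8,
                          "personal_record": 0.4, "demographics": 0.2})
print(round(total, 3), decision)  # 0.665 Flag
```

Because the weights and threshold are just parameters, the model supports exactly the kind of iteration Figure 2 implies: analysts can rerun the same candidates under adjusted values and compare the resulting Flag/Drop splits.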


Final output

Those candidates who are ‘flagged’ may represent, as judged by decision makers, a potential threat as lone wolf terrorists. No probabilities are assigned, but this categorisation means that the candidates will be closely monitored for some (predetermined) period of time. Further decisions would be made upon the expiry of the trial period, depending on the candidate’s behaviour and additional information. The added value of the proposed approach is that the decision-making process is simplified and accelerated while maintaining high accuracy of inferences. It does not require big resources or extensive data; instead it relies on a well-established, formalised and flexible process, on heuristic decision methods and, above all, on the professional intuition and judgement of intelligence/counter-terrorism experts.


About the author: Dr. Elbay Alibayov is an international development professional specialising in state-building and political processes in conflict-affected situations. Most recently, he has worked in Baghdad assisting the Iraqi Government on a range of administrative initiatives and policy reforms. Before that, he helped build local governance structures and capacity through community-based initiatives in rural Afghanistan. In the course of eight years he worked in Bosnia and Herzegovina, where he held various positions in the field (starting as head of the field office in Srebrenica) and at headquarters; designed, implemented and oversaw a broad range of strategies and local and nation-wide initiatives; and chaired and participated in the work of civil-military groups and political coordination boards at all levels.

Introducing Public Policy version 2.0

Growing up as teenagers back in the 1960s-70s, my generation believed that the world after the year 2001 would be totally different. In a way this turned out to be true – along with technological advancements we could not have dreamt of at the time, the political, security, economic, environmental and societal problems our planet faces today are not only unprecedented but go beyond comprehension in their nature and severity.


The fact that in the last ten years a series of ‘second versions’ have been introduced and commonly accepted—such as Web 2.0, Enterprise 2.0, Governance 2.0, Globalisation 2.0—suggests that we have come to realise that the world around us and the global processes have changed in categorical, qualitative terms rather than merely quantitative ones. And this ‘version two’ trend is not limited to the development of the worldwide web and the use of social software platforms; it reflects fundamental changes in the way we interact with each other as individuals, groups, states and societies, and in how we cope, collectively, with the complex and unpredictable world of the twenty-first century.

Still, the change itself is not as surprising as our inability to cope with it. The Financial Times editor, Lionel Barber, stressed this point when describing the global trends under the ‘Globalisation 2.0’ banner: ‘[n]ational governments are desperate to regain a measure of control’ over the mounting problems posed by global processes. [1] It looks like our political and economic institutions were not prepared for this change, and are now struggling in hesitant attempts to adjust. Francis Fukuyama identifies the problem (with regard to democracies) as one of ‘political decay’ and puts his diagnosis as follows: ‘The failure of modern democracies comes in many flavours, but the dominant one in the early twenty-first century is probably state weakness: contemporary democracies become too easily gridlocked and rigid, and thus unable to make difficult decisions to ensure their long-term economic and political survival.’ [2]

The fundamental insight offered by prominent thinkers of the day is that we (as humankind, at all levels, from individuals to institutions) must embrace uncertainty, adapt to it, and evolve and grow stronger with it, instead of pretending that we can predict, measure and even manage the risks (let alone do so without fundamental changes to our no-longer-effective practices). [3]

When it comes to the state, the focus on building its evolutionary capabilities implies that public policy making, as one of its overarching functions, shall be the first to adapt. In the course of the past century, especially its second half, public policy relied heavily on rational choice theory and related models, which use cause-effect and pattern-recognition methods (with sophisticated statistical computation) to explain operational environments with stable settings, known variables and an abundance of available historical data. Nowadays, however, even though it remains relevant and useful, this approach—coupled with a hierarchical model of decision making and a rigid, goal-fixated implementation design—is showing its limitations in offering viable policy solutions in complex, dynamic settings and in effectively addressing emerging problems in various domains. Against this background, vast evidence produced by social scientists and practitioners from various fields of expertise over the last three decades has convincingly demonstrated the benefits of experimentation, evidence-based policy, and flexible, adaptive approaches to decision making. [4]

On a positive note, there is a growing recognition and use, across the world, of new methods of policy analysis and design. As behavioural economist Richard Thaler reports with reference to the Economic and Social Research Council’s 2014 survey, more than 130 countries have utilised behavioural science insights in their policies, while over 50 countries have developed policies influenced by the behavioural sciences. [5] In other words, new (and at times distinctively different but compatible) approaches are already being used, but they need to be recognised by governments as a legitimate choice and, in certain situations, even as the default option instead of Public Policy 1.0 methods.

In this post, therefore, I attempt to outline the key features of Public Policy 2.0, drawing on insights from various fields of knowledge grouped under the umbrella of effective policy making in the twenty-first century. It is a sketchy initial shot, produced with an eye towards initiating a discussion and, ideally, collaboration around this project. I should mention that attempts have already been made to define the ‘version two’ of policy analysis and evaluation. [6]

As a working definition I would suggest the following: Public Policy 2.0 is a proactive, experimental approach to policy making which derives from an appreciation of the complexity and unpredictability of the world, and which is deployed with the aim of enabling states and societies to adapt to the rapidly changing environment, and to evolve and thrive with and within it.


To specify, Public Policy 2.0 rests on the following principles:

— builds bottom-up, where more authority and responsibility is given at the tactical and even ‘limited task’ level, married with a strong coordination and support role at the centre of government;

— employs a trial-and-error approach, testing many ideas simultaneously through small pilots and making sense of the findings (including those from inevitable failures), in order to collect and analyse evidence, learn, and catch up with the changing environment in a timely manner and on the go;

— exercises management methods that rely on experimentation and feedback, are supportive of creativity, and encourage unorthodox approaches;

— maintains an ongoing dialogue between multidisciplinary teams of researchers and practitioners and relies, for implementation, on broad-based in-country and international collaborative networks of partners; and

— recognises structures only as non-rigid and adjustable to the evolving context, and treats both strategic and tactical goals as subject to constant revision.

At this point I see the benefits coming in various ways, namely:

— sense-making: deployment of decision-making mechanisms that, while relying on less data and less complicated computation, allow insights from the social and behavioural sciences to be employed in making reasonable and effective policy interventions, given the limited time and information available;

— decentralisation: puts decision making in the right hands, those who deal with the problems in real time and space; helps build a cadre of experienced, tested managers who are ready to assume responsibility along with authority; and encourages initiative and reasonable risk-taking;

— analysis: opens opportunities for generating more evidence, from traditional experimental and quasi-experimental designs as well as from qualitative and deliberative methods—making policy assessment and evaluation more insightful of the values and aspirations of stakeholders and more relevant to delivering the expected impacts and benefits; [7]

— design: allows designing strategies and policy programmes which combine the traditional integrated approach with a modular architecture—enabling potentially high-risk components to be decoupled from the rest of the programme and creating more opportunities for synergistic effects from the implementation;

— monitoring: by making policy programmes’ objectives subject to continuous examination, revision and adjustment, enables the use of simple but informative methods of relevance, in order to quickly and meaningfully assess the real progress made along the path.

With that said, this proposal does not call for ‘policy reform’, for the dismissal of the present Public Policy 1.0 approach, or for a wholesale shift to Public Policy 2.0 — immediately or in any foreseeable term. Instead, I would advocate their complementarity: a mutually reinforcing, parallel application. The major task at the initial stage will be to demonstrate the benefits (as ever) and to ensure that Public Policy 2.0 methods take their place as equals to those of Public Policy 1.0 when the analytical, implementation-design and managerial aspects of any policy issue are considered.

This is especially relevant to countries undertaking state-building efforts in transition from authoritarian regimes towards democracy—where experimentation is imperative in order to find their own way, and to tailor the practices tested in liberal democracies (practices still in need of further enhancement, as we have seen, along with those yet untested) to their political tradition, culture and present-day circumstances. I believe that international organisations, development agencies and the broader donor community shall place the strengthening of the policy-making capacity of recipient governments at the centre of assistance, and do so while encouraging creativity, innovation and experimentation, so as to enable the most effective and harmonious combination of both Public Policy methodological versions.


[1] From the speech delivered at the FT-Nikkei symposium: Lionel Barber, ‘Globalisation 2.0 – an optimistic outlook,’ Financial Times, 14 January 2016

[2] The quote is from: Francis Fukuyama, The Origins of Political Order: From Prehuman Times to the French Revolution (London: Profile Books, 2011). He elaborates on this topic in more detail in his recent book, the second instalment of the series: Francis Fukuyama, Political Order and Political Decay: From the Industrial Revolution to the Globalisation of Democracy (London: Profile Books, 2014). These two volumes are essential reading for anyone who wants to understand the nature of political processes and to make sense of current developments.

[3] Among those seminal works: Eric D. Beinhocker, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics (Boston, MA: Harvard University Press, 2006); Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (London: Allen Lane, 2007); and Nassim Nicholas Taleb, Antifragile: How to Live in a World We Don’t Understand (London: Allen Lane, 2012)

[4] There is a vast literature—books, articles in academic journals, reports by think tanks—on the benefits of experimentation, evidence-based policy, and flexible management and decision-making methods. These are some of my favourite books: Daniel Kahneman, Thinking, Fast and Slow (London: Penguin Books, 2012); Gary Klein, Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making (Cambridge, MA: The MIT Press, 2009); Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group, Simple Heuristics That Make Us Smart (New York and Oxford: Oxford University Press, 1999); Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (New Haven, CT and London: Yale University Press, 2008); Tim Harford, Adapt: Why Success Always Starts with Failure (London: Little, Brown, 2011)

[5] Richard H. Thaler, Misbehaving: The Making of Behavioural Economics (London: Allen Lane, 2015), p. 344

[6] Agrell and Treverton, in their discussion of policy analysis, draw on an unpublished paper by the economist and public policy scholar Robert Klitgaard, Policy Analysis and Evaluation 2.0 (2012): Wilhelm Agrell and Gregory F. Treverton, National Intelligence and Science: Beyond the Great Divide in Analysis and Policy (New York: Oxford University Press, 2015), pp. 115-135. See also, for an elaborate account of post-positivist policy analysis: Ya Li, ‘Think tank 2.0 for deliberative policy analysis,’ Policy Sciences, 48/1 (2015), pp. 25-50

[7] It is not accidental that in defining Policy Analysis version 2.0, Klitgaard builds on the characteristics of evaluation suited to a world of uncertainty, borrowing from the leading authority on qualitative analysis methods, Michael Quinn Patton: M. Q. Patton, ‘Use as a Criterion of Quality in Evaluation,’ in A. Benson, C. Lloyd, and D.M. Hinn (eds.), Visions of Quality: How Evaluators Define, Understand, and Represent Program Quality: Advances in Program Evaluation (Kidlington, UK: Elsevier Science, 2001), pp. 23-26. In a more recent publication, Patton points to the advantages of qualitative analysis methods (which are highly relevant to policy analysis and evaluation version 2.0, as advocated in this post): ‘Indeed, qualitative evaluation and in-depth case studies were utilization-focused methodological responses to the kinds of evaluation questions stakeholders were asking and the criteria they applied to judge quality of finding: contextual understanding, in-depth analysis, and cross-case comparisons.’ [Michael Q. Patton, ‘The Sociological Roots of Utilization-Focused Evaluation,’ The American Sociologist, 46/4 (2015), pp. 457-462 at 461.]