
Experimentation is the right to make mistakes

This blog was authored by former OPSI intern, Théo Bourgery.

In the context of the OECD’s “Evidence-Informed Policy-Making” conference, I had the opportunity to interview Kaisa Lähteenmäki-Smith and Camille Laporte on the role of experimentation in the drafting of evidence-based policies.

As part of the ‘Evidence-Informed Policy-Making’ conference, co-organised by the OECD’s Directorate for Public Governance and the European Commission’s Joint Research Centre, Piret Tõnurist of the Observatory for Public Sector Innovation ran a workshop on innovation and experimentation. The discussion covered the relationship between the two concepts in the realm of policy-making, and how experimentation can be further developed across national public sectors.

We caught up with two speakers, Camille Laporte from the French Secrétariat général pour la modernisation de l’action publique and Kaisa Lähteenmäki-Smith of the Finnish Prime Minister’s Office’s Government Policy Analysis Unit, who discussed their understanding of, and experience with, experimentation in policy-making.

Both were interviewed separately; the transcript below juxtaposes their answers.

OPSI: The conference brought the words ‘innovation’ and ‘experimentation’ together in the same sentence. What does this mean?

Camille Laporte: Experimentation provides the possibility to test innovation, in turn finding new solutions to new problems. Experimentation is also the right to make mistakes, to fail, and to learn from these failures. This remains a real difficulty in France, where the culture of trial and error is not sufficiently developed, in public administration and the private sector alike.

Kaisa Lähteenmäki-Smith: Bringing experimentation and innovation together adds a more systematic, as well as a more evidence-based, or research-based, aspect to innovation. The Finnish government has long been active in promoting a culture of incremental innovation and pushing public sector actors to start with a simple: ‘okay, let’s start developing this bit and see what comes as a consequence’. Experimentation in the context of innovation means opening the floor to the scientific community and scientific techniques such as RCTs. Consequently, experimentation in innovation raises numerous new questions: how do you measure impacts? What are the indicators? What do innovation indicators actually mean?

OPSI: Experimentation, especially when it comes to innovation in the public sector, involves a great deal of uncertainty – some research areas are entirely novel. How do we deal with such uncertainty from a scientific perspective?

CL: The key is to provide innovators with safe spaces where they do not have to fear uncertainty. Whenever possible we should start with small cases in the field; we must begin small. Once results come in and uncertainty is tamed, we scale up. In the French case I presented today, we started with six small experiments: it is not much, but perhaps in a few years we can scale the study up and increase its impact.

OPSI: Innovation is by nature highly context-dependent, which makes project replicability difficult. As a result, it seems there is almost a unique methodology for each project. Would it be possible to have some form of standardised methodology which applies to innovation as a whole?

KLS: I am not sure that standardisation is either possible or realistic, but it can be good to take it as a starting point. I am a firm believer in benchmarking and peer-learning – a very Nordic thing! It is important to have standards and to discuss them. At the same time, however, standards are in essence human constructions, built as experiments go along. They are never perfect and change to fit the context at hand.

CL: While I do not think it is possible to standardise, we must aim to be as rigorous as possible. Whenever possible, randomisation should be used. Making sure that policy evaluation occurs at every stage of a project’s development is also important. Following an initial experiment, it is also possible to adapt the methodology in light of unforeseen constraints. I also want to add that, more often than not, the non-replicability argument is used as an excuse to do nothing and remain passive; this is not acceptable.

OPSI: Experiments can very often be flawed or biased; perfect experiments do not exist. If so, when does an experiment become good evidence? Are there any institutional rules in the French and Finnish governments to indicate when experiments become evidence, or is it a subjective ‘team feel’?

CL: I see it as more of a ‘team feel’, something subjective. This, however, does not prevent us from collecting good data and rigorously measuring the impact of our experiments. The real issue, in truth, is access to data, which is still highly limited in France despite recent progress in open data and transparency. This is where we truly have to improve, and it may be the biggest challenge to come.

KLS: This whole issue of political commitment to experimentation for evidence is brand new. Within the Finnish government, the topic has only been on the table for two years – indeed, the promotion of a culture of experimentation is now included in the current government’s programme. Creating a shared understanding of what experimentation actually is, which standards apply, when it becomes evidence, and how it is perceived by the public is a complex process. In this area, it is essential to have a discussion about ethical standards for experimentation, and these are very much the ethics of science. To clarify the rules that apply, we published the Social Experiments in Finland – From a Research, Ethics and Legal Perspective paper in September 2016.

There is also the dimension of openness: we want to be as transparent as possible. As a consequence, the experiment is influenced by the very people we want to be transparent with, and we must control for that. It is increasingly important to make the evidence – rigorous, scientific evidence – understandable and available at an early stage. In many of the studies we commission within the Finnish Government, the knowledge sharing and interaction with actors may sometimes be more fruitful and telling than the actual outcome. Transparency and openness are key to understanding what good evidence is.