
Learning to do responsible innovation in industry: six lessons
ABSTRACT
There is now almost a decade of experience with RRI (Responsible Research and Innovation), including a growing emphasis on RRI in industry. Based on our experiences in the EU-funded project PRISMA, we find that the companies we engaged could be motivated to do RRI, but often only after we first shifted initial assumptions and strategies. Accordingly, we formulate six lessons we learned in the expectation that they will be relevant both for RRI in industry as well as for the future of RRI more broadly. These lessons are: (1) Strategize for stakeholder engagement; (2) Broaden current assessments; (3) Place values center stage; (4) Experiment for responsiveness; (5) Monitor RRI progress; and (6) Aim for shared value.
Introduction
The lessons we formulate are based on our experiences in the EC-funded project PRISMA, in which we piloted the implementation of RRI in eight companies (ranging from large companies to small and medium-sized enterprises (SMEs), located in the UK, Italy, Switzerland and the Netherlands) working in the technological domains of synthetic biology, nanotechnology, autonomous vehicles and the internet of things (IoT), including both industry-led and university-led projects (Maia and Coenen 2018; Nathan et al. 2018). Previously, we presented a conceptual model for RRI in industry (van de Poel et al. 2017) on which PRISMA was based. We have reported our scientific findings elsewhere; here our aim is to present the following practical lessons that we consider useful for effectuating RRI in industry:
- Strategize for stakeholder engagement
- Broaden current assessments
- Place values center stage
- Experiment for responsiveness
- Monitor RRI progress
- Aim for shared value
Strategize for stakeholder engagement
In the RRI literature, early stakeholder involvement is often considered a crucial element of RRI in order to increase transparency as well as alignment with societal needs and democratic values (e.g. Owen, Bessant, and Heintz 2013; cf. Rowe and Frewer 2000; Blok, Hoffmans, and Wubben 2015; Silva et al. 2019). Early engagement is not without its difficulties, however, especially in industry (e.g. Blok 2014; Brand and Blok 2019). That said, we observed that barriers to stakeholder engagement are often quite practical and mundane.
Even if stakeholder engagement is realized, it may be neither completely successful nor enough by itself to ensure the goals of RRI. It nevertheless remains an essential element for integrating RRI in any context, and thus a first priority should be to develop strategies to address existing barriers to stakeholder engagement. The following two examples illustrate some of the barriers we observed.
One of the pilots concerned a manufacturer of cleaning agents for professional use, which develops highly concentrated ecological cleaning agents combined with smart dosing systems. A main technological development for the company was IoT, which would allow cleaning devices to be connected and support the collection and exchange of data. The company recognized that this might raise issues with respect to privacy and security and that it might affect the trust of customers and other stakeholders. It was, however, a small family-owned company with about fifty employees, an R&D department of currently four people, and only the CEO in charge of strategic planning. The CEO therefore felt that the company was too small, and had too few resources, to organize a stakeholder dialogue on its own. Moreover, he was hesitant to reveal too much about the company’s innovations, to avoid informing competitors. In this case, we tried to organize a stakeholder dialogue with potential users of the technology, but it did not materialize because these users showed little interest in the technology or its potential social consequences, presumably because they were not sufficiently aware of the potential issues and their ramifications.
A company in the domain of synthetic biology experienced both a lack of resources (similar to the previous example) and the presence of conflicts and a lack of trust as barriers to stakeholder engagement. Although the company engaged with several stakeholders, such as public think tanks and consumer organizations, and was committed to public transparency, it faced fierce public criticism voiced by some NGOs towards the kind of product it worked on. The company considered this criticism to be strongly misguided; the NGOs, however, refused to engage in a dialogue, which might be explained by the strongly diverging positions of both actors. As the company also struggled to make a profitable business case out of products it itself believed were desirable and responsibly produced, it introduced a new strategy with a strong focus on profit making at the expense of stakeholder engagement and public transparency. These previous experiences made the company question the value of stakeholder engagement and public transparency.
We have grouped the barriers to stakeholder engagement that we found in these two and the other pilots into four main categories:
- Lack of resources: Companies may lack the resources, financial as well as organizational. In particular, for smaller companies the required investments might be too constraining.
- Lack of evidence of issues at stake: Early in the research and development phase, the ethical and social impacts of an innovation may be unclear. This can both challenge the identification of potential stakeholders and limit their interest in engaging.
- Confidentiality issues: Companies need to ensure the confidentiality of the most innovative aspects of their R&D activities (e.g. inventions not yet protected by a patent). This can hinder an open and broad stakeholder dialogue.
- Conflicts and lack of trust: Public opinion on a specific technology or product (or on the company itself) can be negative. Consequently, trust with certain stakeholders may be lacking and engagement can become practically impossible or unproductive.
Broaden current assessments
Recently, numerous (EC-funded) projects have aimed at the implementation of RRI in different contexts, including industry. In our view, these projects tend to follow a similar pattern: they develop new RRI tools or make inventories of existing ones and then attempt to apply these in the given context. This is also how we started in the PRISMA project. The approach, however, turned out to be of limited use.
In our experience, companies can be motivated to do RRI, but not primarily in the form of RRI tools that are brought to them from the outside. Rather, it is better to start from what companies already do and try to broaden that. In most pilots, some form of assessment of the ethical and social impacts of technology already took place, both through formal and explicit procedures (sometimes legally required) and more informally. The former included, for instance, approaches to risk, environmental and life-cycle assessment, and safety and quality management. Thus, in many cases, RRI approaches can be built on existing assessment activities in companies. The added value of RRI in such circumstances is to broaden these assessments. We found three ways to do this:
- Broadening the values and issues addressed. One of the pilot companies was active in sustainability assessment, but integrating its (cleaning) products with IoT might also raise privacy issues, an area of assessment with which the company had no experience yet. PRISMA helped them to assess this angle further. Another company’s value proposition was a piece of internet architecture that would enhance people’s privacy and control over their data. Since its raison d’être involved being at the forefront of a live public debate, the natural way forward was to begin with the company’s own well-developed principles.
- Including external perspectives in the assessment. Assessments can also be broadened by including (more) external perspectives, e.g. by organizing stakeholder engagement activities. These are difficult, but in our experience worthwhile. Indeed, in one case, a fruitful engagement was created by bringing together representatives of two of the companies that both work on digital technology (one focusing on automated cars, the other on software), enhancing a discourse about the management and protection of personal data.
- Upstreaming assessment so that it can influence research and innovation. In one of the pilot projects, a life cycle analysis (LCA) was carried out at the end of product development. Furthermore, within that project, it became clear as it progressed that the commercial prerogatives of one of the partners were guiding the nature of the products being developed in a way that made them less sustainable than they might have been. We suggested that such analysis be carried out in an anticipatory manner so that it can inform innovation and product development in a way that is timely and effective.
The lesson from the above is not that established tools are of no use for effectuating RRI in industry, but that one should start from the context, practices, and procedures companies are already experienced with, try to broaden these, and then see how existing RRI tools can be fitted in.
In general, we found it crucial that RRI is not effectuated in a ‘one size fits all’ fashion, but in such a way that it can be adapted to the specificities of particular companies or contexts. There is a need for both a bottom-up approach, such as the one just described in this section, starting from what is already happening in the company, and a more top-down approach, using and applying available and acknowledged RRI principles, practices, and tools.
To deal with this challenge, we developed an RRI roadmap methodology based on business-oriented, widely accepted innovation management methodologies. We took as references both existing management system standards, such as ISO 26000, ISO 31000, and CEN/TS 16555, and the concept of Innovation Policy Road-Mapping Methodology (IPRM), which aims to articulate societal needs and connect them to technological and industrial development policies and innovation strategies (Ahlqvist, Valovirta, and Loikkanen 2012). We developed a step-wise process in which the company is asked first to identify the specific products to focus on, the key ethical and social issues at stake, and the stakeholders to involve, and then to experiment with RRI practices, assess their value, and eventually commit to RRI principles and practices. The overall goal is the development of an RRI roadmap, setting out a vision and an action plan to integrate RRI into product development, from the early stages of R&I to market entry. Expert advice and internal training to increase RRI skills and competences are seen as prerequisites for developing the RRI roadmap. As is shown by the positive response of the companies involved in PRISMA (all fully endorsed their final RRI roadmaps), this approach seems particularly apt for developing a tailor-made RRI strategy that is attuned to the specific challenges, resources, expertise, and commitments of companies (Porcari et al. 2019).1
Place values center stage
The language of RRI is not (yet) familiar in industry. Rather, RRI discourse is perceived as academic and full of jargon (Dreyer et al. 2017). A commonly shared language is needed to improve the relationship between RRI approaches and existing company practices. Based on our experience, we think the concept of ‘values’ might be promising because values denote things worth striving for, such as safety, sustainability, integrity, openness and fairness. Companies often use values in their Corporate Social Responsibility (CSR) policies, mission statements and corporate strategies (cf. Iatridis and Schroeder 2016). The idea that they should create shared value for themselves and society is also gaining prominence (e.g. Kramer and Porter 2011). The discourse of values also has the communicative advantage that it helps foreground what is considered important and desirable.
One way we learned to promote the uptake of RRI activities in companies was to draw attention to tensions we witnessed among values pertinent to innovation (i.e. situations in which two or more values cannot be realized simultaneously).
In the case of a synthetic biology company, there was a clear conflict between the values of sustainability and reliability. To be sure that their products are sustainable – as they claim – they would have to conduct a thorough LCA. However, because the production processes are constantly developing, an LCA would not make much sense: it would soon be outdated. This means the company believes it is making sustainable products without knowing this for sure and without proof.
Another pilot, in the IoT domain, represented a complicated instance of possible value tensions: in some contexts the technology was proposed as a way of safeguarding people’s control of their data, in some as a way of enhancing their privacy, and in some as a way of increasing cybersecurity. Those working on the project put forward sophisticated accounts of how all three goals could be met simultaneously.
In yet another pilot, dealing with the application of organic-based nanomaterials in dermo-cosmetic products, the technology helped to increase the safety of the production process and the efficacy of the product, but at the same time raised potential consumer concerns about the use of advanced technologies in a product certified for using only organic (non-synthetic) ingredients. The dialogue between developers, producers, certification bodies and retailers was helpful in ensuring transparency and improving the confidence of all actors in the R&I value chain in the product.
From an RRI perspective, one would want to ensure that companies become aware of value tensions both in their normal operations and in their innovation processes. In many cases, there may be disagreement about how value tensions should be dealt with (or resolved) and/or there may not be one obviously best way to address them (van de Poel 2015). Nevertheless, companies have an opportunity to be transparent about, and accountable for, how they choose to deal with value tensions and why. In fact, it may be advantageous for companies not only to accept accountability for how value tensions are dealt with but also to actively communicate how they deal with them. Such communication might also contribute to the corporate image of a company and so help the implementation of its business strategy.
Experiment for responsiveness
Like stakeholder engagement, anticipation is usually seen as a pillar of RRI. However, anticipation is notoriously difficult and thus its practical value for companies can appear limited. In such cases we suggest a shift to the RRI dimension of responsiveness, here understood as responding to new insights and developments as they evolve over time.
Companies are sometimes reluctant to introduce innovative technologies because they are unsure of the reactions of potential clients, publics, and other stakeholders (cf. Blok 2014). In the so-called waiting games that can result (Robinson, Le Masson, and Weil 2012), all actors wait for each other to take the first step in reducing uncertainty and, as a consequence, nothing happens, even if it is in everybody’s interest to reduce uncertainty.
We witnessed a somewhat similar situation in one of the pilots: the pilot company developed a technology for drones that would allow images and data to be captured and analysed in real time. This could be helpful for monitoring and surveillance, for example for government tasks. It can also intrude on privacy, however. The latter issue was addressed by using a tool that would automatically blur details of, for example, bystanders. Another issue was the safety of using autonomously flying drones in populated areas. Such use was forbidden by current regulations. Although the government recognized that such drones could potentially be useful for fulfilling public functions, it was reluctant to develop a new regulatory framework without operational experience with the use of drones in populated areas. The company, on the other hand, could not try out the drones, as that was forbidden.
To break such a deadlock, mechanisms are needed to gain experience with the new technology and to reduce uncertainty. One way in which this can be done is through (small-scale) niche experimentation (Kemp, Schot, and Hoogma 1998). Such initiatives also offer potential for RRI. One may, for example, think of the creation of protected testing zones, where different regulations apply so as to allow for experimentation, for example with drones or self-driving cars (cf. Weng et al. 2015). Another example is living labs, i.e. real-world environments where new technologies are tried out and data are gathered about their use (Almirall, Lee, and Wareham 2012). Such experimentation can enhance learning about a technology, stakeholder reactions to it, and the ethical, legal and societal issues it raises, and thus can help better align innovations, users, and societal needs. To address the very real possibility that such experiments may create risks or negative social consequences for society, however, it is important to create conditions for responsible experimentation (see e.g. Van de Poel (2016) for a proposal).
Monitor RRI progress
Companies will most likely continue to struggle to understand the value of their investments in RRI and to fully appreciate how it can enhance their goals and objectives. Monitoring the RRI performance of companies can help them measure, understand, and communicate the value of their RRI investments. To this end, we developed a tool that helps companies formulate key performance indicators (KPIs) (Yaghmaei et al. 2019). Formulating KPIs may stimulate and help companies to think about what they want to achieve with RRI, and subsequently to monitor whether their RRI activities indeed help to achieve those objectives. It also allows the company to adapt or reformulate its RRI strategy and activities, if appropriate.
A crucial issue is whether such RRI monitoring should be done by the company itself or should include some form of external auditing. Both have their advantages and disadvantages. Internal monitoring would appear easier to implement and less sensitive for a company; moreover, as the company takes the initiative itself, it may be more open to learning from experiences. On the other hand, internal monitoring alone may raise doubts in the outside world, particularly if only positive assessments are communicated, suggesting that monitoring merely provides a form of window-dressing rather than a real commitment to RRI (cf. Taylor, Vithayathil, and Yim 2018). This issue raises considerations similar to discussions in evaluation studies about the purported goals of monitoring and evaluation in the first place (Kunseler and Vasileiadou 2016): are these activities undertaken to be held accountable, or to learn?
Aim for shared value
Innovators often see RRI as a means of gaining trust and legitimacy in the case of potentially controversial emerging technologies. We witnessed this particularly in the domain of synthetic biology. While RRI may indeed contribute to building trust and legitimacy, we witnessed two pitfalls, or caveats, to framing RRI as a trust-building endeavor:
(1) RRI requires mutual trust. RRI often requires some initial mutual trust to begin with (Asveld, Ganzevles, and Osseweijer 2015). This requires not only that consumers, NGOs and publics trust companies, but also that companies trust these stakeholders. In one pilot in synthetic biology, such mutual trust was absent. Under such circumstances, RRI may actually become part of the conflict. NGOs that oppose a technology may perceive the RRI strategy of a company as a strategic move to gain public trust rather than as a genuine effort. Companies, on the other hand, may be reluctant to become more open and inclusive, as RRI would require, if they fear that this will fuel the controversy.
(2) Trust cannot be instrumentalised. Trust is something that is difficult to earn but easy to lose. Moreover, trust is essentially a by-product of (normal) interactions between parties – interactions through which parties can assess each other’s trustworthiness. Strategies that deliberately and explicitly aim at trust may not only misfire but even be counterproductive, especially when they are driven by a strategic aim of increasing stakeholders’ acceptance of technologies.
Instead of striving to increase trust, we found that a more normatively desirable practical message is to strive to be trustworthy. This implies that one focuses on one’s own actions and explicitly leaves the decision whether to confer trust to others. Rather than promoting RRI as a means of increasing the legitimacy of innovations, we suggest that it be framed as a possibility to create shared value with society. Such a perspective also creates opportunities for companies, and other technology developers, to propose their own unique value proposition to society and, in doing so, to improve their corporate image and contribute to their business strategy.
Conclusions and the future of RRI
The gist of these six lessons is that RRI implementation should do justice to contextual factors and should start bottom-up, from what is already happening in a company or technological sector. This is reflected in our first three lessons, which aim at finding strategies, values and language that render RRI meaningful to companies. At the same time, bottom-up efforts need to be supplemented by more top-down measures and activities, which is reflected in our fifth lesson of RRI monitoring. As we indicated, such monitoring should not solely be done by a company or branch organization but should also ensure some external accountability. The fourth and sixth lessons about experimentation, trust and legitimation can perhaps best be seen as trying to avoid two pitfalls that may well apply to RRI more generally, namely an overreliance on anticipation and, particularly in an industrial context, placing too much emphasis on creating trust and legitimacy, which runs the risk of instrumentalising trust. We suggest that these three themes—developing bottom-up and top-down strategies, and avoiding pitfalls—will remain important cornerstones of successful efforts in the future to continue to implement and institutionalize RRI in industry and other contexts.
Acknowledgement
Parts of this article have been published as deliverable 3.3 of the PRISMA project. All authors participated in the PRISMA project. Ibo van de Poel wrote the first version of this contribution, which was subsequently revised by the other authors. All other authors contributed with their ideas and have read and commented on earlier versions of this article. We thank the reviewers and the editor for very useful comments and suggestions.
Ibo van de Poel is professor in Ethics and Technology and head of the Department of Values, Technology & Innovation at TU Delft. ORCID: 0000-0002-9553-5651
Lotte Asveld is assistant professor in Biotechnology & Society at TU Delft. ORCID: 0000-0002-2524-7814
Steven Flipse is assistant professor in Communication Design for Innovation at the Science Education & Communication research group at TU Delft. ORCID: 0000-0002-7400-1490
Pim Klaassen, at the time of writing, was senior policy advisor Safe-by-Design at RIVM and assistant professor of Policy, Communication and Ethics in the Health and Life Sciences at Vrije Universiteit Amsterdam. ORCID: 0000-0003-0029-6393
Zenlin Kwee is assistant professor of Strategy and Innovation in the Department of Values, Technology & Innovation at TU Delft. ORCID: 0000-0003-4146-033X
Maria Maia is a senior researcher working on Technology Assessment in Health at KIT. She is also an affiliated researcher at CICS.Nova. ORCID: 0000-0002-3501-6876
Elvio Mantovani is scientific director of Airi/Nanotec IT, the Committee for Nanotechnology and the other Key Enabling Technologies (KETs) of Airi. ORCID: 0000-0003-0971-4109
Christopher Nathan is a research fellow at the Interdisciplinary Ethics Research Group, University of Warwick. ORCID: 0000-0002-2386-3517
Andrea Porcari is project manager at Airi. ORCID: 0000-0002-7550-7805
Emad Yaghmaei is research fellow in Responsible Innovation at the Department Values, Technology & Innovation at TU Delft. ORCID: 0000-0003-4884-7801
Notes
1. This methodology is now being integrated, together with experiences from other projects and initiatives, in a pre-standard document developed as a CEN (European Committee for Standardization) Workshop Agreement (CEN 2019).