What is research misconduct?

By SalM on March 11, 2021 in News

In Sweden, a national code takes 44,000 words to define research misconduct and discuss scientific values. Next door, Norway’s equivalent is a brisk 900 words, little more than in this news article. And it’s not just the size of the codes that differs across Europe: A new analysis of scientific integrity policies in 32 nations has found widely varying standards and definitions for research misconduct itself, despite a 2017 Europe-wide code of conduct intended to align them.

Research ethicists say the differences threaten to create confusion and disputes for international scientific collaborations. Teams often include members working in different countries; if a team member is accused of research misconduct, which country’s rules should apply? The decision affects who can be held responsible, and which behaviors are considered unethical. “It really is a difficult issue,” says Nicole Föger, managing director of the Austrian Agency for Research Integrity.

The mismatched standards have already led to practical problems, Föger says. She cites a case of an Austrian postdoctoral researcher who applied Austrian ethical standards while working at a university in another European country. The Austrian standards—mandated by the postdoc’s Austrian research funding contract—forbid “honorary authorship” for researchers who did not contribute substantially to a paper. But after leaving a senior researcher at this university off a paper because of a lack of contribution, the postdoc faced a university investigation and was found to have been in the wrong.

The 2017 European Code of Conduct for Research Integrity, developed by the European Federation of Academies of Sciences and Humanities, was designed to be easy for European countries to adopt, offering a nonbinding framework that they could add to as needed to fit their circumstances. It updated an unwieldy 2011 code and was more concise, Föger says. The 2017 version encourages core principles, including honesty, respect, and accountability, describes good research practices, and gives specific examples of misconduct.

But pickup of the European framework has been spotty, according to a study by Hugh Desmond, a philosopher of science at the University of Antwerp, and KU Leuven bioethicist Kris Dierickx. Of 32 countries in the analysis, only two—Bulgaria and Luxembourg—have adopted the European Code wholesale, the authors reported last month in Bioethics. There’s just one policy that all countries have agreed on: that fabrication, falsification, and plagiarism of data and findings constitute research misconduct.

Beyond that, the national policies stray significantly from the European model. “If they seek to rephrase things, that’s already significant in itself,” Desmond says—a signal that the authors of the document intended something different from the Europe-wide code. Many do not address behaviors, besides fabrication, falsification, and plagiarism, that the European code defines as misconduct, such as financial conflicts of interest, manipulating authorship, and self-plagiarism. Some countries say misconduct requires an intention to deceive; others define it as any violation of the code, even negligent ones. Some nations hold all co-authors jointly responsible for fraudulent work, whereas others don’t specify who is responsible.

But the study’s method overestimates differences between countries, says Daniele Fanelli, a scientific misconduct researcher at the London School of Economics. Just because wordings differ slightly from the European code doesn’t mean they don’t endorse the same underlying principles, Fanelli says. Another limitation of the study: Many countries—including Austria—have not yet updated their policies in response to the 2017 code.

The findings echo an ongoing debate in the United States about how research misconduct should be defined, says Lisa Rasmussen, a bioethicist at the University of North Carolina, Charlotte. In 2000, the U.S. government defined misconduct as fabrication, falsification, and plagiarism, but some researchers have since argued that other behaviors—such as sexual harassment—should be included.

The practical problems raised by a lack of consensus aren’t limited to Europe, says David Resnik, a bioethicist at the U.S. National Institutes of Health. The potential for serious complications with international collaborations is a worry lurking in the background that “really hasn’t gotten the attention that it deserves,” he says.

Global alignment of standards and policies would likely be even more difficult than it has proved to be in Europe, Desmond says. And with calls for harsher penalties and even criminalization for misconduct, he worries the complications caused by policy mismatch may become “a much more pressing problem.”

Source: sciencemag.org

Arkema Launches Global Start-up Connect Program for Responsible Innovation

By SalM on March 8, 2021 in News

Arkema (Colombes, France) has recently launched Start-up Connect, a new program that invites startup companies specializing in advanced materials from around the world to establish privileged research collaborations with Arkema and benefit from the Group’s support and technological experience. By providing technical or financial support for these innovations, Start-up Connect will be a strategic component of Arkema’s development within an ecosystem of responsible innovation.

The program will combine the dynamism of small, agile and innovative organizations with the Group’s expertise in specialty materials to develop the innovations of tomorrow, Arkema says. Further, the company will be providing these startups with its international reach, its in-depth knowledge of markets and applications and its ability to develop safe and high-performance chemistry. Arkema says it will offer access to the scientific and technical resources of its 15 R&D centers in France, the U.S. and Asia, facilitating the pooling of expertise, innovation and economic development at the heart of these regions. All strategic partnerships defined within Start-up Connect may range from technical collaboration to financial support, expertise, or mentoring.

Arkema has also chosen to devote the vast majority of its innovation partnerships to sustainable growth. To achieve this, the innovations that are selected must be linked to Arkema’s innovation platforms: natural resource management, lightweight materials and design, new energies, electronics solutions and home efficiency and insulation.

“Our materials provide concrete solutions to societal and ecological issues, but the major challenges of tomorrow cannot be addressed by our teams alone, regardless of their talent,” states Christian Collette, Arkema’s R&D vice president. “Our Start-up Connect program must therefore attract innovators from all over the world to work on solving these challenges alongside our researchers. In doing so, we want to create an impetus, a momentum, to bring transformative projects to maturity.”

Public Engagement in the Time of COVID-19

By SalM on March 5, 2021 in COVID-19

This sense of dualism – intense struggle alongside remarkable human accomplishment – is also at play in public engagement spaces within and beyond the academy. In trying times, it is especially critical that we continue to create and maintain effective dialogues between scholars and communities in order to generate equitable and sustainable solutions to the problems that matter most.

Our objective in writing this post is to reflect on the continued importance of public engagement in the current moment and to highlight the unique challenges communities face during a time of crisis. By sharing these examples of effective public engagement – as well as new efforts to build capacity for even greater transformation – we hope to learn from this moment and reflect on ways that universities can support this critical work.

The academic community is rallying to meet the crisis with waves of new research and creativity in engaging with different publics; public scholars have recognized and acted upon myriad opportunities during the pandemic. There have been outstanding examples of researchers engaging with media professionals to share expertise and shape stories, and interest in writing op-eds or for outlets like The Conversation is incredibly high. Opportunities to participate in research on SARS-CoV-2 or work with researchers to co-design possible antivirals are expanding rapidly. Demand for online teaching and learning is spiking, and universities and other organizations are producing just-in-time learning opportunities addressing the novel coronavirus or sharing fun experiences for kids and families at home.

At the same time, community partners who work with academic scholars are experiencing huge impacts, and few have the same security that the shelter of a university offers. Zoos, museums, and other spaces for informal learning are shuttered and have been forced to furlough or lay off workforces. Schools are closed for the year. Non-profit organizations – especially those focused on alleviating poverty, housing, and food insecurity – are working to survive and hold the line. And engagement opportunities relying on virtual tools and environments underscore disparities in access to technology and the internet, widening existing gaps that disproportionately affect people who are poor and people of color.

It is in this dual reality of intense struggle and remarkable human accomplishment where the importance and value of public engagement efforts become even more clear. Heartbreakingly, sometimes the communities with the greatest capacity to understand problems – especially through lived experience and other ways of knowing which aren’t rooted in academic expertise – are the most strained on the front-lines of crisis. Therefore the time of COVID-19 reinforces the responsibility and trust that institutions of higher education – especially public research universities – hold in serving the public good. Such institutions have much greater capacity to weather these trying times. We must leverage our relative stability and unique strengths and take active steps to be of service to our communities.

In addition, the pandemic – including all the associated challenges and opportunities – will become an anchor for the cohort’s reflective practice throughout the experience. We hope that continuing to offer this experience for Fellows during the time of COVID-19 will reinforce core tenets of ethical engagement work, including recognition, respect, and equitable partnership, and reinforce the need to be of service and to partner in mutually-beneficial ways with our communities in the times ahead.

We’ve highlighted just a few ways that the academic community continues to engage in important public engagement work. Certainly there are many more. There is also more work to be done – especially in the context of supporting scholars and community partners to be able to pursue this work sustainably and equitably.

  • How might institutions of higher education learn from the current moment – especially in how they value products of engagement work that aren’t rooted in grants and publications?
  • What infrastructure and policies are necessary to institutionalize a commitment to respectful and equitable engagement, including honoring the expertise and compensating the efforts of long-time public partners? How might such a commitment extend the shelter of the university to our communities during times of crisis?
  • How might universities help to address disparities in access, given that much of the current public engagement effort requires a device and an internet connection?
  • What other opportunities do you see to strengthen or support public engagement in a time of crisis?

The duality of this crisis truly poses an opportunity for all of us to reflect on the multiple roles we occupy and the communities of which we are members – both within and beyond institutions of higher learning. We hope to learn from this moment and use these lessons to build towards a more resilient future.

EU to roll out new approach to managing Horizon Europe and R&D policy

By SalM on March 4, 2021 in News

The European Commission’s research directorate (DG-RTD) is to focus more on policy development and work more closely with member states in the reform of national research systems and implementation of the European Research Area (ERA), following a reshuffle of the organisation that will see oversight of the implementation of EU research programmes delegated to a string of new executive agencies.

Jean-Eric Paquet, head of DG-RTD, told Science|Business the restructuring will help the Commission align research and innovation policy with global challenges. The new organigram was approved by the college of commissioners last week and will come into force on 1 April.

Paquet said the latest changes are a fine tuning of a reshuffle he initiated in 2019. “There is a large degree of continuity with the effort of two years ago,” he said.

The reorganisation also completes a process started back in 2007, when the first executive agencies – bodies set up by the EU to carry out specific technical, scientific or administrative tasks – were created. These were the European Research Council, handling fundamental research, and the Research Executive Agency, responsible for managing parts of EU research programmes. “The Research Executive Agency is now expanding its remit,” said Paquet.

“The strategic choice which was proposed by [EU research commissioner] Mariya Gabriel and myself is that with entrusting executive agencies with the implementation of the projects, we can then focus even further on research policy,” said Paquet. “We are not losing the deep link to projects because executive agencies are part of the Commission.”

The need to focus more on policy development and to help member states reform and strengthen their research and innovation systems was first floated by former director general Robert-Jan Smits. He proposed moving staff responsible for implementing projects to executive agencies, reducing headcount in DG-RTD by one third and allowing DG-RTD to concentrate on policy development.

In the event, 184 people, or 15% of DG-RTD staff, will move to executive agencies. The Commission has done a cost-benefit analysis, and while Paquet stressed that the main motivation is not to cut costs, he noted that agencies have greater freedom to employ people on fixed-term contracts, rather than recruiting highly paid career EU public servants. “The idea is indeed to [make] savings,” said Paquet.

Research stakeholders are worried that the reshuffle will put a dent in DG-RTD’s influence, but Paquet says the revamped organisation is intended to help the directorate shape policies, not to diminish its power.

The main changes

The reshuffle will see DG-RTD downsize from 50 to 43 units, while the number of units in executive agencies will grow from 29 to 48.

The previous organigram had three health units; now it has only two. Also, instead of two units working on materials, DG-RTD will have a single industrial transformation unit, as project implementation moves to a new executive agency.

Work on implementing research infrastructures, materials of tomorrow, and coal and steel will now be handed to several executive agencies, Paquet said. The European Innovation Council (EIC) will become a fully-fledged agency.

The health directorate has also been shrunk significantly, with at least 50 people moving to a new Health and Digital Executive Agency.

DG-RTD’s department that oversees contracting and payments has been shrunk to three units, with one financial team instead of three.

The former directorate for programming will become a common policy centre, supporting the entire research and innovation system of the Commission. Its staff will help executive agencies implement the Horizon Europe programme. “That’s really the engine room of research policy,” said Paquet.

The outreach directorate has now been reorganised under the name of “ERA and Innovation” to reflect recent efforts by Gabriel to convince member states to get behind the Commission’s plan to create a single market for research under ERA.

Paquet hopes the new RTD organisation will convey the message that Horizon Europe is the engine of EU research and innovation, but that the Commission is also there to support member states in boosting R&D capabilities and attracting and retaining talent for a successful revamp of ERA.

Pandemic Shows Significance of Open Data

By SalM on March 3, 2021 in COVID-19

Since March 2020, we have witnessed numerous parallels between COVID-19 and the climate crisis, including a lack of cohesive, coordinated global intervention and a laissez-faire response to a global emergency.

Inadequate government response to the pandemic has led to preventable deaths from a highly contagious virus, just as inadequate government responses to the climate crisis could result in exacerbated effects by drought, fires or flooding as well as rising food insecurity. These risks are fast approaching their tipping points.

COVID-19 has inundated our lives with numbers and social media updates to the point of statistical overload, a phenomenon WHO refers to as an ‘infodemic.’ The flood of information has occasionally caused the public to question the veracity of certain claims, which in turn has hampered effective public health responses. Nonetheless, the vast availability of data encourages scientists and citizen scientists across the world to disseminate models, ideas and scenarios for better progress against the virus.

Open Data for Forests

The pandemic reinforces the importance of data for interpretation and dissemination. It is a resource that needs to be carefully curated, because leaders, whether in health or in climate science, need to make informed decisions in order to respond to global agendas such as the Sustainable Development Goals (SDGs). Likewise, citizens will need assurance and transparency to act as individuals and communities, and to advocate for data-driven policy development.

Similarly, forest data transparency is key to supporting higher levels of ambition for the roles of forests in climate change action. Progress in National Forest Monitoring Systems (NFMS) over the past ten years has catalyzed solutions for forests and climate action, such as REDD+. In many countries, greater transparency of forest-sector data and information has resulted in improved national decision-making, and for the first time, detailed and transparent forest data has been reported internationally, with 50 countries having submitted forest reference emission levels to the UN Framework Convention on Climate Change (UNFCCC).

Still, countries’ efforts towards forest data transparency must be strengthened. Under the Enhanced Transparency Framework of the Paris Agreement, robust data collection is central for reporting on emissions and removals, as well as for tracking the progress of Nationally Determined Contributions. An NFMS that is transparent, reliable, relevant, accessible, and sustainable can support climate action on the ground.

What lessons can forest monitoring practitioners take from the ongoing pandemic?

  1. Sharing openly can flatten the curve: With COVID-19, countries that have embraced data acquisition through frequent testing and have been transparent about infection rates have mostly succeeded in flattening the curve of infections. In Germany, testing, tracing, and transparency are credited with building public trust. Likewise, improved data availability combined with transparency could catalyze more collaborative solutions to the climate crisis that could equally buy some precious time to achieve the terms of the Paris Agreement.
  2. Climate policies need up-to-date and integrated information: Data sets can be shared at unprecedented speed. While the availability of COVID-19 related data has often overwhelmed data consumers and resulted in conflicting guidance, frequent and integrated forest data is likely to enhance public engagement and collaboration on relevant solutions for forests.
  3. Public money means public information: Enormous financial resources have been poured into COVID-19 monitoring. Likewise, national forest data is primarily collected through taxpayer finances, either through national or international cooperation funds. Greater public financing of large-scale data collection and sharing ultimately means greater information available to the public, enhancing public trust and increasing opportunities for investors and researchers. Open, transparent, and reliable forest data can also enhance private investment, which is urgently needed to trigger transformation of forest and land management for climate action and other multiple benefits. Accurate and reliable forest data created from public funds needs to be open and accessible to the public.
  4. Overcoming obstacles to sharing: Under the pandemic, open data has accelerated science, but also introduced vulnerabilities. Both media and politicians have sometimes reported promising results before they have been scientifically validated. Among the obstacles to sharing forest data are concerns about inadequate use of the data and a lack of intellectual property recognition. Barely 14% of researchers shared their data in repositories in 2018. Yet, in a twist on the tragedy of the commons, while 74% of researchers regard others’ data sharing as beneficial to their own research, both scientists and governments remain reluctant to share their own data.

To overcome this resistance, legal agreements are needed that release individual contributors from conflicts with the institutions involved in data sharing, along with updated and harmonized measures to ensure the anonymity of legal subjects and/or spatial coordinates. Further, data ownership recognition standards should encourage data sharing. Government officials and researchers need to follow the FAIR (findable, accessible, interoperable, reusable) guiding principles for data management and stewardship, thereby making the case that, in the end, the benefits of sharing outweigh the disadvantages.

Forest Open Data Platforms and FAO’s Support

Some existing microdata platforms, such as the Global Forest Biodiversity Initiative (GFBI), largely compile and share global field data for academic research and require strict rules on confidentiality and control of users. Yet, forest-related information still remains largely scattered across multiple platforms. FAO is working with UN Member States to overcome obstacles to open forest data. Our current efforts include:

  • the Global Forest Resources Assessment 2020, which provides country validated forest data, accessible through an interactive platform and dashboards.
  • the Hand-in-Hand geospatial platform launched recently, representing a major step towards accessible and transparent cross-sectoral geospatial data across agriculture, fisheries, and forestry.
  • a set of free and open source tools (Open Foris) developed by FAO to facilitate flexible and efficient data collection, analysis, and reporting helping to enhance forest monitoring at national level.
  • A new FAO project ‘Building global capacity to increase transparency in the forest sector (CBIT-Forest)’ to establish a Global Field Forest Observation Repository with a view to harmonizing legal assurances in data confidentiality, redistribution policies, and quality assurance conditions meeting international data documentation protocols. Microdata inclusion on this platform would contribute to increased standardization, accessibility and data usage.

A New, Transparent Normal?

COVID-19 has increased awareness about the power of data sharing. We hope that this motivates governments and forest monitoring practitioners to share forest data. At the same time, it is important that open databases follow standards to minimize misuse and misinterpretation. We believe that open forest data can strengthen our collective effort to identify and apply solutions for forests as a key response to the climate emergency.

FAO’s support to forest monitoring aims to strengthen data openness standards while resolving conflicts with data protection and confidentiality. Reusability requires clear and accessible data use licenses; legal certainty, through license-to-redistribute agreements, ensures data robustness while respecting data sensitivity. A Global Field Forest Observation Repository that facilitates the sharing of forest microdata with technicians and academics, while supporting international reporting requirements across countries, is a step towards fostering that necessary transparency.

Source: sdg.iisd.org

Workshop Recap: Funding applications – How to incorporate RRI, SDG and RSSR into funding applications?

By Graham UCC on March 2, 2021 in Workshops & Training

The February 18 workshop (Funding applications – How to incorporate RRI, SDG and RSSR into funding applications?) offered plenty of opportunities for interaction and learning about different perspectives from across the globe.

In this case, the focus was on the requirements of different funding organizations and included insights from Japan, Ireland, and Lithuania, among others. The workshop also included examples from funded projects as well as a practical exercise to support the incorporation of the concepts associated with RRI, the SDGs and the RSSR in proposal writing. To access the presentations, please click on the links below.

SESSION 1

  • What is RRI? – Professor Alexander Gerber (Rhine – Waal University)
  • What are the SDGs and RSSRs – Dr April Tash (UNESCO)

SESSION 2: Incorporation of SDGs and RSSR in funding applications – Global perspective

SESSION 3: Incorporation of SDGs and RSSR in Funding Applications – Global Perspective

SESSION 4: Representatives that have worked in RRI projects before (collaborations, budgets, roles and responsibilities)

  • Security Projects – Dr Andrew Adams
  • MUSICA and Intelli-Guard Projects – Dr Gordon Dalton

Workshop Recap: What is RRI, SDG and RSSR: How is it important to your research?

By Graham UCC on March 2, 2021 in Workshops & Training

On Thursday, February 11 RRING hosted its first online members workshop: What is RRI, SDG and RSSR: How is it important to your research?

The one-day event was attended by over 100 participants who were given an introduction to Responsible Research & Innovation, Sustainable Development Goals, and Recommendations on Science and Scientific Researchers.

Over the course of the day participants in the workshop:

  • Engaged with the concepts of Responsible Research and Innovation (RRI), Sustainable Development Goals (SDGs) and Recommendations on Science and Scientific Researchers (RSSR) and their relevance in research from a global perspective.
  • Examined Global Responsible Research and Innovation based on the findings of the RRING project.
  • Explored the UNESCO Recommendation for Science and Scientific Researchers, how it relates to the SDGs and why it is important that these are incorporated in research.
  • Considered how these aspects related to their own work and research.
  • Partook in a survey to offer insight into their own knowledge and experience with RRI and what they would like to see from future RRING events.

SESSION 1 – RRI

SESSION 2 – RRI Briefing

SESSION 3 – SDGs and RSSR

Changing How We Evaluate Research Is Difficult – but Not Impossible

By SalM on March 2, 2021 in News

Declarations can inspire revolutionary change, but the high ideals inspiring the revolution must be harnessed to clear guidance and tangible goals to drive effective reform. When the San Francisco Declaration on Research Assessment (DORA) was published in 2013, it catalogued the problems caused by the use of journal-based indicators to evaluate the performance of individual researchers, and provided 18 recommendations to improve such evaluations. Since then, DORA has inspired many in the academic community to challenge long-standing research assessment practices, and over 150 universities and research institutions have signed the declaration and committed to reform.

But experience has taught us that this is not enough to change how research is assessed. Given the scale and complexity of the task, additional measures are called for. We have to support institutions in developing the processes and resources needed to implement responsible research assessment practices. That is why DORA has transformed itself from a website collecting signatures to a broader campaigning initiative that can provide practical guidance. This will help institutions to seize the opportunities created by the momentum now building across the research community to reshape how we evaluate research.

Systemic change requires fundamental shifts in policies, processes and power structures, as well as in deeply held norms and values. Those hoping to drive such change need to understand all the stakeholders in the system: in particular, how do they interact with and depend on each other, and how do they respond to internal and external pressures? To this end DORA and the Howard Hughes Medical Institute (HHMI) convened a meeting in October 2019 that brought together researchers, university administrators, librarians, funders, scientific societies, non-profits and other stakeholders to discuss these questions. Those taking part in the meeting (https://sfdora.org/assessingresearch/agenda/) discussed emerging policies and practices in research assessment, and how they could be aligned with the academic missions of different institutions.

The discussion helped to identify what institutional change could look like, to surface new ideas, and to formulate practical guidance for research institutions looking to embrace reform. This guidance – summarised below – provides a framework for action that consists of four broad goals: i) understand obstacles that prevent change; ii) experiment with different ideas and approaches at all levels; iii) create a shared vision for research assessment when reviewing and revising policies and practices; iv) communicate that vision on campus and externally to other research institutions.

Understand obstacles that prevent change

Most academic reward systems rely on proxy measures of quality to assess researchers. This is problematic when there is an over-reliance on these proxy measures, particularly so if aggregate measures are used that mask the variations between individuals and individual outputs. Journal-based metrics and the H-index, alongside qualitative notions of publisher prestige and institutional reputation, present obstacles to change that have become deeply entrenched in academic evaluation.

This has happened because such measures contain an appealing kernel of meaning (though the appeal only holds so long as one operates within the confines of the law of averages) and because they provide a convenient shortcut for busy evaluators. Additionally, the over-reliance on proxy measures that tend to be focused on research can discourage researchers from working on other activities that are also important to the mission of most research institutions, such as teaching, mentoring, and work that has societal impact.

The use of proxy measures also preserves biases against scholars who still feel the force of historical and geographical exclusion from the research community. Progress toward gender and race equality has been made in recent years, but the pace of change remains unacceptably slow. A recent study of basic science departments in US medical schools suggests that under current practices, a level of faculty diversity representative of the national population will not be achieved until 2080 (Gibbs et al., 2016).

Rethinking research assessment therefore means addressing the privilege that exists in academia, and taking proper account of how luck and opportunity can influence decision-making more than personal characteristics such as talent, skill and tenacity. As a community, we need to take a hard look – without averting our gaze from the prejudices that attend questions of race, gender, sexuality, or disability – at what we really mean when we talk about ‘success’ and ‘excellence’ if we are to find answers congruent with our highest aspirations.

This is by no means easy. Many external and internal pressures stand in the way of meaningful change. For example, institutions have to wrestle with university rankings as part of research assessment reform, because stepping away from the surrogate, selective, and incomplete ‘measures’ of performance totted up by rankers poses a reputational threat. Grant funding, which is commonly seen as an essential signal of researcher success, is clearly crucial for many universities and research institutions: however, an overemphasis on grants in decisions about hiring, promotion and tenure incentivises researchers to discount other important parts of their job. The huge mental health burden of hyper-competition is also a problem that can no longer be ignored (Wellcome, 2020a).

Experiment with different ideas and approaches at all levels

Culture change is often driven by the collective force of individual actions. These actions take many forms, but spring from a common desire to champion responsible research assessment practices. At the DORA/HHMI meeting Needhi Bhalla (University of California, Santa Cruz) advocated strategies that have been proven to increase equity in faculty hiring – including the use of diversity statements to assess whether a candidate is aligned with the department's equity mission – as part of a more holistic approach to researcher evaluation (Bhalla, 2019). She also described how broadening the scope of desirable research interests in the job descriptions for faculty positions in chemistry at the University of Michigan resulted in a twofold increase in applicants from underrepresented groups (Stewart and Valian, 2018). As a further step, Bhalla's department now includes untenured assistant professors in tenure decisions: this provides such faculty with insights into the tenure process.

The actions of individual researchers, however exemplary, are dependent on career stage and position: commonly, those with more authority have more influence. As chair of the cell biology department at the University of Texas Southwestern Medical Center, Sandra Schmid used her position to revise the department's hiring procedure to focus on key research contributions, rather than publication or grant metrics, and to explore how the applicant's future plans might best be supported by the department. According to Schmid, the department's job searches were given real breadth and depth by the use of Skype interviews (which enhanced the shortlisting process by allowing more candidates to be interviewed) and by designating faculty advocates from across the department for each candidate (Schmid, 2017). Another proposal for shifting the attention of evaluators from proxies to the content of an applicant's papers and other contributions is to instruct applicants for grants and jobs to remove journal names from CVs and publication lists (Lobet, 2020).

The seeds planted by individual action must be encouraged to grow, so that discussions about research assessment can reach across the entire institution. This is rarely straightforward, given the size of modern universities and the organisational autonomy within them, which is why some have set up working groups to review their research assessment policies and practices. At the Universitat Oberta de Catalunya (UOC) and Imperial College London, for example, the working groups produced action plans or recommendations that have been adopted by the university and are now being implemented (UOC, 2019; Imperial College, 2020). University Medical Centre (UMC) Utrecht has gone a step further: in addition to revising its processes and criteria for promotion and for internal evaluation of research programmes (Benedictus et al., 2016), it is undertaking an in-depth evaluation of how the changes are impacting its researchers (see below).

To increase their chances of success, these working groups need to ensure that women and other historically excluded groups have a voice. It is also important that the viewpoints of administrators, librarians, tenured and non-tenured faculty members, postdocs, and graduate students are all heard. This level of inclusion matters because when the communities affected by new practices are involved in their design, they are more likely to adopt them. But the more views there are around the table, the more difficult it can be to reach a consensus. Everyone brings their own frame of reference, their own ideas, and their own experiences. To help ensure that working groups do not become mired in minutiae, their objectives should be defined early in the process and should be simple, clear and realistic.

Create a shared vision

Aligning policies and practices with an institution’s mission

The re-examination of an institution’s policies and procedures can reveal the real priorities that may be glossed over in aspirational mission statements. Although the journal impact factor (JIF) is widely discredited as a tool for research assessment, more than 40% of research-intensive universities in the United States and Canada explicitly mention the JIF in review, promotion, and tenure documents (McKiernan et al., 2019). The number of institutions where the JIF is not mentioned in such documents, but is understood informally to be a performance criterion, is not known. A key task for working groups is therefore to review how well the institution’s values, as expressed in its mission statement, are embedded in its hiring, promotion, and tenure practices. Diversity, equity, and inclusion are increasingly advertised as core values, but work in these areas is still often lumped into the service category, which is the least recognised type of academic contribution when it comes to promotion and tenure (Schimanski and Alperin, 2018).

A complicating factor here is that while mission statements publicly signal organisational values, the commitments entailed by those statements are delivered by individuals, who are prone to unacknowledged biases, such as the perception gap between what people say they value and what they think others hold most dear. For example, when Meredith Niles and colleagues surveyed faculty at 55 institutions, they found that academics value readership most when selecting where to publish their work (Niles et al., 2019). But when asked how their peers decide to publish, a disconnect was revealed: most faculty members believe their colleagues make choices based on the prestige of the journal or publisher. Similar perception gaps are likely to be found when other performance proxies (such as grant funding and student satisfaction) are considered.

Bridging perception gaps requires courage and honesty within any institution – to break with the metrics game and create evaluation processes that are visibly infused with the organisation's core values. To give one example, HHMI aims to advance basic biomedical research for the benefit of humanity by setting evaluation criteria that are focused on quality and impact. To increase transparency, these criteria are now published (HHMI, 2019). As one element of the review, HHMI asks investigators to "choose five of their most significant articles and provide a brief statement for each that describes the significance and impact of that contribution." It is worth noting that both published and preprint articles can be included. This emphasis on a handful of papers helps focus the evaluation on the quality and impact of the investigator's work.

Arguably, universities face a stiffer challenge here. Institutions striving to improve their research assessment practices will likely be casting anxious looks at what their competitors are up to. However, one of the hopeful lessons from the October meeting is that less courage should be required – and progress should be faster – if institutions come together to collaborate and establish a shared vision for the reform of research evaluation.

Finding conceptual clarity

Conceptual clarity in hiring, promotion, and tenure policies is another area for institutions to examine when aligning practices with values (Hatch, 2019). Generic terms like 'world-class' or 'excellent' appear to provide standards for quality; however, they are so broad that they allow evaluators to apply their own definitions, creating room for bias. This is especially the case when, as is still likely, there is a lack of diversity in decision-making panels. The use of such descriptors can also perpetuate the Matthew Effect, a phenomenon in which resources accrue to those who are already well resourced. Moore et al. (2017) have critiqued the rhetoric of 'excellence' and propose instead focusing evaluation on more clearly defined concepts such as soundness and capacity-building. (See also Belcher and Palenberg (2018) for a discussion of the many meanings of the words 'outputs', 'outcomes' and 'impacts' as applied to research in the field of international development.)

Source: science.thewire.in