"False"
Skip to content
printicon
Main menu hidden.

Image: Hubert Neufeld

Published: 2022-05-06

Towards a new reward system for open science

FEATURE The transition to an open science system affects the entire research process. The reward systems also need to be adjusted in order to support and mirror the open research landscape, but what will this work look like, and what will change? We met Gustav Nilsonne, chair of the European working group dealing with the issue and a participant in the SUHF working group on merit reviews.

Text: Sanna Isabel Ulfsparre

The transition to open science is gaining momentum. Creating conditions and infrastructures for open research practices includes the development of methods and systems for assessing research merit in a way that does open science practices justice. This ambition is clearly stated in the EU's open science policy.

Current assessment systems have long been criticised for rewarding factors that do not reflect the actual quality of research, such as the number of publications and the journals where researchers publish. The impact factor has been a particularly contested measure.

How to assess the merits of research and researchers is a hot topic, as it is closely linked to research funding and to viable strategies for developing a research career. Sanna Isabel Ulfsparre, Librarian at Umeå University Library, met with Gustav Nilsonne, Docent at Karolinska Institutet, to get an insight into what is happening in the area right now.

Nilsonne works on the issue both nationally and internationally: as chair of the EOSC Association's working group Researcher careers, recognition and credit, and as a participant in the Association of Swedish Higher Education Institutions' (SUHF) working group on merit assessment. He is also one of the domain specialists that researchers and doctoral students can contact through the Swedish National Data Service (SND) for advice on research data management.

Concerns and research fields

Ulfsparre: I have been looking forward to writing this feature, because more information in this area has long been in demand. When you talk to people about open science, many express concerns about how their merits and credentials will be affected. It is now possible to publish openly in a different way than before, but in the early days, many were worried that they would have to choose between publishing openly and publishing prestigiously. Do you feel that any such concerns remain?

Nilsonne: Yes, that is absolutely my experience. One fear in particular is that researchers will have to pay sky-high fees in order to publish in prestigious journals, most often traditional journals with a high impact factor. From a broader perspective, it is clear that any reform of reward and merit systems risks disadvantaging those who have done well in the current system. There I perceive a certain conservatism, for better or for worse.

U: What lies behind that fear? If previous or existing systems reward high-quality research, surely the same research would still be rewarded in new systems designed to foster high-quality research?

N: There is a fear, which I think is justified, that we will start rewarding elements that are irrelevant to the quality of the research itself. If, for example, gender equality is stated as a rewarded factor, it may be beneficial in some ways, but it is not self-evident that gender equality is a characteristic of research quality as such. It could lead to political steering of research in ways that collide with the freedom of research and counteract research in areas where equality is not relevant to the research question. Now, I took gender equality as an example, but it can also be other things that are perceived as politically desirable: sustainability, responsible research, collaboration in different forms, and so on.

U: I have just written a feature on the CARE principles for managing indigenous data. In that context, there is a similar clash between ethical systems, where the principles are based on an ethical or moral stance, yet parts of them are not compatible with established systems for research ethics where the focus is on the freedom of research.

It is interesting that similar themes also recur in the discussion on reward systems. Perhaps it is a contentious issue in general? If such criticism or debate surrounds research practices in general, maybe it is not surprising to see it in this area too. Ethics, reward systems, and funding are, after all, closely linked, and such factors determine whether a researcher will be able to research at all.

N: Exactly. If we reward open research practices, such as open access to research data and other things that concern transparency and reproducibility, I personally think that it means a return to, and a revaluation of, the inherent quality norms of research. But not everyone agrees with that either. And if you lack the knowledge and skills to practise open science, it can be perceived as something dangerous, something that diminishes your position.

U: If you look at the basic principles of research, where reproducibility and transparency are central, access to data must historically have been a natural part of the reciprocal conversation that research somehow is: as a colleague, you should be able to see where research results come from, make your own assessment, and ideally perform the reproduction you are supposed to. Was there a period of openness before this period of closed data that we are now moving away from, and how does the current sharing of open data differ from how researchers historically shared data?

N: I guess it depends on how you look at it. The model of communicating science through articles was once the most practical and concise way to get results out there. You could sometimes publish your data openly, in tables for example, if there wasn't a lot of data. But more often, it was just a summary with statistical results, averages and so on.

...if we reinvented the scholarly literature today, when we have the Internet, sharing data would be a given.

If we reinvented the scholarly literature today, when we have the Internet, sharing data would be a given. And in some fields, it is a given. For example, geneticists who don't work with sensitive information have a strong tradition of sharing their data in open databases, because they saw the value of doing so early and it became the norm.

I think that the resistance toward data sharing has to do with several things: an unfamiliarity, an unwillingness to expose oneself to scrutiny, perhaps a fear that others will be the first to analyse data one has collected oneself. This illustrates the need to value collecting, generating and sharing data as a scientific activity. It is not enough to value the article, which, I think, is mainly a kind of flag or advertisement showing that the work has been done. The article does not in itself contain the really valuable material that would allow others to take the research further, not least the primary data behind the article's conclusions.

U: Does that apply to all kinds of research? I mainly have a background in the humanities myself. Humanities scholars work with a lot of qualitative analysis, and research can sometimes focus more on choices of method and theory than on quantitative data collection in the way that is often discussed when it comes to sharing data.

N: Of course, there are significant differences between fields. I have mostly done quantitative research on people. I think what I have said so far is fairly universal for quantitative research, but I can't speak with any authority about what applies to the humanities.

U: If I have understood correctly, data sharing that cannot be attributed to individual authors or contributors has been going on for some time. Given how reward systems may affect attitudes towards ownership and recognition, is there any risk that an increased focus on rewarding open research data management could inhibit or harm such existing practices?

N: One risk of linking the reward system to metrics and evaluation of open research data is that it might lead to counterproductive behaviour. For example, researchers might start splitting their datasets into fragments to make them count several times. Or they might take data that is already openly available, do some minor processing, and republish it without adding any significant value, just to get better "scores".

Truth and originality

U: I sometimes notice that there are different currents in the discourse about what data or research data IS at a fundamental level, and that this affects the rest of the discussion. Some enter the conversation presupposing that data and scientific results directly reflect the truth. At the same time, I am schooled in areas where the concept of truth itself is debated. At a university like Umeå University, with a comprehensive range of academic fields, from STEM to the arts, it feels important to keep both perspectives in mind and to be aware that this affects how people think about research data.

For example, I see both of these perspectives in discussions about copyright and data ownership. In fields where data is considered a representation of truth or reality, the idea that data is a public document without intrinsic originality, and thus cannot be copyrighted, seems less controversial. From this point of view, producing data appears to be more about finding ways to record a truth that is already there; the originality of the work comes into play at a later stage. So how does one evaluate the work of producing the data itself in a way that does justice to this often complex and highly qualified work?

N: On the question of ownership of research data, I am personally sceptical that it is conceptually possible. It is a legal issue: data would, as you say, have to reach a certain level of originality in order to acquire so-called database rights (katalogskydd). I do not know of any specific examples of research data that anyone has claimed to be protected under database rights. But I don't think it matters. Originality or not, research data is valuable to other researchers and should be shared when possible.

U: I think many people might be relieved to hear that, because part of the discussion about ownership may also be related to wanting recognition and rights to one's work.

N: Exactly, and I agree that this is really important. Those who compile data should get credit for their work, but it should be through the usual mechanisms of academia: co-authorship and citations, regardless of the legal state of ownership.

Planning and development

U: At the beginning of 2022, the Association of Swedish Higher Education Institutions (SUHF) set up a working group on merit assessments. At about the same time, the Paris Call on Research Assessment was issued, following UNESCO's Recommendation on Open Science and the publication of the European Commission's Towards a reform of the research assessment system: scoping report. Did things start to move then?

N: The European Commission has been pushing this issue very hard and for a long time. They've now got this Paris call, which I think is a welcome step forward. But it's probably not directly related to the creation of the SUHF group. The issue is simply at the top of the agenda at the moment.

U: For me, watching from a little distance, it was striking, since the issue of reward systems for open science has long been swimming around under the surface but now seems to be emerging in more concrete contexts. Have we moved from some kind of planning stage to approaching some type of implementation?

N: I rather think it's a planning phase that has moved into an even stronger phase of planning! This SUHF group will work for two years and then come up with some kind of recommendations, which the Swedish higher education institutions (HEIs) will subsequently start thinking about implementing... So from my perspective, we are moving slowly, and there is probably a way to go before implementation.

U: Is there anything individual researchers can do to get more involved?

N: It is very much we ourselves in the research community who are in charge of assessing the merit of research. It is very collegial. I think that each colleague needs to think about how to use their own role in the system to steer towards common goals, within the framework of one's mission as a reviewer.

...each colleague needs to think about how to use their own role in the system to steer towards common goals...

For example, Horizon Europe has introduced various parameters for open science as criteria when reviewing applications. My experience, as a reviewer myself, is that some reviewers take it very seriously, and others see it as a formality. So one thing you can do as a researcher is to think through your own role and become better at contributing to an evaluation that fulfils its purpose.

Then, of course, you can get involved in different contexts. For example, there is an Open Science Community Sweden, a grassroots network that any researcher can join.

U: What about assessment at the HEIs? Is it not until SUHF has done this planning-stage work, and the educational institutions have had time to consider it, that we can expect to see actual change at the seats of learning?

N: The HEIs have, one would assume, a constant internal discussion about what merit and research quality are and what is valuable. But I believe that Swedish HEIs generally have not yet had especially in-depth conversations about how to take open science into account in their evaluation and reward systems.

U: Is there anything researchers themselves can do to raise the issue at their institutions?

N: Yes, in principle. I think it's very good if researchers raise the issue in their academic environments and at their HEIs. It is helpful to contribute to keeping this discussion, which I assume exists and is ongoing, alive. I would warmly encourage that.

Exploring new incentive structures

U: What would a new reward system look like? Would it be a further development of what we have today, or something completely different?

N: I imagine further development. In the international discussions on merit systems, there is fairly broad consensus on what is not wanted: they want to get away from poorly functioning proxies as a means of measuring scientific quality. The impact factor is probably the most odious standard of all. Then something else is needed instead. What is usually pointed out as desirable is more qualitative evaluation - that is, research being evaluated based on an expert assessment of what it actually contains.

They want to get away from poorly functioning proxies as a means of measuring scientific quality.

Another track is to construct new indicators and metrics. There are those, not least in the EU system, who believe that we cannot do without metrics, and that we must therefore have metrics for open research practices as well. But there are very few examples of how such metrics could be constructed without the risk of being "gamed" and manipulated by someone who wants high scores without doing the actual work.

I think more experimentation is needed. I would like to see us trying different innovations in evaluation and then carefully monitoring their effects: for example, new indicators and metrics, new methods of merit assessment, and new incentives.

Some think that the HEIs and the research funders should come together and say, with one voice, "we should have a new system that works like this". That would solve the coordination problem, if they succeeded. It would be a way to ensure that nobody gets caught out for trying to lead the way without others following, as is sometimes feared. The Young Academy of Sweden (Sveriges unga akademi), among others, has pointed out that you can't change the system too quickly, because people would get stuck in the middle, and that you can't go alone in one direction without the rest of the world following. And there is something to that.

At the same time, you have to start somewhere if you want to make any change. I think there need to be different, pluralistic initiatives to investigate how to measure and evaluate open science. They may be in different fields, disciplines, or systems, but they need to be properly evaluated. The implementation also needs to be designed from the start so that it is possible to evaluate.

U: Based on what you say, it sounds like you could do both: keep the existing system while piloting at least some forms of alternative systems? It doesn't sound like there has to be a contradiction between these perspectives. The newly introduced test systems could also function as advisory to the existing system. In addition, this would provide an evaluation of the current system: if both yield the same results, those results can really be called solid.

But speaking of impact factors: it is, as you say, a disdained type of metric. As someone who has some basic training in bibliometrics and works closely with bibliometricians, I think there is an unfortunate mix-up between bibliometrics as a field and impact factors as a phenomenon, as if they were somehow the same thing. I perceive bibliometrics as a statistical method with qualitative elements, and I see bibliometricians being very conscientious and careful to reflect on their chosen metrics. However, perhaps those aspects of the field do not come into their own within the framework of the evaluations.

Will metrics still be a factor when working with these quantitative ways of measuring research merits?

N: I would think so. Bibliometrics does serve different purposes, as you point out.

At some HEIs, we have quite strong incentive management linked to impact factors: departments get money based on, for example, how high the impact factor is, how many publications they had last year, and things like that. In that way, it has a lot of influence in Swedish academia today. But there is a lot of variation between HEIs as well. Stockholm University is an example of a university that has signed the Declaration on Research Assessment (DORA) and has since moved away from bibliometric resource allocation to its departments. I think that could be looked at more closely within the sector, and more HEIs could follow suit.
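To make the mechanism concrete, here is a deliberately simplified Python sketch of the kind of allocation model described above, where a funding pot is divided among departments in proportion to impact-factor-weighted publication counts. The figures, weighting and names are invented for illustration; real HEI models differ in their details.

def allocate(pot, scores):
    """Split a funding pot proportionally to each department's score."""
    total = sum(scores.values())
    return {dept: pot * s / total for dept, s in scores.items()}

# Hypothetical score: sum over last year's papers of the journal impact factor.
dept_scores = {
    "Dept A": 10 * 2.5,   # 10 papers in impact-factor-2.5 journals
    "Dept B": 4 * 12.0,   # 4 papers in impact-factor-12 journals
}
for dept, share in allocate(1_000_000, dept_scores).items():
    print(f"{dept}: {share:,.0f} SEK")
# Dept B receives almost twice Dept A's share despite publishing far fewer
# papers - exactly the incentive structure being criticised.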

Also, of course, it's not the bibliometricians' fault that impact factors are linked to resources and incentives. And bibliometrics can be used for many other things, such as seeing what kinds of patterns and research exist at your HEI. It can be used to map open research practices, and I am all for that. I think it is useful. But at the same time, I fundamentally think that the journal-based model of communication that we have needs to be reformed.

U: Very interesting. I'm not a bibliometrician, but since I work in their proximity, I see many potential areas of use, not least in communication: understanding your institution, or understanding your field of research. So I sometimes find it a bit disheartening to see the field called out for being linked to a factor that some bibliometricians themselves might, in some ways, be critical of.

...one of the eight ambitions in the EU open science policy is "new generation metrics", to develop new indicators for open science...

It is also interesting that one of the eight ambitions of the EU open science policy is to develop new generation metrics for open science in different ways. It states that "New indicators must be developed to complement the conventional indicators for research quality and impact, so as to do justice to open science practices." I think it will be very exciting to follow how the relationship between merit and reward, methods for evaluating quality, and bibliometrics develops in the future. It is not certain that such metrics and indicators will be linked to merit and funding; I think that outcomes can also be measured on a more general level, without linking them to an individual researcher, group or organisation.

But if we talk about the more qualitative elements of assessment models, how do you deal with the risks of bias and prejudice? It must be similar to peer review, where this is a constant discussion.

N: Yes, exactly, it becomes peer review. I would argue that peer review as it is often done today, in a "black box", carries a very high risk of bias and prejudice. And there is no obvious reason why a similar assessment done outside of a journal would carry any greater risk. But that is something I know many would disagree with. I know many colleagues who perceive journal editors as fairly impartial judges of what is good. I don't think so at all, for a variety of reasons.

U: What other options are there?

N: Additional options for assessing the quality of research?

U: Well, on the one hand we have bibliometrics and quantitative measures receiving a lot of criticism; on the other, we have peer review with a high risk of bias. What other avenues are there to pursue? What other ways of assessing quality exist, and what other ways could be developed?

N: I think peer review and quantitative measures are basically the only measures available. But expert assessment can be done more or less systematically and transparently. I would argue in favour of using open and systematic methods of assessment: in peer review, when reviewing submitted articles and the like, but also in other contexts where research is to be assessed. When a reviewer writes a completely unstructured, narrative assessment, it entails a higher risk of distorted results than a systematic method, in my opinion.

U: So more systematic and more transparent?

N: Yes, I think so. It's perhaps easier to point out in the case of journal articles. There I think you could ask reviewers to review in a more systematic way. For example:

  • Have I looked at the data? YES/NO
  • Have I reviewed the quality of the data? YES/NO [Describe how]
  • Have I verified that the results will be the same when I run the code again? YES/NO

These are things that are often not included in a traditional peer review today.
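As a purely hypothetical illustration of how such a checklist could be made explicit and machine-readable, here is a minimal Python sketch; the schema and field names are invented for this example and are not drawn from any existing review system.

from dataclasses import dataclass

@dataclass
class StructuredReview:
    """A minimal structured peer-review record (hypothetical schema)."""
    manuscript_id: str
    looked_at_data: bool           # Have I looked at the data?
    reviewed_data_quality: bool    # Have I reviewed the quality of the data?
    data_quality_method: str       # If so, a description of how
    reran_code_same_results: bool  # Did rerunning the code reproduce the results?

# Example: a completed checklist that makes the review process itself transparent.
review = StructuredReview(
    manuscript_id="ms-2022-117",
    looked_at_data=True,
    reviewed_data_quality=True,
    data_quality_method="Checked variable ranges and missingness against the codebook.",
    reran_code_same_results=True,
)
print(review)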

U: Thinking about the quantitative measurements, can quality be increased in similar ways there?

N: Yes, in part. The quantitative measurements need to be transparent and reproducible in themselves. One of the many well-known shortcomings of the impact factor is that it is negotiated rather than calculated, and that it is difficult for an independent analyst to compute it and get the same results. To the extent that metrics are used, they need to be transparent and open to the research community. It must be possible for anybody to calculate them and get the same results, and they must certainly not be owned by any company, which would then get a very big influence over the research process.
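As a sketch of what "transparent and reproducible" could mean in practice, the classic two-year impact-factor formula can be computed openly from published counts, so that anyone with the same data arrives at the same number. The figures below are invented; in practice the counts could come from an open source such as OpenCitations or Crossref.

def two_year_impact(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to items from the previous two years,
    divided by the number of citable items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("No citable items published in the window.")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 300 citations in 2022 to items published 2020-2021,
# out of 120 citable items published in that window.
print(round(two_year_impact(300, 120), 2))  # 2.5

The point is not the formula itself but that every input and every step is open to independent recalculation.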

Dealing with uncertainty, doing the best research and continuing to push the glacier

U: We come from a period where different journal names have been given great weight and publishers have had great influence. Aspects of research assessment have become linked to the commercial, corporate world rather than the academic sphere. Can we see any change there? And might the existing dynamics affect the process of developing metrics and measurements for open science, as there may not be the same commercial interests in open access?

N: As far as I can see, we are still in that period. In the fields I can say something about, I haven't seen any movement away from assessments based on publication in highly ranked journals.

Personally, I am torn by this question on a daily basis. It obviously affects how I structure my own research, to make it easy to publish, preferably in highly ranked journals. On the other hand, I want to do what I think is the most important and best research. Sometimes it is possible to reconcile the two, but there is an underlying conflict that I have to deal with, from a professional and ethical perspective, in my own daily life.

U: What about research data as a research output? There, the established publishing structure does not exist to the same extent. Can you somehow live your ideals through the way you make research data available, even if you have to adapt to the current evaluation systems when publishing articles?

... that researchers' incentives are askew is one of the biggest obstacles I see when I try to help colleagues...

N: Yes, I think so. From my perspective, it is a gamble to assume that research data will come to be valued and that it is therefore good to be ahead. I still believe that, and it is partly coming to fruition. But that researchers' incentives are askew is one of the biggest obstacles I see, both when I try to help colleagues and in discussions about policy: the worry that by focusing on making data available, you lose time and resources that you could have spent on optimising your publication list.

U: Although the practical implementation is some way off, there are quite a few manifestos, discussions and principles about what is wanted in a modified or new reward system. I have looked a bit at DORA, the Leiden Manifesto, the Paris Call and the Hong Kong Principles for assessing researchers. I also looked at last year's Munin Conference, where different Scandinavian countries discussed their processes around research merit assessment. There seems to be at least some consensus on what they want more of and what they want to get away from. But how do you take it from theory, policy documents and aspirational statements to something practical?

N: I'll keep pushing the glacier, and we'll see what happens. There is a huge push for open science from younger researchers and from our funders, such as the European Commission, that will permeate the system and make the large mass of researchers in the middle see the value of new ways of working. Also: "one funeral at a time".

U: One funeral at a time? Explain.

N: It's an old saying that has been attributed to Max Planck about how research progresses "one funeral at a time".

U: Aha, so those who are ingrained in the old pattern will retire (to put it a bit more kindly and less brutally)? So it is simply a matter of waiting for a generational shift?

N: Yes, you could say that.

Developments in the near future 

Since the interview was conducted on 8 March this year, the National Library of Sweden (KB) and the Swedish Research Council (VR) have presented reports on their respective governmental coordination assignments for open science in Sweden. The reports were followed by a webinar, "Vägen till öppen vetenskap" ("The Road to Open Science"), in which SUHF also participated with a presentation. One topic that came up in the discussions was reform of the research reward systems and what would be required to revise the systems on a broader scale. The video of the webinar will be made available with subtitles.

Together with colleagues, Gustav Nilsonne has published opinion pieces in Biblioteksbladet and Curie on the importance of research libraries and the research community getting more involved in the move towards open science. Sanna Isabel Ulfsparre and Kristoffer Lindell (Head of Department at Umeå University Library) wrote a response in Biblioteksbladet, in which they additionally highlight the importance of libraries getting involved in developing interoperable and sustainable metadata standards for research data, metrics for open science, and the role bibliometrics might play in future reward and evaluation systems.

Work is currently underway to supplement the SUHF Open Science Roadmap with an action plan or guidance document. The result will be a more concrete description of what is needed to achieve the government's goals for open publishing and accessible research data, and for Swedish higher education institutions to be equipped for inclusion in the international infrastructure European Open Science Cloud (EOSC). The working group includes members of SUHF's national working group on research data and representatives from SUHF's reference group for the EOSC Association, including Gustav Nilsonne.

Further reading on all topics can be found in the links below.

On 17 May, the Research support and collaboration office will host a one-day conference on the theme "Collaborating. Merits, means and goals" in Rotundan at Umeå University.

Links

Research data consulting and domain specialists (SND)

Working group: Researcher careers, recognition and credit (EOSC Association)

Working group for merit assessment (Arbetsgruppen för meritbedömningar, SUHF. In Swedish only)

Open access (publishing with open access)

CARE: Indigenous rights and open data

Paris call on research assessment

UNESCO recommendations on open science 

Towards a reform of the research assessment system: scoping report 

Horizon Europe

EU support for open access

Open Science Community Sweden 

The Young Academy of Sweden (Sveriges unga akademi) 

Analysis and evaluation (about Impact factor and the Norwegian registry)

The Declaration on Research Assessment (DORA) 

The EU’s open science policy

The Leiden manifesto 

Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, et al. (2020) The Hong Kong Principles for assessing researchers: Fostering research integrity. PLoS Biol 18(7): e3000737. 

No. 4 (2021): The 16th Munin Conference on Scholarly Publishing

KB and VR release reports on open science

Forskningsbibliotek viktiga för Europasatsning på öppna data (Research libraries important for European investment in open science, Biblioteksbladet. In Swedish only)

Sverige får inte missa tåget när EU satsar på öppen vetenskap (Sweden can't miss the train as the EU invests in open science, Curie. In Swedish only)

"Vi behöver se var vi är relevanta och bjuda in oss själva" ("We need to see where we're relevant and invite ourselves in" (Biblioteksbladet. In Swedish only)

A Swedish action plan for open science is underway within SUHF

Collaborate. The merits, the means and the goals (conference May 17)

Information on how the material may be used and shared

The text of the feature "Towards a new reward system for open science" is licensed with a Creative Commons Attribution 4.0 license (CC BY 4.0). This allows for extended sharing and use of the material.

You must give appropriate credit, provide a link to the license, and indicate if changes were made.

You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

When you use the material, the following information, including links, should be included:
"Towards a new reward system for open science" by Sanna Isabel Ulfsparre is licensed under CC BY 4.0