Biomedical science is evaluated by the (variously measured) quality of its published findings, its contribution to patients, the number of experts trained, its openness to collaboration, and – again, variously measured – its social impact. In the project environment, these measures take the concrete, sometimes rather unwieldy, form of so-called threshold indicators that must be met if the project as a whole is to be judged a success by the provider and have a chance of further development and support.
Threshold indicators are clearly defined targets which, in some cases, form a natural part of scientific practice. In other cases, however, they give the impression of being merely an administrative burden with no real benefit. In extreme cases, they feel almost like micromanagement, and we find them greatly hampering: they cost us time, energy, and creativity we would rather spend elsewhere.
Within our NICR consortium, we are doing well in realising our research and academic plans: we have results, the work continues, and the project has real impact and visibility. Even so, we must admit that when it comes to threshold indicators, we still have room for improvement. That could jeopardise not only our evaluation but, above all, the continuation and sustainability of what we have achieved so far. We must be able to translate our work into the language required by the provider – and threshold indicators play a key role in that language.
Let me briefly mention a few of the indicators, including their problematic aspects. I think I can judge fairly well which of them stir the most emotion among us – after all, I also report on behalf of my own group.
Number of outputs per FTE
This is a straightforward ratio between the number of research outputs and the share of FTEs financed by the project. It is simple, and there is hardly any way to argue against it: it is about who does the work, how much of it, and at what cost.
At least two outputs that present the gender dimension of research
The rule says: ‘This is not a formal appendix but a reflection of how biological differences, social factors, or access to care influence cancer research and cancer treatment.’ This is a different kettle of fish, and we deal with it in more detail in SCIIndicators, another section of this newsletter. It is nevertheless clear that we have some problems in this area: the provider seems rather strict and has not recognised some of the outputs we reported.
A non-publication output with financial benefits
This can be contract research, applied results, commercialisation of know-how, and the like. Here too we have outputs, and some of our partners are strong players in intellectual property and commercialisation. But these results have not been formally reported by our partners and thus, in the eyes of our provider, ‘do not exist’.
At least two outputs to support evidence-based decision-making of public authorities
This means overviews, recommendations, methodological guidelines, and data that can be used, for instance, in policymaking, but also the expertise our colleagues provide to the administrative bodies in charge of science, healthcare, and so on. In this area, we are in intensive communication with the provider: we are responding to their comments on our report and trying to communicate in every possible way with our supervisory body. In short, the supervisory body still does not seem quite satisfied with various outputs whose impact we consider evident – which we find, well, … exasperating ☺
We must not only do good science but also report it clearly
We repeatedly run into situations where some partner institutions do not attribute to our consortium outputs that received NICR support. When this happens, we lose – and not only formally – the chance to demonstrate NICR’s actual impact, which lowers its visibility. From the outside, it can then seem that the project is not meeting its targets; from the inside, it feels as if the partner has not identified with the project. We understand that no researcher wants to spend time filling in reporting tables, but this, too, determines the future of the project. Naturally, our administrators and the management of partner institutions should also help here. After all, the culture of the individual NICR partners is important for the project’s success.
Our consortium aims to become a stable national authority in academic cancer research. At the same time, it aims to have international impact – and to achieve that, we must not only do good science but also clearly demonstrate that we do. Threshold indicators are thus milestones we must pass if we want to continue our journey together. They are the price of continuity, of the chance to build on what we have quite successfully started.
Aleksi Šedo, NICR Director