The seminar discussed several topics related to the goals of, and roadmaps to, Open Science, Open Knowledge and career assessment. The transition to open science is proposed mainly because of the widely known issues in the way scientific knowledge is disseminated today, and in the criteria under which researchers and scientists are evaluated when building a career.

Probably the most visible issue regarding the dissemination of and access to scientific knowledge comes from the conventional model of publication in academic journals. Research articles and papers are usually published behind paywalls, which limits broad access to scientific knowledge. Moreover, a publication fee may be requested of the authors once a paper has been peer-reviewed and accepted. Peer review is usually done by other researchers without compensation, and in some cases even the editorial board of a journal is not compensated either. In my opinion this publishing workflow is economically inefficient (for science): unpaid work and fees are invested in the publication of a paper (which is in principle done for the benefit of humanity’s scientific knowledge), but then that knowledge is not freely available to everyone.

Open access publishing might be considered a more efficient alternative, but like the conventional model it still relies on the valuation of a journal by simplistic indicators such as the impact factor. The wide acceptance that such journal-level indicators are strongly correlated with the quality of any paper published in the journal entails a series of issues regarding the assessment of researchers pursuing an academic career. I believe a paper is more likely to be of high quality if it is published in a top-ranked journal than in a low-ranked one. However, this cannot be taken as strong, or even reasonable, evidence about the quality of a paper.
Thus, universities and regulating institutions that judge the quality of work through indicators such as the impact factor of a journal are probably not deciding objectively in most cases. Moreover, university departments and regulating institutions sometimes score the work of a researcher using formulas involving the impact factors of the publishing journals. I personally find it hard to believe that such rather arbitrary scoring systems lead to fair conclusions that correlate strongly with the quality of the work.
Besides research, other aspects must be considered when assessing careers in academia, such as education, leadership and impact. It has been suggested that it would be academically more efficient if, instead of weighting all of these aspects in the career assessment of one individual, different career paths were considered based on the strengths of the individual. Specific evaluation criteria would then be applied depending on the path followed by the pursuer of an academic career.
I personally believe that solving the aforementioned issues is very difficult for two main reasons: i) there are no widely known indicators that can objectively assess the quality of work, and ii) there is no widely accepted way to certify that a paper has been subjected to a rigorous, high-quality peer review other than publishing it in a journal with some degree of prestige.
Here’s a bunch of crazy ideas: what if we develop digital platforms to help address both issues above?
Imagine changing the concept of a paper or scientific publication; let’s call it an «e-paper» for further reference below. I will focus on features of technical papers, which is the area I have experience in, but the concept can probably be generalized to any field. Instead of sticking to a PDF document containing static text, equations and figures, let’s have an open online platform where the interactive content of a research work can be included. We can think of this as an arXiv on steroids.

Plots in an e-paper can, for example, be manipulated interactively against various parameters, in a way that might resemble Wolfram Mathematica’s notebooks. Notes can be added to the document, with links to other e-papers or other works. Other papers or e-papers can be easily referenced, pointing to specific sections. Raw data and processing algorithms can also be incorporated, readily available to anyone, so that others can replicate experiments and results. Edits to the e-paper are optionally tracked in a «git»-like way.

The e-paper is interactive, and anyone can request to make comments or ask questions on specific parts (a Stack Exchange-like feature?). The authors can read the comments or questions and decide which ones are worth making visible (or highlighting) to everyone because they contribute to the e-paper’s content or discussion. Edits or contributions could also be made by other authors, and the main authors can decide to make them public if they find them valuable. This promotes and enhances collaboration between authors and research groups, and generalizes the concept of co-author! A third party can now make significant contributions to an already published e-paper and, upon agreement of the original authors, the contributions are published and their contributors added as co-authors, specifying their exact contributions.
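To make the idea a bit more concrete, here is a minimal sketch of what an e-paper record could look like, with «git»-like edit tracking, author-moderated comments, and third-party contributions promoted to co-authorship. Every class name, field and method here is a hypothetical illustration of the workflow described above, not an existing platform API.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    section: str           # anchored to a specific part of the e-paper
    visible: bool = False  # hidden until the original authors approve it

@dataclass
class EPaper:
    title: str
    authors: list
    edits: list = field(default_factory=list)     # git-like edit history
    comments: list = field(default_factory=list)

    def record_edit(self, author, summary):
        # Every change is tracked, like a commit message.
        self.edits.append((author, summary))

    def add_comment(self, comment):
        # Anyone can submit a comment or question; it stays hidden for now.
        self.comments.append(comment)

    def approve_contribution(self, comment):
        # Upon the original authors' agreement, a third-party contribution
        # is made public and its author becomes a co-author, with the
        # exact contribution on record.
        comment.visible = True
        if comment.author not in self.authors:
            self.authors.append(comment.author)

paper = EPaper("An example e-paper", ["Alice"])
paper.record_edit("Alice", "initial version")
paper.add_comment(Comment("Bob", "Added a missing derivation", "Sec. 2"))
paper.approve_contribution(paper.comments[0])
print(paper.authors)  # ['Alice', 'Bob']
```

The key design point this sketch tries to capture is that moderation stays with the original authors, while the platform keeps a permanent, attributable record of who contributed what.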
E-papers would generate tons of metadata, which can be arranged per author and made readily available. Hence, an author would have a set of numerical variables such as the number of publications; the number of comments, reads and interactions per publication; the number of contributions as co-author to other e-papers; the number of citations; and so on. Once such a large set of variables is readily available on the platform, any institution or university can use it to create its own indicators based on its needs, and then rank and assess individuals. After papers are published on the platform by anyone and for free (as happens now with arXiv), authors can ask publishing companies (or vice versa!) to start a certified peer-review process. If the paper passes the requirements of the reviewers, it is stamped as certified. This stamp is just another metadata item, and carries whatever weight the university or institution considers appropriate when creating its indicators.
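As a toy illustration of how institutions could build their own indicators from such metadata, the sketch below computes a weighted sum over an author's variables. The field names, weights and numbers are all made-up assumptions; the point is only that the same open metadata supports different, institution-specific scoring formulas.

```python
def author_score(metadata, weights):
    """Weighted sum of an author's e-paper metadata variables.

    Fields absent from the metadata default to 0, so each institution
    can pick only the variables it cares about.
    """
    return sum(w * metadata.get(name, 0) for name, w in weights.items())

# Hypothetical metadata for one author, as the platform might expose it.
author = {
    "publications": 12,
    "certified_publications": 5,    # passed a certified peer review
    "citations": 230,
    "third_party_contributions": 4, # contributions to others' e-papers
}

# Two institutions weighting the same data differently.
teaching_university = {"publications": 1.0, "certified_publications": 3.0}
research_institute = {"citations": 0.5, "certified_publications": 2.0,
                      "third_party_contributions": 5.0}

print(author_score(author, teaching_university))  # 12*1.0 + 5*3.0 = 27.0
print(author_score(author, research_institute))   # 230*0.5 + 5*2.0 + 4*5.0 = 145.0
```

Note how the certified-review stamp enters only through whatever weight each institution assigns to `certified_publications`, exactly as suggested above.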
Gabriel Arturo Santamaría Botello