LEXONOMICA Vol. 14, No. 1, pp. 127–152, June 2022
https://doi.org/10.18690/lexonomica.14.1.127-152.2022
© 2022 University of Maribor, University Press
Accepted 22. 2. 2022, Revised 4. 5. 2022, Published 21. 6. 2022

Keywords: scientific journals, editors, quality criteria, quality assessment, copyright

EDITORS' RESPONSIBILITY FOR PUBLISHING HIGH-QUALITY RESEARCH RESULTS: A WORLDWIDE STUDY INTO CURRENT CHALLENGES IN QUALITY ASSESSMENT PROCESSES

KATARINA KRAPEŽ
University of Primorska, Faculty of Management, Department of Law, Koper, Slovenia
Corresponding author: katarina.krapez@fm-kp.si

Abstract: The contemporary scientific community faces significant challenges in assuring research credibility. Since the onset of the COVID-19 pandemic, there has been an even greater interest in the swift publication of valid research results, and individual editors of reputable academic journals may feel pressured to ensure the high quality of published papers. Aiming to explore how editors perceive their management of quality assessment processes and to identify the decisions and processes that editors believe to be the most critical for securing high-quality publications, we conducted a qualitative survey in which 258 editors of scientific journals from 42 countries participated. Key findings of the first part of the study show that, across disciplines, editors perceived manuscripts as excellent if they were innovative, scientifically sound, well written and well argued, addressed a significant topic, contained useful results, and had the potential to change (improve) something. Secondly, originality emerged as the leading quality criterion in manuscript quality assessment, followed by validity and significance. Finally, the results indicate that the factors influencing the overall complexity of editorial quality assessment are: (i) a clearly defined minimum threshold of required quality, (ii) consistency of assessment across individual quality criteria, and (iii) the experience of the editor.

1 Introduction

In recent years, the objectivity of the quality assurance processes in scientific journals, specifically the reliability of editorial checks and the peer review process, has been continuously questioned. The recent evolution of processes within editorial offices has also been particularly dynamic. Although highly regarded by the vast majority of researchers (Harley et al., 2010: 11-12; Mulligan et al., 2013: 132; Elsevier and Sense about Science, 2019: 5-14), the conventional model of quality assessment (with an initial phase of quality checks, a peer review process and final decision-making regarding publication) has been recognised to overburden editors, their teams, and reviewers (also known as 'reviewers'/editors' fatigue'; Severin and Chataway, 2020: 19-21, and 2021: 539-540). As the traditional model of performance assessment in science is based on the number (and, to a far lesser extent, the quality) of publications (Harley et al., 2010: 10-13), the influx of manuscripts at the editorial offices of reputable journals has significantly increased. This has resulted in a high proportion of 'desk' or 'initial' rejections of manuscripts (Primack et al., 2019), delays or a decline in the quality of peer review reports (Glonti et al., 2019: 7-8), a lack of available reviewers (Virlogeux et al., 2018: 32-33), inadequate resources to motivate and compensate reviewers (Zaharie and Osoian, 2016; Warne, 2016) and editors (Vrana, 2018: 371), etc.
Despite the novel challenges described above, the editor's responsibility for publishing high-quality materials remains the same and has been widely defined and elaborated (Council of Science Editors, 2012; ALLEA, 2017; COPE et al., 2018; ICMJE, 2019; Valkenburg, 2021). As the person in charge of the quality assessment protocols, a chief scientific editor oversees the entire chain of decision-making regarding the publication of a manuscript and sets standards for all other stakeholders (e.g. the editorial team, reviewers, authors). Only a limited number of studies comprehensively investigate how editors across scientific fields perceive their role and how they react to current challenges. Even less research is dedicated to editors' perception of quality criteria and standards in scientific publishing. The literature review indicates two dominant ways of obtaining knowledge in the field. Researchers have approached the subject either by studying data from actual reviews of manuscripts submitted to a particular journal (or journals) and editors' responses to such review reports (Miller and Perrucci, 2001: 96-98; Turcotte et al., 2004: 550; Newton, 2010: 135-138) or by researching editors' experiences and beliefs related to their tasks and roles, their management of manuscript processing, and editorial decision-making (Mustaine and Tewksbury, 2013: 391-393; Glonti et al., 2019: 2). Miller and Perrucci (2001: 96-98) studied the quality assessment processes applied to 673 manuscripts by the Social Problems journal editors from 1993 to 1996. They examined editors' 'gate-keeping role', which is exercised at two points in time: first, when a decision is made to deflect a manuscript from the review process (desk rejection), and second, when the editor decides to publish a paper, invites author revisions, or rejects a paper. In relation to the deflection process, Miller and Perrucci (2001: 98-99) stressed the importance of rejection guidelines. Exemplary desk rejection standards included: a) limitations in the scope of the manuscript, b) whether the manuscript offers at least a modest contribution to a theoretical perspective, and c) whether it is more than an opinion essay. The latter (second) gate-keeping decision is closely related to the outcome of the review process. Miller and Perrucci's (2001: 103) research results showed that a mutual recommendation for publication by both reviewers improved the chances of publication of the manuscript; however, such reviewer agreement was rare (only 21.2% of the time). Reviewers recommended acceptance less often than editors decided to accept. If reviewer disagreement occurred, editors reacted in three different manners: they sought more reviews, they wrote a letter to the author explaining the reason for the rejection, or they sought help from an associate editor and decided together. When editors received help from associate editors, they based their common decision on: a) the more insightful of the two reviews, b) additional insights provided by colleagues with the necessary expertise, if their own reading of the manuscript and reviews left some questions unanswered, or c) the help of an advisory editor, in case they perceived any potential conflict of interests.
As further analysed by Hausmann and Murphy (2016: 285), in a reflection on the past on the occasion of the 60th anniversary of the Journal of Neurochemistry, editors, as the gatekeepers to reviewers, tended to set their desk rejection criteria to a more or less conservative standard, and they were conscious of the possibility of erring in their decisions. Nevertheless, editors were in an indispensable and responsible position to either triage a paper or send it out to reviewers. The larger the pool of competent reviewers with scientific expertise and high ethical standards, the easier the editor's task. A Canadian study (Turcotte et al., 2004: 550) aimed to identify the characteristics of the manuscripts submitted to the Canadian Journal of Anesthesia (CJA) associated with their acceptance or rejection. Researchers analysed peer review material (the reviewers' comments) from 213 submissions and their impact on the editors' decision to publish. The study recognised that the most important criteria with regard to manuscript acceptance or rejection were overall consistency (namely, the relationship between experimental design, results, and conclusions), originality, and the use of an appropriate study design. Newton (2010: 135-138) researched anonymously written reviews and editorial responses to peer review of eight studies of textbooks that were submitted to subject-oriented academic journals in education. Results showed that editors conducted their quality management duties in different ways, ranging from the mechanical processing of articles and reviews, which gave the decision-making power to reviewers, to a thoughtful and critical engagement with content. Important limitations of this study were that the scientific standards of the journals were not examined and that the sample was small, which did not allow for an in-depth investigation into the editors' beliefs. In a 2012 study that focused on identifying the processes and resources drawn upon by journal editors in fulfilling their roles, Mustaine and Tewksbury (2013: 391-393) researched the perspectives and experiences of sociology and criminal justice journal editors. The 53 survey respondents were editors of journals ranked in Journal Citation Reports. In relation to desk rejections, Mustaine and Tewksbury (2013: 395-397) confirmed the importance of the 'fit' of the manuscript to the journal's mission and focus. In their research, participating editors rated it as the second most important factor in evaluating a manuscript, the most important being 'quality of methods'. Mustaine and Tewksbury (2013: 399) stressed that those authors who submit manuscripts without regard for considerations of 'fit' are likely to encounter rejection and possibly see the process of publishing as too harsh and perhaps unfair. Other important manuscript components in the publication decisions of study respondents were 'clarity of the findings' (the third most important manuscript component), followed by 'description of methods'. Respondents used several methods to identify possible reviewers: they maintained a journal database of potential reviewers and their topics (92 %), they used the reference list of a submitted manuscript to select reviewers (86 %), they solicited authors of recently submitted manuscripts to serve as reviewers (81 %), and they conducted online searches for scholars in particular areas (75 %).
The limitation of this study is that it only investigated the importance of 'manuscript components', or essential parts of the manuscript, in editors' publication decisions, while publication criteria as qualities that editors look for in a manuscript were not systematically investigated. Also, this study did not focus on the entire chain of processes involved in editorial decision-making. Recent studies show that editors' perspectives on stakeholders' roles and tasks within the quality assessment processes differ significantly according to each journal's unique context and characteristics (Vrana, 2018; Glonti et al., 2019: 6-7). In a worldwide survey, Glonti et al. (2019: 2-7) conducted semi-structured face-to-face or telephone interviews with 56 biomedical journal editors from general and speciality journals. This study aimed to provide an in-depth account of participating editors' experiences with, and expectations towards, peer reviewers. Even though there was broad agreement on the tasks expected of a particular stakeholder, especially peer reviewers, in assessing the scientific aspects of manuscripts, the research findings indicate that expectations differed regarding the depth of the performed tasks. Participating journal editors, however, agreed regarding their own role as the 'ultimate decision makers' and made it clear that the decision-making process is shaped and influenced by the interplay of a complex web of factors, not solely peer-reviewer reports. Among such factors are editors' expert knowledge and ability to assess different aspects of manuscripts, authors' replies, opinions of the editorial office or editorial board members, etc. The above-mentioned studies that researched editors' beliefs (Mustaine and Tewksbury, 2013: 391-393; Glonti et al., 2019: 2) both focused on editors of journals in particular scientific fields (sociology and criminal justice journals and biomedical journals) and observed only certain parts of the quality assessment processes within the editorial offices. With the intention of bridging this gap, I surveyed editors across scientific fields about their beliefs related to the most complex decisions in quality assessment processes. The aim of the qualitative study, which I have been conducting since 2015, was to gain an understanding of the various criteria that identify the quality of academic communication, with a focus on reputable scientific journals. The study focused on the examination of a) the criteria on the basis of which the quality of manuscripts is assessed; b) the processes established in the editorial offices for the purpose of quality assurance; and c) the factors influencing the quality assurance processes in the editorial offices.

2 Methods

The methodological orientation that underpinned the study was qualitative content analysis. This paper summarises the findings of the first part of the study, in which an online survey among experienced editors of reputable scientific journals was used to collect data on the quality criteria applied by the editorial offices during quality assessment protocols. On the basis of these results, the study continued in 2016 and later with online interviews that focused on an exploration of particular factors that were found to influence the processes established in the editorial offices.
Sampling

Based on a purposeful (criterion) sampling approach (with predetermined criteria of importance; Palinkas et al., 2015), the range of variation was narrowed to editors of journals for which there was an indication that they are familiar with developments in the field of quality standards, quality assessment processes and ethics in scientific journal publishing. For this reason, the sample was limited to editors of journals that are members of either the Committee on Publication Ethics (hereinafter: COPE) or the Directory of Open Access Journals (hereinafter: DOAJ). For more than two decades, COPE, an independent organisation of editors (and, to a lesser extent, publishers), has been supporting members in issues related to publication ethics and quality policies and highlighting best practices (COPE et al., 2018) in the quality assessment protocols of scientific journals. To ensure the participation of open access journals' editors in the survey, I also invited editors of the journals indexed in DOAJ, an independent community-curated online directory that indexes and provides access to high-quality, open access peer-reviewed journals. Participants were sampled from the COPE (at the time of sampling, more than 10,000 member journals) and DOAJ (more than 11,000 journals) online lists of member journals, which together made a population of approximately 21,000 journals. In August 2015, 1000 journals were randomly sampled from the online membership database of each organisation. For each randomly sampled journal, an additional online search was made to obtain an official email address of the journal's editor-in-chief or executive editor from the journal's official webpage. There were 1992 editors of scientific journals in the final sample, since 8 editors were randomly sampled from both databases. Potential participants were approached via the journal's official email address. Two invitation emails were sent, the first on 18th August 2015 and a reminder on 2nd September 2015. Out of the 1992 sampled editors, 258 (13 % of sampled potential participants) responded by starting to fill out the survey.
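The sampling and deduplication steps described above can be summarised in a minimal sketch, assuming the two member lists are available as local CSV exports; the file names, column names and seed below are hypothetical and not part of the study:

```python
import csv
import random

# Minimal sketch of the sampling step (hypothetical file and column names):
# draw 1000 journals at random from each organisation's member list and merge
# them, keeping an editor only once if sampled from both lists.
random.seed(2015)  # fixed seed only so the sketch is reproducible

def load_journals(path):
    """Read a member-list export with at least 'journal' and 'editor_email' columns."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

cope = random.sample(load_journals("cope_members.csv"), 1000)
doaj = random.sample(load_journals("doaj_journals.csv"), 1000)

sample = {}
for row in cope + doaj:
    key = row["editor_email"].strip().lower()  # deduplicate editors present in both draws
    sample.setdefault(key, row)

print(f"{len(sample)} unique editors in the final sample")  # 1992 in the study
```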
Data collection

The online survey was active between 6th August and 24th September 2015. The questionnaire, which consisted of 35 questions (suppl. File 1), was based on an initial review of the literature and further refined after informal pilot testing. In case respondents were editors of multiple journals, they were asked to respond from the viewpoint of the highest-ranked journal that they oversee as editors. Also, if respondents held different editorial positions (e.g. editor-in-chief, managing editor, associate editor, assistant editor, etc.) in multiple journals, they were asked to answer questions from the viewpoint of the highest position. The questionnaire covered the entire chain of quality assurance activities, from the initial check of the paper by the journal's editorial team, through setting the quality criteria (originality, validity, significance) and management of the peer review process, to the final decision on publication. The questionnaire consisted of a mix of open and closed-ended questions. Aiming to understand respondents' spontaneous associations, without input from the researcher, the questionnaire was designed so that it first posed an open-ended question, followed by a multiple-choice closed-ended question.

Data analysis

By means of content analysis, I analysed respondents' textual answers to open questions. The thematic analysis was conducted in two steps and entailed both deductive and inductive elements to gain a complete understanding of the topic. The first step included developing a preliminary coding framework. This was based on the analysis of existing research results and theory, which enabled me to delimit the area of research. In the second step, I used inductive coding (Gehman et al., 2018) and read and reread the text, developing codes by employing phrases used by the participants. I updated and revised the codebook continuously until no additional information was gathered from repeated coding. Where codes appeared in a patterned way, they became a theme (Vaismoradi et al., 2016). Data gathered in multiple-choice closed-ended questions was analysed using descriptive statistics. As the study seeks to provide a description of journal editors' perspectives and experiences, descriptive statistics were the primary data analysis tool.
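As a minimal illustration of the descriptive tallying applied to the coded open-ended answers (the codes and answers below are hypothetical), an answer may relate to several codes and then counts once towards each of them, which yields shares of respondents of the kind reported in the tables that follow:

```python
from collections import Counter

# Toy sketch of the descriptive tallying: each respondent's coded answer may
# carry several codes, and the respondent counts once towards every code.
coded_answers = [
    {"fit_to_scope", "language"},            # respondent 1
    {"fit_to_scope", "scientific_quality"},  # respondent 2
    {"plagiarism"},                          # respondent 3
]

counts = Counter(code for answer in coded_answers for code in answer)
n_answered = len(coded_answers)

for code, n in counts.most_common():
    print(f"{code}: n = {n} ({n / n_answered:.0%} of {n_answered} respondents)")
```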
3 Results

Respondents' characteristics. Out of the 258 respondents who started filling out the survey, 19 % (n = 49) were female. Participants were from 42 countries, while 39 % (n = 100) of them were from three countries: the USA (n = 57), the United Kingdom (n = 25), and Germany (n = 18). The participants were experienced senior scientists: 87 % (n = 221) had more than 10 years of working experience in science, while 66 % (n = 167) of all participants had more than 20 years. 53 % (n = 129) of the journals that participants oversaw were from STEM fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences), while 31 % (n = 76) were from the social sciences and 9 % (n = 21) from the humanities. More than half (n = 129, 53 %) of participating editors reported that they use double-blind peer review (a review in which the reviewers do not know who the authors are, and the authors do not know who the reviewers are). Single-blind review, where the reviewers know who the authors are, but the authors do not know who the reviewers are, was used by 37 % (n = 91) of respondents. Peer review in which reviewers know who the authors are and authors know who the reviewers are was used by 3 % (n = 7) of participants. Four (2 %) participants reported that they use open peer review (as opposed to anonymous peer review), which can be open-identity peer review (used by 2 participants), open access to the review together with open reviewers' identity (used by 1 participant), or an open invitation to peer review, where everyone can join and contribute (used by 1 participant). Nine participants (4 %) who answered that they use other types of peer review specified that they use a combination of the listed options/answer choices (e.g. anonymous single peer review published in open access; single-blind during review, with reviewer identity disclosed after acceptance).

Editorial initial check of the manuscript and author.

Participants were asked which quality assurance protocols they apply when they receive a manuscript for consideration for publication and how they divide work in the editorial office. The first question about the initial checks (q17) was open-ended, followed by two closed-ended questions. Table 1 (below) shows typical descriptions of individual initial check criteria (codes), as answered by the participants in an open question about initial checks (q17: When you receive a paper for consideration for publication in your journal, what do you generally check first?). The results showed that during the initial check phase, 56 % (n = 122) of participants examine whether a manuscript fits the scope of the journal and/or whether the topic of the manuscript is appropriate. 73 (34 %) respondents answered that they check whether the manuscript meets the criteria of a proper scientific study (general scientific quality), while a check of the format of the paper (especially whether technical guidelines are properly followed) was mentioned by 25 % (n = 54) of participants. Among the other criteria reported, participants wrote that they checked the quality of the language (13 %, n = 28) and the possibility of plagiarism (12 %, n = 26). Four participants (2 %) reported that they check whether the author had submitted the article to other journals in parallel (double submissions) and explained that they use an online service that compares submitted manuscripts against a large database of published material and provides editors with a summary report that highlights the similarity to previously published work. 7 % (n = 15) of the study participants mentioned that they first read the new manuscript: some read the entire content of the article, and some focus on the abstract and conclusions (results and/or discussion). 9 % (n = 20) of respondents also wrote that they try to determine whether the paper is original or whether it contains an element of novelty, even if this is not typically something that is checked in this phase. 6 % (n = 13) of respondents added that they perform author-related checks. Typically, they inspect the identity of the author, the affiliation of the author and their past publications of scientific and professional works (articles, books, etc.).
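The online similarity service mentioned above is a commercial, database-backed tool; purely as a toy illustration of the kind of text-similarity comparison such services perform (not the actual service, with entirely hypothetical texts and corpus), a crude word-vector cosine check might look like this:

```python
from collections import Counter
import math

# Toy illustration only: compare a submission against a tiny local corpus of
# already published texts. Real similarity-check services query huge databases
# with far more sophisticated matching; everything here is hypothetical.
def word_vector(text):
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    va, vb = word_vector(a), word_vector(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

published_corpus = {
    "paper_A": "originality and validity are central quality criteria in peer review",
    "paper_B": "editors rely on reviewers to assess the scientific soundness of manuscripts",
}
submission = "peer review relies on originality and validity as central quality criteria"

report = {title: round(cosine_similarity(submission, text), 2)
          for title, text in published_corpus.items()}
print(report)  # titles whose score exceeds an editorially chosen threshold would be flagged
```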
Table 1: Criteria that study participants use for the initial check of the paper (criterion (code), share of respondents, and the most common phrases used by participants to describe the criterion)

− Overall fit to the scope/aim; appropriateness of the topic (56 %, n = 122): »fits the scope of the journal in terms of the topic«, »compliance - topic«, »content generally matches the aim of the journal«, »falls into the area and is in the journal's standards«, »is the topic of the paper within the publishing strategy of the journal«, »topic matching with editorial policy«, »whether topic relates to the journal's profile«, »does the manuscript fit the mission of the journal«, »suitability (of the topic) for the journal«, »consistency with journal's scope«
− General scientific quality / paper meets the criteria of a proper scientific study (34 %, n = 73): »general level of academic quality«, »is it suitable to go online and be sent for review«, »level of scientific sophistication«, »check of scientific approach«, »read for the possibility of 'desk reject', as we are hugely oversubscribed«, »is it of scholarly standards«, »quality concerning the design of the study, the theoretical framework«
− Appropriate format of the paper (technical, according to guidelines) (25 %, n = 54): »check a format«, »formatting«, »journal's guidelines are followed by author«, »technical compliance with standards«, »check of overall structure«, »technical check«
− Overall quality of the language (13 %, n = 28): »quality of writing«, »language check«, »is it written in good English?«, »English«, »is it good enough (language)«, »properly written in good English«, »the correct grammar«, »spelling and grammar«
− Plagiarism checks (12 %, n = 26): »plagiarism check«, »authenticity check«, »I use cross-check to see whether the paper is original«, »I first check if the paper is original«, »submit it to Turnitin to check for plagiarism«

* Q17, n = 217 participants answered an open-ended question, 41 skipped. If participants listed more than one criterion, their responses were considered for all of the criteria to which the responses related.

In a following closed question (q18, n = 240, skipped = 18) about who performs the initial checks of the manuscript, with multiple selections possible among four answer options, 64 % (n = 154) of respondents reported that they do it themselves and/or that they pass it to members of the editorial team (40 %, n = 96). Out of the 8 % (20 participants) who selected the option 'other', 13 participants (5 %) explained that initial checks are done by other types of editors ('managing editor', 'editor in chief', 'themed/topical/subject editor', 'associate editor', 'co-editor'), while 4 participants (2 %) wrote that this is a task of the publisher's team ('publisher', 'administrator from publisher', 'editorial coordinator'). In a closed question (q19, n = 234, skipped = 24) about the content of initial checks (with multiple selections possible among seven answer options), 90 % (n = 210) of the participants selected the option 'analysis of the overall fit of the paper to the scope (or focus) of the journal' and 81 % (n = 189) the answer option 'general assessment whether the paper meets basic prerequisites for a proper scientific study'. 'General quality of the language in the paper' was selected by 73 % (n = 170) of the participants.
Other answer options were selected less frequently: the answer 'confirmation of the author's identity and credentials (educational background, other writings, experience)' was selected by 30 % (n = 70) of participants, and 29 % (n = 68) of respondents selected the answer 'confirmation of the author's affiliations (author's employer or an institution that pays the author for his work)'. Three participants (1.3 %) selected the answer option 'none of the above'. Out of the 25 respondents (10.7 %) who selected the option 'other', 7 respondents wrote that they perform plagiarism-related checks, and 3 respondents reported that ethical issues are checked (if relevant and for studies using human and animal subjects).

Definition of the criteria for publishing.

After establishing which quality assessment protocols the study participants apply during the initial phase, the study focused on the main qualities that editors are seeking in a scientific manuscript that they are considering for publication. The study examined which criteria the respondents use to assess whether the article is suitable for further procedure, i.e. to be sent for an expert review. The participants were asked an open-ended question (q20) about the main qualities they were looking for in the manuscripts. The participants identified four main criteria that they typically use to judge the quality of articles (see Table 2 below), which were coded under 4 main categories: 'originality' was the most widely named criterion (56 %, n = 116), followed by 'scientific attributes' (37.5 %, n = 78), 'validity' (32 %, n = 66) and 'importance/impact' (27 %, n = 56). Table 2 lists the most common terms used by participants to describe each criterion.

Table 2: Criteria used by participating editors to determine the quality of the manuscript (criterion (code), share of respondents, most common terms used to describe the criterion, and examples of participants' descriptions)

− Originality (56 %, n = 116). Most common terms: originality, novelty, innovation, added value, raises interest. Examples: »goes far beyond what has been published so far«, »innovative and/or raises new questions«, »contains novelty«, »addresses an interesting academic problem, brings new knowledge«, »describes a solution to an unsolved problem«, »cutting edge«, »original work«, »uniqueness of work«, »scientific innovation«, »non-conformist work that not only agrees with conventional knowledge and methods«, »critical position«
− Scientific attributes (37.5 %, n = 78). Most common terms: compliance with scientific standards, contains a scientific method, scientific soundness. Examples: »scientific standards«, »adherence to standards«, »robust experimental design«, »appropriate statistical analysis«, »scientific value«, »quality of arguments expressed«, »high standards of presentation of results, discussion and conclusion«, »quality of critical analysis«, »solid science«
− Validity (32 %, n = 66). Most common terms: validity, reliability, credibility and correctness, trustworthiness, clarity, relevance, objectivity. Examples: »clarity, readability«, »academic rigour«, »conceptual consistency«, »scientific correctness«, »reliability of the proposed theory and experiments«, »author's understanding of the topic«, »clear and coherent argumentation«, »appropriate discussion, real arguments«
− Importance/impact (27 %, n = 56). Most common terms: influence, importance, impact, relevance, promotes the development (of scientific field/knowledge). Examples: »represents major advances in the field or contains data that could trigger important research in the future«, »represents a contribution to existing literature«, »represents improvement, either technical or conceptual«, »pushes forward«, »contains something beyond (only) repetition«, »promotes the development of (theoretical and empirical) knowledge«, »convincing justification of the contribution to science«, »very useful in practice«
* Q20, n = 207 participants answered an open-ended question, 51 skipped. The participants were invited to list multiple qualities; their responses were considered for all of the criteria to which the responses related.

Regarding the quality criteria that participating editors use when they consider a manuscript for publication, we also asked a closed question (q21) in which we offered 12 designations (descriptors) for different quality criteria, which are most often found in the literature. Participants were asked to select the three labels (descriptors) that they considered most important (q21: After establishing that a paper fits the scope of the journal and meets other initial requirements, 3 of the most important criteria for publishing are … (please choose three of the most significant criteria)). The possible answers were listed sequentially without further structuring (list format), and the respondents were asked to choose three of them. Based on the literature review, the most common descriptors for the three major criteria were:
− Originality: 'originality of overall paper', 'added value to existing knowledge', 'innovativeness of the research';
− Validity: 'validity of overall paper', 'reliability of the research and conclusions', 'credibility of the research and conclusions', 'trustworthiness of the research and conclusions', 'appropriateness of the methodology used and conclusions', 'clarity of the discussion in the paper';
− Importance/significance: 'significance of the research', 'importance of the issues/topics addressed in paper', 'up-to-date topic of the research'.

Figure 1: Most important criteria for publishing papers in their journals, according to the study participants (Q21, n = 232, skipped = 26). [Bar chart: originality of overall paper 63 %; innovativeness of the research 29 %; added value to existing knowledge 51 %; validity of overall paper 24 %; reliability of research and conclusions 27 %; credibility of the research and conclusions 19 %; trustworthiness of the research and conclusions 10 %; appropriateness of the methodology used and conclusions 39 %; clarity of the discussion in the paper 27 %; significance of the research 31 %; importance of the issues/topics addressed in paper 21 %; up-to-date topic of the research 8 %; nothing of the listed 1 %; other 7 %.]

The participants most often chose two descriptors, namely 'originality of overall paper' (63 %, n = 146) and 'added value to existing knowledge' (51 %, n = 119). The descriptor 'innovativeness of the research' was chosen by 29 % (n = 68) of participants (Figure 1). Among the descriptors related to the validity criterion, the most frequently selected designation was 'appropriateness of the methodology used and conclusions' (39 %, n = 90), followed by 'clarity of the discussion in the paper' (27 %, n = 62) and 'reliability of research and conclusions' (27 %, n = 62).
The descriptors 'validity of overall paper' (24 %, n = 56), 'credibility of the research and conclusions' (19 %, n = 45) and 'trustworthiness of the research and conclusions' (10 %, n = 24) were chosen less frequently. The descriptor 'significance of the research' was the one most often chosen (31 %, n = 71) among the descriptors related to the criterion of importance/significance. The descriptor 'importance of the issues/topics addressed in paper' was chosen by 21 % (n = 48) of the participants, and the descriptor 'up-to-date topic of the research' was chosen by only 8 % (n = 18) of the participants. The criterion of originality (with three designations) and the criterion of validity (with six designations) were chosen equally often, while the criterion of importance/significance (with three designations) was chosen significantly less often (ratio 1 : 0.42) than the first two criteria.

Regarding publication criteria, participants were also asked, in an open-ended question, to consider the highest quality manuscripts they had published in their careers as editors. The question (q22) was focused on how (in what respects) these manuscripts differed from the others that they received for review and evaluation. The participants were encouraged to describe the qualities that made these articles exceptional. The respondents agreed that these manuscripts were not different in the sense that they had other qualities, but that they were »simply better«. 'Better' means that the manuscripts had all the qualities that the participants are looking for (quality criteria: originality, validity, significance), and at the same time, all these qualities were rounded off into a meaningful and holistic piece of writing. The participants further clarified that these are papers that:
− were of good quality already in the version that was submitted at the beginning, even before the discussion in the editorial office (»it was obvious that this was not one of those papers that were written because the author had to publish something«, »it was an author in the 'mature phase'«);
− were well received by the reviewers, who unanimously and unambiguously rated them as excellent;
− were well written and argued (»the author masters the language«, »good interpretation«, »it is obvious that the author masters the subject, that he speaks with authority«);
− addressed a topic that needed to be addressed (»because it is 'currently hot'«, »cuts to the heart of the problem«, »an unexplored topic or area that is poorly understood«);
− contained useful results/conclusions (either useful in practice, not only in theory, or usable immediately/quickly; useful for the further development of the theory; or important and interesting to other publics, not just to academics);
− had the potential to change or improve something (»they interfered with established dogmas«).

Management of the peer review process. Regarding the criteria on the basis of which the respondents selected reviewers, the participating editors were asked an open-ended question (q25, n = 208, skipped = 50) about the last five papers that they processed. 18 % (n = 37) of the respondents reported that they did not send any of these manuscripts into the peer review process (all were desk rejections), while 82 % (n = 171) did.
The respondents who sent at least one of the last five papers to peer review were asked about the process of selecting reviewers. Based on their answers, I identified inclusion and exclusion criteria as well as the stakeholders that assisted participating editors in the selection of reviewers. The inclusion criteria (desired qualities, knowledge and competencies of reviewers) mentioned by the participants were as follows:
− qualification and expertise in a scientific field (the reviewer »masters the topic of the article«, has »similar scientific interests«, »is highly qualified in this field«);
− skill, competence, and experience with review work (»writes constructive and insightful critique«, the reviewer's »past work is good«);
− professionalism and willingness to cooperate (»is serious and objective«, »shows enthusiasm«, »is willing to cooperate«).

When asked about exclusion criteria, the participants stressed that the reviewer must be independent of the author. Independence means that the reviewer may not work at the same institution as the author and must not be professionally or privately in a close relationship with the author (e.g. the reviewer has not published any article co-authored with the author). Stakeholders who assisted in selecting reviewers were, according to the respondents' answers, as follows:
− the editorial board or assistant editors;
− experts in individual scientific fields (also: the »social network of the editorial board«);
− the author, whom editors invited to propose a reviewer (also: the participant reviewed the citations in the author's article and sought a reviewer among the experts cited by the author);
− a competent colleague who knows experts in a particular scientific field and whose help the editor asked for;
− an electronic database of names of experts in individual scientific fields offered by the journal's publisher (also: the editorial board has an internal classification system of reviewers based on their expertise in specific scientific fields).

When asked in a closed question how they ensure that the reviewers (of the last five manuscripts that they processed) are aware of the journal's quality assessment criteria (q26, n = 176, skipped = 82), 62 % (n = 109) of the participants selected the answer option that 'the journal has prescribed guidelines that must be followed by both the editor and the reviewers' (see Figure 2). Approximately the same proportion of participants (60 %, n = 106) answered that they 'relied on the professionalism of the reviewer and believed that the reviewer had the same criteria for assessing the quality of the articles as themselves'. 11 % (n = 20) of the participants answered that they had consulted reviewers on these criteria, and 3 % (n = 5) of the participants indicated that the reviewers attended training (seminars, workshops) that addressed the criteria for assessing the quality of articles. 27 respondents (15 %) selected the answer option 'other'. 8 participants (5 %) specified that they read and checked reviews and ruled out inappropriate reviews.
Five participants (3 %) wrote that they know the reviewers personally, while two participants wrote that they ensured compliance with the criteria through a detailed form, in which the criteria were described and which they sent to the reviewer as the basis for the reviewer's assessment of the article.

Figure 2: Ways of ensuring that reviewers use the same quality assessment criteria as the participating editors (q26, n = 176, skipped = 82). [Bar chart: journal has prescribed guidelines for reviewers 62 %; relied on the professionalism of the reviewer 60 %; other 15 %; consulted with reviewers 11 %; reviewers attended trainings 3 %.]

When they encountered a discrepancy between the reviewers' reports in the last five papers that they processed (q27, n = 161, skipped = 97), 58 % (n = 93) of the participants commented in an open-ended question that they approached this situation by rereading the manuscript and re-evaluating the credibility of the reviewers' reports, and then made a final decision regarding publication. 30 % (n = 48) of the participants reported that in such situations they usually looked for a new, additional professional reviewer, while 21 % (n = 34) of the participants consulted the editorial team or external experts about the manuscript and obtained extra reviews (from other professional editors, co-editors, assistant editors, the editorial board, or recognised experts in the field of the topic of the manuscript). Twelve participants (7.5 %) discussed such cases with the reviewers of the manuscript and/or the author.

Final decision regarding publication. To understand how complex (or difficult), on average, the decision to accept or reject a manuscript is, the participants were again asked to recall the last five manuscripts that they considered for publication. For these five manuscripts, they were asked to establish whether it was difficult to:
− either accept and submit these manuscripts for review or reject them after the initial check ('immediate rejections', 'desk rejections') (q23, n = 224, skipped = 34), and
− either accept them for publication or refuse to publish them after the participants had received the reviewers' reports (q29, n = 221, skipped = 37).

For half of the study participants (n = 111; see Figure 3), the decision on whether to reject these five manuscripts or send them for review was easy, and for 9 % (n = 20) very easy, while 21 % (n = 47) of the participants reported that they found it difficult, or they were not sure or were neutral regarding these decisions (19 %, n = 43). The final decision regarding publication was not difficult for most of the participants (the 'neutral', 'easy' and 'very easy' groups together represent 78 % of the participants, n = 172). Both decisions were almost equally 'simple' for the participants. In only 21 % (n = 47 for desk rejections, n = 46 for post-review) of cases did the participants find it difficult to decide whether an article meets the quality criteria. There are almost no very difficult decisions in evaluating the quality of articles (either after the initial check or post-review), or their share is negligible (0.45 %).

Figure 3: Respondents' perception of the level of difficulty of decisions to accept a manuscript for review (q23, n = 224, skipped = 34) and for publication (q29, n = 221, skipped = 37). [Bar chart, accept for review: difficult 21 %, neutral 19 %, easy 50 %, very easy 9 %; accept for publication: difficult 21 %, neutral 25 %, easy 49 %, very easy 5 %.]
When asked why some decisions regarding desk rejections and the final publication of manuscripts are more difficult (open-ended question, q24, n = 181, skipped = 77), the participants highlighted that their final decision regarding the publication of a particular manuscript is more complex if they are not experts in the specific scientific field/topic addressed in the manuscript. In such cases, they most often turn to the editorial team. Regarding the assessment of the credibility of the reviewers' reports, the participants commented that they try not to base their decisions regarding publication on reports that are either biased or too subjective. Participating editors emphasised that they know they are responsible for the final decision. Therefore, regardless of the consultations with other stakeholders (reviewers, members of the editorial team, etc.), the most important thing is their final professional assessment (and often also a feeling, an intuition). They also mentioned that these are the hardest decisions in their line of work.

According to the participants, the difficulty of assessing the quality of the article is mainly influenced by three factors:
− A clearly defined minimum threshold of required quality. The participants who were forced to set a threshold of the lowest required quality (for example, due to the high number of immediate rejections) reported that setting this threshold contributed to their decision-making efficiency. (»The threshold is so high that it is easy to determine when manuscripts don't reach it.«, »The easiest way is to exclude those articles for which there is no hope.«) Editors who help and encourage authors to improve their manuscripts and work with them have more difficulty in decision-making.
− Consistency of assessment between individual quality criteria. If participants judge that the quality of an article is low or high based on all or most of the criteria (originality, validity, significance), then their decision is easy. It is harder for the participants to decide in cases where there is inconsistency in the evaluation of criteria, for example, if three criteria are above average and one is completely unsatisfactory: the manuscript is very original and has the attributes of scientific work but is poorly written. (»The most difficult are borderline cases, such as articles strong in theory and weak in method, and vice versa.«)
− Experience. The participants explained that their experience contributed significantly to making decisions. Decisions are significantly easier after a few years of editorial work. (»The first year, when I didn't have a basis for comparison and didn't know what to expect, was the hardest.«)

4 Discussion and conclusion

By means of a qualitative semi-structured survey, this study provided an in-depth exploration of how editors of scientific journals across scientific disciplines perceived the quality assessment process (including initial checks of the manuscript and author, the peer review process and the final decision regarding publication), how they defined the quality criteria they use for the evaluation of manuscripts, and which factors they believed influenced the complexity of editorial decisions.
The study participants were sampled from the COPE and DOAJ online member databases. Out of the 1992 editors in the final sample, who were approached via the official journal email, 258 (13 % of the sampled potential participants) responded by starting to fill out the survey. A vast majority of the respondents were male (n = 209, 81 %). While the participants were from 42 countries, 39 % (n = 100) of them were from three countries: the USA (n = 57), the United Kingdom (n = 25), and Germany (n = 18). The participants were experienced editors: 66 % (n = 167) had more than 20 years of working experience in science, while another 21 % had 11 to 20 years of experience. The respondents were editors of journals covering all Frascati-classified academic fields, among which the social sciences (n = 76, 31 %), medical and health sciences (n = 56, 23 %) and natural sciences (n = 49, 20 %) were the three most represented in the study.

The results indicate that there are five general initial checks performed by the participants or their teams when they first receive a manuscript. As indicated in previous studies (Miller and Perrucci, 2001: 98-99; Mustaine and Tewksbury, 2013: 393-397), in the initial phase of the quality assessment of the manuscript the editors most commonly examined the overall fit of the manuscript to the scope of the journal. The 'fit to scope' criterion was also the one most often reported by the editors in this study and was selected by 90 % (n = 210) of the respondents. In the open-ended question, the editors used the following descriptors for this criterion (selected examples): »compliance - topic«, »content generally matches the aim of the journal«, »Is the topic of the paper within the publishing strategy of the journal?«, »topic matching with editorial policy«, »whether topic relates to the journal's profile«, »does the manuscript fit the mission of the journal«, »consistency with journal's scope«. The second most often selected criterion (81 %, n = 189) was 'general (scientific) quality of the article'. The respondents described it as (examples): »general level of academic quality«, »level of the scientific sophistication«, »check of scientific approach«, »is it of scholarly standards«, »quality concerning the design of the study, the theoretical framework«. The quality of the language was checked by 73 % (n = 170) of the participants. This contradicts the findings of Mustaine and Tewksbury in their 2013 study, where they concluded that editors do not see the quality of writing of a manuscript as among the primary factors in their decisions. The language quality criterion was described by the respondents with the following examples: »quality of writing«, »language check«, »is it written in good English?«, »is it good enough (language)«, »properly written in good English«, »the correct grammar«. In an open-ended question, the respondents also highlighted the 'appropriate format of the manuscript' (i.e., correct citation of the bibliography and technical compliance with the guidelines of the journal) as an important criterion in the initial check. It was described as (examples): »check a format«, »formatting«, »technical compliance with standards«, »check of overall structure«. Significantly smaller was the share of the respondents who performed verification of the author's identity (30 %, n = 70) and/or affiliation with a certain institution (29 %, n = 68) as part of the initial check.
In an open-ended question, the respondents also listed other types of checks that they regularly perform, such as a 'plagiarism' check, an 'ethical issues' check (if human or animal subjects were involved in the study) and a 'double submissions' check. The study results also showed that the initial check of the paper is performed either by the editors (64 %, n = 154) and/or by members of the editorial team (40 %, n = 96).

As a further key finding, this study showed that, according to the respondents, originality was the most important criterion they used to assess the quality of scientific articles. For originality, participants most often chose two descriptors, namely 'originality of overall paper' (63 %, n = 146) and 'added value to existing knowledge' (51 %, n = 119). When participants described a manuscript as 'original' in an open-ended question, they used the following descriptors: the paper had 'added value' and/or contained 'novelty' or 'new knowledge'. Among the descriptors related to the validity criterion, the most frequently selected designation was 'appropriateness of the methodology used and conclusions' (39 %, n = 90), followed by 'clarity of the discussion in the paper' (27 %, n = 62) and 'reliability of research and conclusions' (27 %, n = 62). The descriptor 'significance of the research' was the one most often chosen (31 %, n = 71) among the descriptors related to the criterion of importance/significance. The criterion of originality (with three designations) and the criterion of validity (with six designations) were chosen equally often, while the criterion of importance/significance (with three designations) was chosen significantly less often (ratio 1 : 0.42) than the first two criteria.

When describing the highest quality papers they had published in their careers, the respondents highlighted that these articles were not different in the sense that they had qualities other than those already mentioned, but that these qualities were rounded off into a meaningful and holistic piece of writing. The findings indicate that these were articles that were already good in the version submitted at the beginning, even before the discussion in the editorial office, and which were well received by the reviewers and unanimously and unequivocally rated as excellent. The articles were, without exception, well written and well argued, addressed a topic that needed to be addressed, contained useful results or conclusions, and had the potential to change or improve something.

Regarding the selection of reviewers for an individual manuscript, the study results show that respondents gave preference to reviewers who were well established in the particular scientific field and had a larger number of relevant publications. They favoured reviewers who had experience with review work and were constructive, objective, and accurate in their critiques. The respondents pointed out that there is a shortage of reviewers who are willing to participate and who show seriousness and diligence. Participants avoided reviewers who are either privately or professionally closely connected to the author(s). Previous research (Mustaine and Tewksbury, 2013: 395) reported several methods that editors use to identify reviewers, which were confirmed in this study.
When searching for reviewers, the respondents used online electronic databases of reviewers, usually offered by the journal's publisher. In some cases, the respondents invited authors to suggest reviewers for their paper, or the respondents found a reviewer among those cited by the author in the manuscript. The respondents reported that they were assisted in the reviewer selection by members of the editorial team and, less often, of the editorial board. The study showed that discussions about quality criteria between the respondents and reviewers were rare; only 11 % (n = 20) of the respondents engaged in such debates. Similarly, only 5 respondents (3 %) reported that reviewers attended training (seminars, workshops) that addressed the criteria for assessing the quality of articles. When asked about the reviewers' awareness of the journal's quality standards, 62 % (n = 109) of the respondents reported that 'the journal has prescribed guidelines that must be followed by both the editor and the reviewers' and 60 % (n = 106) of the respondents that they 'relied on the professionalism of the reviewer and believed that the reviewer had the same criteria for assessing the quality of the articles as themselves'.

The study findings indicate that the most complex decisions that editors faced during the quality assessment of manuscripts arose in situations where there were inconsistencies in the reviewers' reports (e.g., when the reviews differed significantly). In such cases, participating editors responded either by: i) rereading the article, re-evaluating the credibility of the reports and making an independent, final decision (58 %, n = 93), ii) finding a new, additional expert reviewer (30 %, n = 48) or iii) consulting the editorial team or external experts (21 %, n = 34). Less frequently, the participants decided to talk to the reviewers or author(s). Their final decision was more difficult if they were not experts in the specific scientific field of the paper. In such cases, they consulted the members of the editorial team/board. This is an example of the engaging role of the editor (Newton, 2010: 140), in which the editor actively contributes to the quality of the process by altering the power relationship between the reviewer and the author. The results, to some extent, support the observations of Newton (2010: 140) that the editor's role can vary from mechanical to engaging; however, they also indicate that this role is transformative: in situations where decisions are easy, editors intervene less and defer to the reviewers' quality assessments, while in 'complex' situations, as described above, they actively engage. Previous research indicated that editors are aware of their own role as the 'ultimate decision makers' (Glonti et al., 2019), even though they sometimes perform their role, to some extent, mechanically (Newton, 2010: 140). Concerning final editorial decisions, this study revealed how challenging the respondents found such decisions. The results show that the decision on whether to reject manuscripts or send them for review was difficult or very difficult for only 21 % (n = 47) of the respondents. The final decision regarding publication was not difficult for most of the participants (the 'neutral', 'easy' and 'very easy' groups together represent 78 % (n = 172) of the participants). We recognise a number of limitations in our study.
Of all stakeholders involved in the quality assessment of manuscripts, this study was limited to the editors' perceptions. There were further limitations in our sampling, since we focused only on editors of journals that were members of COPE and/or indexed in DOAJ. All participating editors were experienced editors-in-chief; hence the generalisation of the study findings is also limited in this respect. The size of our sample was relatively small: there are currently a little fewer than 17 thousand journals indexed in the DOAJ database, while COPE has slightly more than 13 thousand journal members. Since the respondents reported on their own practices, the results might be influenced by socially desirable answering or by inconsistencies between what the editors stated about their use of quality criteria and management of quality assurance processes and how they actually acted or what they experienced in their everyday editorial work. We tried to mitigate such risks by using a combination of open-ended and closed-ended questions to detect possible inconsistencies in answers and by giving the respondents exemplary short quotes. This study provides context for editors' perception and understanding of roles, tasks and responsibilities related to the publication of research results in reputable journals. Having identified which criteria editors of scientific journals apply to distinguish between manuscripts that they believe should be published (or initially rejected/rejected after peer review) and which challenges they faced during quality assessment processes, this study aids in understanding critical decisions and obstacles in editors' quality assessment management. These key findings offer insights into how these issues can be addressed.

References

ALLEA (2017) The European Code of Conduct for Research Integrity. Retrieved from https://allea.org/code-of-conduct/.
COPE - Committee on Publication Ethics, DOAJ - Directory of Open Access Journals, OASPA - Open Access Scholarly Publishers Association and WAME - World Association of Medical Editors (2018) Principles of Transparency and Best Practice in Scholarly Publishing. Retrieved 16th September 2021, from https://doaj.org/bestpractice.
Council of Science Editors (2020 – last update) White Paper on Publication Ethics: CSE's White Paper on Promoting Integrity in Scientific Journal Publications. Retrieved 16th September 2021, from http://www.councilscienceeditors.org/resource-library/editorial-policies/white-paper-on-publication-ethics/.
Elsevier and Sense About Science (2019) Quality, Trust and Peer Review: Researchers Perspectives 10 Years On. Retrieved 16th September 2021, from https://senseaboutscience.org/wp-content/uploads/2019/09/Quality-trust-peer-review.pdf.
Gehman, J., Glaser, G. L., Eisenhardt, K. M., Gioia, D., Langley, A., Corley, K. G. (2018) Finding Theory–Method Fit: A Comparison of Three Qualitative Approaches to Theory Building, Journal of Management Inquiry, 27(3), p. 284–300. Retrieved from: https://doi.org/10.1177/1056492617706029.
Glonti, K., Boutron, I., Moher, D. and Hren, D. (2019) Journal Editors' Perspectives on the Roles and Tasks of Peer Reviewers in Biomedical Journals: A Qualitative Study, BMJ Open, 9(11), e033421. Retrieved from: https://doi.org/10.1136/bmjopen-2019-033421.
Harley, D., Acord, S. K. and King, C. J. (2010) Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines (Berkeley, CA: University of California). Retrieved 16th September 2021, from https://escholarship.org/uc/item/15x7385g.
Hausmann, L., Murphy, S. P. (2016) The Challenges for Scientific Publishing, 60 Years On, Journal of Neurochemistry, 139, p. 280–287. Retrieved from: https://doi.org/10.1111/jnc.13550.
ICMJE - International Committee of Medical Journal Editors (2019) Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated December 2019. Retrieved 16th September 2021, from http://www.icmje.org/recommendations.
Miller, J., Perrucci, R. (2001) Back Stage at Social Problems: An Analysis of the Editorial Decision Process, 1993–1996, Social Problems, 48(1), p. 93–110. Retrieved from: https://doi.org/10.1525/sp.2001.48.1.93.
Mulligan, A., Hall, L. and Raphael, E. (2013) Peer Review in a Changing World: An International Study Measuring the Attitudes of Researchers, Journal of the American Society for Information Science and Technology, 64(1), p. 132–161. Retrieved from: https://doi.org/10.1002/asi.22798.
Mustaine, E. E., Tewksbury, R. (2013) Exploring the Black Box of Journal Manuscript Review: A Survey of Social Science Journal Editors, Journal of Criminal Justice Education, 24(3), p. 386–401. Retrieved from: https://doi.org/10.1080/10511253.2012.759244.
Newton, D. P. (2010) Quality and Peer Review of Research: An Adjudicating Role for Editors, Accountability in Research, 17(3), p. 130–145. Retrieved from: https://doi.org/10.1080/08989621003791945.
Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N. and Hoagwood, K. (2015) Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research, Administration and Policy in Mental Health and Mental Health Services Research, 42(5), p. 533–544. Retrieved from: https://doi.org/10.1007/s10488-013-0528-y.
Primack, R. B., Regan, T. J., Devictor, V., et al. (2019) Are Scientific Editors Reliable Gatekeepers of the Publication Process?, Biological Conservation, 238, 108232. Retrieved from: https://doi.org/10.1016/j.biocon.2019.108232.
Severin, A., Chataway, J. (2020) Purposes of Peer Review: A Qualitative Study of Stakeholder Expectations and Perceptions [Preprint], SocArXiv. Retrieved from: https://doi.org/10.31235/osf.io/w2kg4.
Severin, A., Chataway, J. (2021) Overburdening of Peer Reviewers: A Multi-Stakeholder Perspective on Causes and Effects, Learned Publishing, 34(4), p. 537–546. Retrieved from: https://doi.org/10.1002/leap.1392.
Turcotte, C., Drolet, P. and Girard, M. (2004) Study Design, Originality and Overall Consistency Influence Acceptance or Rejection of Manuscripts Submitted to the Journal, Canadian Journal of Anesthesia/Journal Canadien D'anesthésie, 51(6), p. 549–556. Retrieved from: https://doi.org/10.1007/bf03018396.
Valkenburg, G., Dix, G., Tijdink, J. et al. (2021) Expanding Research Integrity: A Cultural-Practice Perspective, Science and Engineering Ethics, 27, 10. Retrieved from: https://doi.org/10.1007/s11948-021-00291-z.
Vaismoradi, M., Jones, J., Turunen, H. and Snelgrove, S. (2016) Theme Development in Qualitative Content Analysis and Thematic Analysis, Journal of Nursing Education and Practice, 6(5), p. 100. Retrieved from: https://doi.org/10.5430/jnep.v6n5p100.
Virlogeux, V., Trépo, C. and Pradat, P. (2018) The Growing Dilemma of Peer Review: A Three-Generation Viewpoint, European Science Editing, 44(2), p. 32–34. Retrieved from: https://doi.org/10.20316/ESE.2018.44.17019.
Vrana, R. (2018) Editorial Challenges in a Small Scientific Community: Study of Croatian Editors, Learned Publishing, 31(4), p. 369–374. Retrieved from: https://doi.org/10.1002/leap.1188.
Warne, V. (2016) Rewarding Reviewers - Sense or Sensibility? A Wiley Study Explained, Learned Publishing, 29(1), p. 41–50. Retrieved from: https://doi.org/10.1002/leap.1002.
Zaharie, M. A., Osoian, C. L. (2016) Peer Review Motivation Frames: A Qualitative Approach, European Management Journal, 34(1), p. 69–79. Retrieved from: https://doi.org/10.1016/j.emj.2015.12.004.