
Journal of Clinical Epidemiology 68 (2015) 257-265

A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology

James Galipeau a,*, David Moher a,b, Craig Campbell c, Paul Hendry b, D. William Cameron a,b, Anita Palepu d, Paul C. Hebert e

a Clinical Epidemiology Program, Ottawa Hospital Research Institute, Centre for Practice Changing Research Building (CPCR 1), The Ottawa Hospital - General Campus, 501 Smyth Road, PO Box 201B, Ottawa, Ontario, Canada, K1H 8L6
b Department of Epidemiology & Community Medicine, Faculty of Medicine, University of Ottawa, 451 Smyth Rd., Ottawa, Ontario, Canada, K1H 8M5
c Office of Professional Affairs, The Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario, Canada, K1S 5N8
d Department of Medicine, Centre for Health Evaluation and Outcome Sciences, University of British Columbia, 588-1081 Burrard Street, St. Paul's Hospital, Vancouver, BC, Canada, V6Z 1Y6
e Department of Medicine, Centre hospitalier de l'Universite de Montreal (CHUM), Hopital Notre-Dame, 1560, rue Sherbrooke Est, Montreal, Quebec, Canada, H2L 4M1

Accepted 4 September 2014; Published online 7 November 2014

Abstract

Objectives: To investigate whether training in writing for scholarly publication, journal editing, or manuscript peer review effectively improves educational outcomes related to the quality of health research reporting.

Study Design and Setting: We searched MEDLINE, Embase, ERIC, PsycINFO, and the Cochrane Library for comparative studies of formalized, a priori-developed training programs in writing for scholarly publication, journal editing, or manuscript peer review. Comparators included the following: (1) before and after administration of a training program, (2) between two or more training programs, or (3) between a training program and any other (or no) intervention(s). Outcomes included any measure of effectiveness of training.

Results: Eighteen reports of 17 studies were included. Twelve studies focused on writing for publication, five on peer review, and none fit our criteria for journal editing.

Conclusion: Included studies were generally small and inconclusive regarding the effects of training of authors, peer reviewers, and editors on educational outcomes related to improving the quality of health research. Studies were also of questionable validity and susceptible to misinterpretation because of their risk of bias. This review highlights the gaps in our knowledge of how to enhance and ensure the scientific quality of research output for authors, peer reviewers, and journal editors. © 2015 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-SA license (http://creativecommons.org/licenses/by-nc-sa/3.0/).

Keywords: Author; Peer review; Editor; Systematic review; Journalology; Manuscript

Transparency declaration: The lead author (manuscript's guarantor) affirms that this manuscript is an honest, accurate, and transparent account of the study being reported, that no important aspects of the study have been omitted, and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Funding: This research was funded by the Canadian Institutes of Health Research (#278874). The funder has no role in the design, collection, analysis, and interpretation of the data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.

D.M. is funded by a University Research Chair. D.W.C. is supported by the Department of Medicine, University of Ottawa at The Ottawa Hospital.

Conflict of interest: None.

* Corresponding author. Tel.: 613-737-8899 x73830; fax: 613-739-6938.

E-mail address: jgalipeau@ohri.ca (J. Galipeau).

1. Introduction

There is growing concern that the quality of conduct and reporting of health care research is much lower than it has historically been believed to be. Research by Chalmers and Glasziou [1] estimates that nearly $100 billion is lost to "waste" in biomedical research globally each year, including waste related to the poor reporting of studies in the published literature, highlighting the consequences of poor quality and rigor. For example, over 30% of trial interventions are not sufficiently described, over 50% of planned study outcomes are not reported, and most new research is not interpreted in the context of a systematic assessment of other relevant evidence [2]. The launch and growth of publication ethics and quality-focused organizations such as the Committee on Publication Ethics, the World Association of Medical Editors (WAME), and the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network reflects a recent movement toward improving the integrity of reports of health care research through a focus on journalology [3] (ie, the scientific process of writing for publication, manuscript peer reviewing, and scientific journal editing and publishing).


What is new?

Key findings

• Studies evaluating training programs on writing for publication for authors were limited in number and poor in quality.

• Studies evaluating training programs for peer reviewers were limited in number and of higher quality but did not find any interventions to be effective.

• We were unable to find any studies related to the effectiveness of training programs for editors.

What this adds to what was known?

• This systematic review supports the assertion in previous studies and editorial pieces that little is known about how to effectively train authors, peer reviewers, and editors. The importance of this finding is that it more reliably establishes the lack of an evidence base for effective training methods to ensure the scientific quality of research output.

What is the implication and what should change?

• Research articles published in journals are likely to continue to be of pivotal importance in disseminating knowledge to a wide variety and large number of readers. Central to this process is ensuring that writers have the necessary skills to develop transparent, complete, and timely reports of their research and that appropriately trained peer reviewers and editors thoroughly assess manuscripts before publication.

• Future research in journalology should be focused on developing more and better quality primary studies on training for authors, peer reviewers, and editors, as well as exploring and studying new methods for training professionals in all areas of journalology, including offering courses in this area.

• The development of core competencies, particularly for peer reviewers and editors, should be a priority because of the central role that these groups play in determining what is accepted as credible and rigorous scientific research by virtue of being published in an academic journal.


Despite a slowly growing response to the repeated calls for better, more numerous training opportunities for manuscript authors, peer reviewers, and editors [4,5], there is still a large gap in knowledge about the quality, integrity, and impact of these practices. For example, a 2008 systematic review by Jefferson et al. [6] found no evidence that the training of peer reviewers has any effect on the quality of the outcome. The consequences are far-reaching and serious: "Incomplete and biased reporting has resulted in patients suffering and dying unnecessarily [7]. Reliance on an incomplete evidence base for decision making can lead to imprecise or incorrect conclusions about an intervention's effects. Biased reporting of clinical research can result in overestimates of beneficial effects [8] and suppression of harmful effects of treatments [9]. Furthermore, planners of new research are unable to benefit from all relevant past research [10]."

The objective of this project was to systematically review, evaluate, and synthesize information on formal educational opportunities aimed at ensuring the scientific quality of research output. More specifically, we investigated whether structured, a priori-developed training opportunities in journalology effectively improve educational outcomes for authors, peer reviewers, and editors.

2. Methods

This review is reported according to the guidelines of the PRISMA statement [11] (see Appendix A at www.jclinepi.com). We published a protocol [12] before conducting the review, and there were two deviations from the protocol. First, because of a lack of resources, administrators of training opportunities in journalology identified in a separate Google-based "environmental scan" (part of a larger project) were not contacted as part of our gray literature search to inquire whether they were aware of any published or unpublished evaluations of their training opportunities. Second, one of the criteria assessing the validity of evaluations originally included in the protocol (ie, "whether sampling for comparison 2 and 3 occurred in the same time frame") was removed from the list after it was deemed irrelevant to the evaluation of this research.

2.1. Search strategy

We conducted a comprehensive three-phase search to identify evaluations of formal training opportunities, as follows:

1. Using the Ovid interface, we performed a search of MEDLINE In-Process and Non-Indexed Citations, MEDLINE, Embase, ERIC, PsycINFO, and the databases of the Cochrane Library, all from 1990 to January 2013. A specific search strategy (see Appendix B at www.jclinepi.com) was developed in conjunction with an information specialist and was peer reviewed before execution [13]. There were no language restrictions on the search strategy; however, because of the large expected yield of the planned review and the limited resources available, potential evaluations encountered in languages other than English were set aside and are included in Appendix C at www.jclinepi.com. Letters, commentaries, and editorials were not excluded because of the possibility that they may contain references to evaluations of particular training programs. Studies were not excluded based on publication status (ie, published vs. unpublished).

2. For training that was described in published reports, citations of these reports were forward searched using the Scopus citation database.

3. A gray literature search was conducted, consisting of screening the reference sections of included studies and other well-known publications on the topic that did not fit our inclusion criteria.

2.2. Selection criteria and process

We selected studies according to inclusion criteria related to the population, intervention, comparator, outcomes, and study design. Our population of interest consisted of those centrally involved in writing for scholarly publication, manuscript peer review, and journal editing (ie, authors, peer reviewers, journal editors) or any other group that may be peripherally involved in the scientific writing and publishing process, such as medical journalists. Interventions involved evaluations of formal training in any specialty or subspecialty of writing for scholarly publication, manuscript peer review, or journal editing targeted at the designated population(s). Potential journalology training opportunities meeting all the following criteria were included: (1) focused on educational opportunities aimed at ensuring the scientific quality of research output [6]; (2) an independent class, course, or program (eg, not nested within a larger, all-encompassing course in research methods); (3) an a priori-developed curriculum; (4) offers a mechanism for registration/tracking of participants; and (5) uses objective measurement of one or more educational outcomes. Suitable comparators included the following: (1) before and after administration of a training class/course/program of interest, (2) between two or more training classes/courses/programs of interest, or (3) between a training class/course/program and any other intervention(s) (including no intervention). Outcomes consisted of any measure of effectiveness of training, as reported, including but not limited to measures of knowledge, intention to change behavior, and measures of excellence in training domains (writing, peer review, editing), however reported.

Because this review was largely exploratory, where other meaningful outcomes were reported, this information was collected as well. Finally, the study design was limited to comparative studies evaluating at least one training program/course/class of interest.

Following the execution of the search strategy, the identified records (titles and available abstracts) were collated in a Reference Manager [14] database for deduplication. The final unique record set and the full text of potentially eligible studies were exported to Internet-based software, DistillerSR (Evidence Partners, Ottawa, Canada), through which screening of records and extraction of data from included evaluations were carried out. Given the broad and general nature of many of the search terms (eg, author, editor, education), we expected a large volume of initial search results. Therefore, we conducted an initial screening of titles only, and subsequently of titles and abstracts, by two reviewers using a "liberal accelerated" method [15] (ie, one reviewer screens all identified studies, and a second reviewer independently screens only the studies excluded by the first). The full text of all remaining potentially eligible evaluations was then retrieved and reviewed for eligibility, independently, by two members of the team using a priori eligibility criteria (see Appendix D at www.jclinepi.com). Disagreements between reviewers at this stage were resolved by consensus or by a third member of the research team.
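To make the screening logic concrete, the following is a minimal sketch of the "liberal accelerated" method described above; the Record structure and the reviewer-decision functions are hypothetical illustrations for exposition, not the actual DistillerSR workflow used in the review.

    from dataclasses import dataclass

    @dataclass
    class Record:
        record_id: int
        title: str
        abstract: str = ""

    def liberal_accelerated_screen(records, reviewer1_includes, reviewer2_includes):
        # Reviewer 1 screens every record. Reviewer 2 independently
        # re-screens only the records that reviewer 1 excluded, so no
        # record is excluded on the judgment of a single reviewer.
        included = []
        excluded_by_r1 = []
        for rec in records:
            if reviewer1_includes(rec):
                included.append(rec)
            else:
                excluded_by_r1.append(rec)
        for rec in excluded_by_r1:
            if reviewer2_includes(rec):
                included.append(rec)
        return included

The design trades the full duplication of conventional dual screening for a safeguard on exclusions only, which is what makes the method "accelerated."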

2.3. Data extraction

A data extraction form was developed to capture information needed for data synthesis (see Appendix E at www.jclinepi.com). It was pilot tested using a subset of included evaluations and modified based on feedback from this exercise. The data were extracted by one reviewer, with a second reviewer conducting 100% verification for accuracy. Discrepancies between reviewers were resolved by consensus or by a third member of the research team. Extracted general publication characteristics included the following: first author name and contact information (of the corresponding author), year of publication, institutional affiliation of first author, country, language of publication, and funding source. We also collected descriptive study information regarding the name of the training class(es)/course(s)/program(s) being evaluated (if applicable), participants, sample size, type of intervention, comparator(s), and study design. Extracted outcome data included the following: tool(s) used to evaluate effectiveness of training, timing of measurement, and effectiveness measurement scores (however reported).

2.4. Assessment of risk of bias and validity of evaluations

No tool currently exists to assess the validity (internal and external) of evaluations in methodological reviews such as this. Study designs were expected to be largely heterogeneous; however, where evaluations using randomized controlled trial (RCT) or controlled clinical trial (CCT) designs were encountered, the Cochrane Risk of Bias tool was used to judge validity [16]. To assess all other evaluations, we used a previously applied tool [17] to help readers make their own judgments about the overall validity of the included evidence. The criteria assessed were as follows:

1. Whether an objective measure of training effectiveness was used (ie, a priori questionnaires).

2. Whether the measurement tool to evaluate training effectiveness was reported to be validated.

3. Whether intended methods align with reported findings.

4. Whether data from all included participants were reported.

5. Whether comparison groups represent similar populations.

2.5. Evidence synthesis

Because of the paucity of literature describing formal training opportunities in journalology, we were unable to anticipate the types of measurement tools that might be used for their evaluation. Study characteristics were summarized narratively in the text and presented in evidence tables. To ascertain whether meta-analysis of the data was possible, we assessed the methodological and clinical homogeneity of studies. We conducted narrative syntheses after concluding that meta-analysis was either not appropriate or not possible. Pooled estimates of effect were not calculated because of heterogeneity of training interventions. Accordingly, planned subgroup analyses and assessment for funnel plot asymmetry were not possible because of few included studies and heterogeneity.

3. Results

We located 24,208 unique records, of which 16,706 were excluded during title screening and a further 6,996 were excluded during abstract screening. The resulting 506 full-text articles were screened, and 442 were excluded. Of the remaining 64 articles, 18 reports of 17 studies were included. The 12 studies related to writing for publication comprised one RCT, two CCTs, and nine before-and-after (BA) studies with no control group. The six reports of five studies related to peer review comprised two RCTs, two controlled before-and-after (CBA) studies, and one BA study. No studies were found to fit our inclusion criteria for training programs involving journal editors. A flow diagram of the study selection process is shown in Appendix F at www.jclinepi.com. Another 13 studies potentially fit all our inclusion criteria, but 11 of them could not be included because of missing critical data (ie, baseline data, outcome data, or both; see Appendix G at www.jclinepi.com). For each of these studies, the corresponding author was contacted twice to obtain the necessary data. All but two authors either did not reply or indicated that they could not supply the requested data. The two studies for which requested data were supplied are included in the results (and are not included in Appendix G at www.jclinepi.com). A list of all studies excluded at the full-text level, with reasons for exclusion, is reported in Appendix H at www.jclinepi.com.
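As a simple arithmetic check on the flow of records just described (our reconstruction, not output reported in the review itself):

    # Reconciling the reported study-flow counts.
    unique_records = 24208
    after_title_screening = unique_records - 16706   # 7,502 records remain
    full_text_articles = after_title_screening - 6996  # 506 full-text articles
    remaining_articles = full_text_articles - 442      # 64 articles remain

    assert full_text_articles == 506
    assert remaining_articles == 64
    # Of these 64 articles, 18 reports of 17 studies met all inclusion criteria.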

3.1. Studies involving writing for publication

3.1.1. General characteristics of included trials

Of the 12 included studies (see Appendix I at www.jclinepi.com), the only RCT, a multicenter trial [18], took place in the United States and Brazil, whereas both CCTs [19,20] took place in the United States. Of the nine remaining BA studies, six [21-26] were conducted in the United States, one in Canada [27], and one in Australia [28]. One study's location [29] was unclear but appeared to be Australia. Participants included faculty members [21,22,24,26,27], academic and/or clinical nurses [19,29], and medical residents [20]. Two studies involved multiple groups: participants from medical and allied health professions [18], and nurses and other professionals involved in university education [28]. Sample sizes ranged from 4 to 621 participants. Interventions included the involvement of writing coaches [21], group meetings/support groups [21,24,25,27,28], mentoring [22,26], online workshops [18], face-to-face workshops [19,23,24,27], writing courses [28], writing retreats [29], independent study [24,27], faculty development programs [24], and graduate training programs [20]. The only study with a control group used standard writing guidance training as the comparator [18]. The most frequently reported outcome was the number of peer reviewed journal publications preintervention and postintervention [19-29]. The remaining study reported the preintervention-to-postintervention change in manuscript quality [18]. The baseline measurement period ranged from 1 year preintervention to participants' entire research careers preintervention. Duration of the interventions ranged from 1 day to more than 4 years. Follow-up publication rates were measured from immediately postintervention to up to 5 years after the intervention. The publication dates for the 12 articles ranged from 1995 to 2012.

3.1.2. Risk of bias and validity of evaluations assessment

3.1.2.1. Risk of bias assessment. The RCT [18] and CCT [20] involving writing for publication (see Appendix J at www.jclinepi.com) had a low risk of bias for blinding (detection bias) and the "other risks of bias" category, with the RCT also assessed as low risk of bias for sequence generation and the CCT assessed as low risk for blinding (performance bias) and incomplete outcome data. However, the risk of bias was high for the CCT on random sequence generation and allocation concealment, whereas the RCT was assessed as high risk of bias related to blinding (performance bias). Both studies were assessed as unclear risk of bias with regard to selective reporting, and the RCT also had an unclear risk of bias for allocation concealment and incomplete outcome data.

3.1.2.2. Validity of evaluations. All nine writing for publication studies had an objective measure of effectiveness (see Appendix J at www.jclinepi.com). It was unclear in a few studies [21,22,29] whether the intended methods aligned with the findings, mainly because no clear methods were described in these studies. In all other studies, the methods aligned with the outcomes reported. All studies adequately reported data from all participants, with the exception of one study [22], which reported data on only half of the participants, and another [28], which was not clear on how many participants were represented in the results. For the lone writing for publication CCT [19], the comparison groups did represent similar populations.

3.1.3. Effectiveness of interventions

For the writing for publication studies, outcomes included the number of preintervention vs. postintervention peer reviewed publications and manuscript quality (see Appendix K at www.jclinepi.com). The vast majority of studies involving writing for publication measured the number of peer reviewed publications of participants before and after the intervention. Five studies [19,22,23,26,28] examined the number of individual participants with peer reviewed publications preintervention and postintervention, with all showing an increase in the number of publications from baseline. There were also six studies [20-22,27-29] reporting a "tally" of the total number of publications obtained by the entire group preintervention and postintervention. All six studies reported an increase in the total number of publications among group members compared with baseline. Three studies [23-25] reported the mean publication rate for the group, finding a significant difference between the preintervention and postintervention mean publication rate in two instances [23,25] and a positive but nonsignificant trend toward improvement between the baseline and postintervention periods in the third study [24]. One study [18], an RCT, examined the quality of students' written manuscripts between a group participating in an online intervention and a control group receiving standard writing guidance training over a period of 11 months. The online group showed a significantly better overall manuscript quality score than the standard writing guidance group, as measured on the Six-Subgroup Quality Scale.

3.2. Studies involving peer reviewers

3.2.1. General characteristics of included trials

Of the five included studies (see Appendix I at www.jclinepi.com), the two RCTs [30,31] took place in the United States and the United Kingdom, whereas the remaining CBA [32,33] and BA [34] studies all took place in the United States. Participants included peer reviewers for major journals [30-33] and students in an undergraduate cell and molecular biology course for engineers [34]. Sample sizes ranged from 22 to 418 participants. Three of the studies' interventions [31-33] were delivered in a workshop format, whereas others included a self-taught training program [31], standard written peer reviewer training plus mentoring [30], and an undergraduate course assignment [34]. Comparators included standard peer reviewer training [30] and no training [31-33]. The main outcome reported in the peer review studies was a mean peer review quality score [30-33], based on a 5-point scale. One study [31] made use of the validated Review Quality Instrument [35], whereas the other three [30,32,33] used a similar scale that was created by senior editors at the journal where the research took place and is part of its regular rating system. Other relevant outcomes included identification of major and minor errors [31] and assessment of knowledge [34]. The baseline measurement period ranged from 20 months to 2 years preintervention. Duration of the interventions ranged from 4 hours to one university semester. One study [30] had a variable intervention duration, depending on how long it took peer reviewers to review articles. Follow-up measurements were taken from immediately postintervention to up to 20 months after the intervention. The publication dates of the articles ranged from 1998 to 2012.

3.2.2. Risk of bias and validity of evaluations assessment

3.2.2.1. Risk of bias assessment. The risk of bias assessment for the two RCTs [30,31] (see Appendix J at www.jclinepi.com) rendered mixed results. Both trial reports were assessed as low risk of bias for sequence generation and the "other risks of bias" category. One study [30] also had a low risk of bias for incomplete outcome data, whereas the other [31] had a low risk of bias for blinding (detection bias) and selective outcome reporting. However, the latter study was assessed as high risk of bias for allocation concealment and selective reporting, whereas the former was assessed as high risk of bias for selective outcome reporting. The risk of bias was unclear for blinding (performance bias) in both studies, as well as for allocation concealment in one study [31] and blinding (detection bias) in the other [30].

3.2.2.2. Validity of evaluations. Both studies [32,33] scored "yes" on four of the five validity evaluation measures, with both of them lacking a validated measurement tool (see Appendix J at www.jclinepi.com).

3.2.3. Effectiveness of interventions

For the peer reviewer training studies, outcomes included mean review quality score, identification of major and minor errors, and assessment of knowledge (see Appendix K at www.jclinepi.com). Four studies examined the mean review quality score using a comparative study design. Three of these [31-33] compared participants in a workshop with those receiving no training. No difference was found in quality scores between participants in the workshops and the control groups. The fourth study [30] compared participants receiving standard written information on peer review plus mentoring with those receiving standard written information only, again finding no difference between groups. For the identification of major and minor errors, one study [31] assessed the impact of attendance at a 1-day peer reviewer training workshop or the use of a self-taught training package compared with a control group receiving no training. Participants in both the workshop and the training package intervention groups found significantly more major errors after training than the control group, even after adjustment for performance differences at baseline. However, this benefit was no longer statistically significant in a subsequent follow-up manuscript review. A companion study [36] revealed that the most commonly detected error was a biased randomization procedure. Other errors frequently uncovered were inadequate reporting of ineligible or nonrandomized cases, poor response rate, and unjustified conclusions. For the assessment of knowledge outcome, one study [34] assessed the knowledge of 36 undergraduate students regarding the scientific publishing process. The number of students responding correctly improved from baseline to follow-up on all four objective questions; however, no statistical analysis of the change from baseline was reported.

4. Discussion

Despite the recent efforts of journals, publishers, associations of editors, and others to develop more and better training, few published studies examine the effects of structured training programs for authors and peer reviewers of scientific manuscripts and, surprisingly, no studies were found on training programs involving journal editors. Included studies were generally small and inconclusive regarding the effects of training on manuscript quality, acceptance/rejection rates, or authors' knowledge about scientific writing and publishing (ie, successfully submitting a manuscript). The same can be said for the peer review studies regarding review quality, identification of major and minor errors, and assessment of knowledge. The reports were of questionable validity and susceptible to misinterpretation because of their risk of bias. A search of PubMed and Google Scholar from January 2013 to July 2014 and of the conference proceedings from the Seventh International Congress on Peer Review and Biomedical Publication in September 2013 (ie, after the original search date) did not reveal any further studies that fit our criteria.

This systematic review supports the assertion in previous studies and editorial pieces that little is known about how to effectively train authors, peer reviewers, and editors. The importance of this finding is that it more reliably establishes the lack of an evidence base for effective training methods to ensure the scientific quality of research output. For example, previous research on authors of scientific manuscripts suggests that despite promotion and tenure being closely linked to their research and publication portfolios, most have no formal training in writing for publication and developed their skills mainly through a process of trial and error [37]. Similarly, most clinicians also receive little [38] to no [37] formal training in writing for publication. Additionally, most instances of misconduct [39-42] have been found to stem from negligence, poorly performed science, investigator bias, or lack of knowledge, rather than acts of fraud [43], suggesting a need for better training. Although there does appear to be a wealth of literature describing how to go about writing for publication, the provision of information alone may be insufficient to support potential authors [44]. For example, many editors complain that despite clear style guides on journal Web sites, prospective authors continue to produce unsuitably formatted manuscripts [45,46].

Our results are also supported by previous research on training for peer reviewers, which found reviewers' training to be limited [47] and generally ineffective [48], with most reviewers being poorly trained and poorly motivated. A recent covert investigation found that 157 of 304 open access journals accepted a spoof medical article laden with easily detectable flaws [49,50], highlighting some of the quality issues and challenges surrounding peer review. Despite the expressed desire for training among most reviewers, their needs for training and support are not being met [51]. In terms of knowledge, peer reviewers have difficulty identifying major errors in articles submitted for publication [31,52,53], and in some cases, agreement between reviewers of the same manuscript is not much different from what would be expected by chance [47]. Evidence also suggests that the quality of one's peer reviewing deteriorates over time [47,48] and that peer reviewers are susceptible to positive-outcome bias [54]. The limited training available means that most reviewers are guided mainly by journals' instructions to peer reviewers or are forced to learn by trial and error [55]. For most medical residents, the only exposure to appraisal of research manuscripts comes from their participation in journal clubs [43]. In recent years, a number of large publishers have created online resources for peer reviewers of their journals [56]; however, we still do not have a clear picture of what constitutes effective training for peer reviewers. Despite the assertion by Jefferson et al. [6] in 2008 that there is no evidence that peer reviewer training has any effect, and the subsequent urgent call for more research, the situation does not seem to have improved. Our lack of studies on training programs for journal editors aligns with previous research, which found editors to have informal [57] or little to no [58] training in editing skills and to be unfamiliar with available guidelines [59], despite many saying they would welcome more guidance or training [57-59]. As a group, they performed very poorly on tests of knowledge of editorial issues related to authorship, conflict of interest, peer review, and plagiarism [57]. This lack of publishing-related knowledge may contribute to the belief of many editors that ethical issues occur rarely or never at their journals [59]. Paul Hebert, former Editor-in-Chief of the Canadian Medical Association Journal, highlighted the need to train the editors of tomorrow, saying that because of the small publishing industry in Canada, "there are few medical editing positions, no obvious career paths and even fewer training opportunities" [60]. The limited training opportunities that do exist involve 1-year, full-time fellowships (eg, [61-63]) that are available to only a select few, or 1- to 2-month electives offered by journals such as JAMA [64] and BMJ [65] that are available only to medical students. To our knowledge, there are no certification programs or degrees that would allow a physician to train specifically to become a medical journal editor.

The data sets for each of the three participant groups, and the interpretations that can be drawn from them, are quite different from one another. First, although most of the studies on writing for publication showed an increase in publication rates, this finding may be both somewhat misleading and of little practical value. In most included studies, participants had no publications before the intervention(s); therefore, any output constituted an increase in their rate of publication. In most cases, this meant a change from zero publications to a single publication, usually partially or fully developed during the intervention. Another issue is that there are no established reference standards to determine what would constitute an appropriate or meaningful number or rate of publications. Finally, publication rates and counts tell us very little about the quality of the manuscripts, the quality of the intervention, the quality of the journals in which manuscripts were published, or the rigor of the peer review process.

The results for peer reviewers, in contrast to those for authors, generally came from more rigorous study designs but demonstrated little to no effect of the intervention(s). However, because of the small number of studies, we are still not able to draw any meaningful conclusions from the results. Nonetheless, it is interesting to note that the interventions used in these studies (ie, workshops, mentoring) appear to be among those most commonly used for training peer reviewers [66]. Additionally, the findings showing a lack of effectiveness of these interventions do not seem to have discouraged their use for training peer reviewers, as evidenced by the continued prominence of these types of training opportunities worldwide [67].

The lack of research on the effectiveness of training programs for journal editors is somewhat concerning. It is also particularly noteworthy considering the important role that journal editors play in ultimately deciding what does and does not get published in scientific journals, as well as the stated desire of editors for more and better training opportunities. Although a limited number of editorial training opportunities do exist and are offered by reputable entities [56], the lack of research on their effectiveness creates a void of information that could otherwise be used to improve existing programs and to create new and more accessible training opportunities for current and aspiring journal editors. There also appears to be very little information in general on journal editing, such as books, editorials, videos, and "how-to" articles, leaving a larger knowledge gap for editors than exists for authors or peer reviewers.

4.1. Limitations

It is important to acknowledge that a substantial amount of author, peer reviewer, and editor training may occur outside the scope of our research (ie, formalized, a priori-developed training programs), including shadowing, learning from colleagues, on-the-job learning/training, informal mentoring, and learning via books or other text-based materials. However, much of this body of knowledge is not well described in the scientific literature, making it very difficult to form an evidence base for effective practice with regard to these methods.

At various points in the research, there were challenges with the specific terminology related to journalology. First, the term "journalology" is not common in the literature and is therefore not very useful for capturing the variety of topics that fall under its umbrella. Second, many terms that relate to journalology have multiple definitions and uses, some of which relate to journalology and some of which do not. These issues may have influenced both the search strategy and screening, producing a high number of unrelated results because of the multiple definitions of certain terms, as well as the possibility of missing some studies that used less common terms to describe journalological training.

4.2. Looking ahead

Research articles published in journals are likely to continue to be of pivotal importance in disseminating knowledge to a wide variety and large number of readers. Central to this process is ensuring that writers have the necessary skills to develop transparent, complete, and timely reports of their research and that appropriately trained peer reviewers and editors thoroughly assess manuscripts before publication. To this end, the EQUATOR Network has developed toolkits for authors, editors, and peer reviewers [56], and WAME houses a repository of training resources for these three groups [67]. However, the quality and effectiveness of nearly all these resources have not been assessed, participation in many of the training opportunities is expensive and/or limited, and there is no standardized way to teach journalology skills. One way to address these issues could be to introduce the topic of journalology through a required graduate or postgraduate course within universities. Anecdotally, we are aware of only a few comprehensive formal academic courses on journalology in centers of higher learning [68,69]. Such a course could draw attention to and increase awareness of other problems related to journalology, such as publication bias [70], selective reporting [71], plagiarism [72], and the emergence of open access publication [73].

One possible reason for the inconsistency in training for authors, peer reviewers, and editors might be related to our lack of knowledge regarding the core competencies required for them to be effective. There does not appear to be a consensus about this issue, nor any concerted effort to engage in a consensus-building process. This is in contrast to, for example, the core competencies required by the Royal College of Physicians and Surgeons of Canada or the Accreditation Council for Graduate Medical Education in the United States for physicians in postgraduate training. If the field of journalology is to develop fully, a similar system will need to be put in place to ensure standardized, mandatory training for all the groups that are involved in the development and publication of scientific manuscripts.

Future research in journalology should focus on developing more and better-quality primary studies on training for authors, peer reviewers, and editors, as well as on exploring and studying new methods for training professionals in all areas of journalology. The development of core competencies, particularly for peer reviewers and editors, should be a priority because of the central role that these groups play in determining what is accepted as credible and rigorous scientific research by virtue of being published in an academic journal. The importance of training for all three groups should not be ignored or neglected, as the knowledge and practices of those involved in our health care system are heavily dependent on the quality of the research available to inform them. Some specific research questions for future consideration include the following: (1) What core competencies are essential for effective writing for publication, peer review, and journal manuscript editing? (2) When is the most appropriate and effective time in one's academic or professional career to engage in training in journalology?

Acknowledgment

The authors thank Becky Skidmore for her valuable work in developing and conducting the bibliographic literature search.

Supplementary data

Supplementary data related to this article can be found online at http://dx.doi.org/10.1016/j.jclinepi.2014.09.024.

References

[1] Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Obstet Gynecol 2009;114(6):1341.

[2] Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472.

[3] Kumar R. The science hoax: poor journalology reflects poor training in peer review. BMJ 2013;347:f7465.

[4] Hebert PC. Even an editor needs an editor: reflections after five years at CMAJ. CMAJ 2011;183(17):1951.

[5] Benos DJ, Bashari E, Chaves JM, Gaggar A, Kapoor N, LaFrance M, et al. The ups and downs of peer review. Adv Physiol Educ 2007;31(2):145-52.

[6] Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev 2008;MR000016.

[7] Cowley AJ, Skene A, Stainer K, Hampton JR. The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol 1993;40:161-6.

[8] Sterne JAC, Egger M, Moher D. Addressing reporting biases. In: Higgins JP, Green S, editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Chichester, UK: John Wiley & Sons; 2008:297-333.

[9] Dealing with biased reporting of the available evidence. The James Lind Library. Available at http://www.jameslindlibrary.org/essays/interpretation/relevant_evidence/dealing-with-biased-reporting-of-the-available-evidence.html. Accessed May 17, 2013.

[10] Dickersin K, Chalmers I. Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO. J R Soc Med 2011;104(12):532-8.

[11] Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009;151:264-9. W64.

[12] Galipeau J, Moher D, Skidmore B, Campbell C, Hendry P, Cameron DW, et al. Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol). Syst Rev 2013;2:41.

[13] Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol 2009;62:944-52.

[14] Thomson Reuters. Reference Manager. New York, NY: Thomson Reuters; 2008.

[15] Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev 2012;1(1):1-9.

[16] Higgins JPT, Altman DG. Assessing risk of bias in included studies. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.0.2. Chichester, UK: John Wiley & Sons; 2008:187-242.

[17] Shamseer L, Stevens A, Skidmore B, Turner L, Altman DG, Hirst A, et al. Does journal endorsement of reporting guidelines influence the completeness of reporting of health research? A systematic review protocol. Syst Rev 2012;1:24.

[18] Phadtare A, Bahmani A, Shah A, Pietrobon R. Scientific writing: a randomized controlled trial comparing standard and on-line instruction. BMC Med Educ 2009;9(1):27.

[19] Lawrence MM, Folcik MA. Writing for publication. J Nurs Staff Dev 1996;12(6):289-93.

[20] West CP, Halvorsen AJ, McDonald FS. Scholarship during residency training: a controlled comparison study. Am J Med 2011;124:983-7.

[21] Sanderson BK, Carter M, Schuessler JB. Writing for publication: faculty development initiative using social learning theory. Nurse Educ 2012;37(5):206-10.

[22] Oakley M, Vieira AR. The endangered clinical teacher-scholar: a promising update from one dental school. J Dent Educ 2012;76(4):454-60.

[23] Sommers PS, Muller JH, Bailiff PJ, Stephens GG. Writing for publication: a workshop to prepare faculty as medical writers. Fam Med 1996;28(9):650-4.

[24] Hekelman FP, Gilchrist V, Zyzanski SJ, Glover P, Olness K. An educational intervention to increase faculty publication productivity. Fam Med 1995;27(4):255-9.

[25] Sonnad SS, Goldsack J, McGowan KL. A writing group for female assistant professors. J Natl Med Assoc 2011;103(9-10):811-5.

[26] Files JA, Blair JE, Mayer AP, Ko MG. Facilitated peer mentorship: a pilot program for academic advancement of female medical faculty. J Womens Health 2008;17(6):1009-15.

[27] Steinert Y, McLeod PJ, Liben S, Snell L. Writing for publication in medical education: the benefits of a faculty development workshop and peer writing group. Med Teach 2008;30(8):e280-5.

[28] Rickard CM, McGrail MR, Jones R, O'Meara P, Robinson A, Burley M, et al. Supporting academic publication: evaluation of a writing course combined with writers' support group. Nurse Educ Today 2009;29(5):516-21.

[29] Jackson D. Mentored residential writing retreats: a leadership strategy to develop skills and generate outcomes in writing for publication. Nurse Educ Today 2009;29(1):9-15.

[30] Houry D, Green S, Callaham M. Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Med Educ 2012;12(1):83.

[31] Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ 2004;328:673.

[32] Callaham ML, Wears RL, Waeckerle JF. Effect of attendance at a training session on peer reviewer quality and performance. Ann Emerg Med 1998;32(3):318-22.

[33] Callaham ML, Schriger DL. Effect of structured workshop training on subsequent performance of journal peer reviewers. Ann Emerg Med 2002;40(3):323-8.

[34] Guilford WH. Teaching peer review and the process of scientific writing. Adv Physiol Educ 2001;25(3):167-75.

[35] van Rooyen S, Black N, Godlee F. Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts. J Clin Epidemiol 1999;52:625-9.

[36] Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med 2008;101(10):507-14.

[37] Murray R, Newton M. Facilitating writing for publication. Physiotherapy 2008;94:29-34.

[38] Pololi L, Knight S, Dunn K. Facilitating scholarly writing in academic medicine. J Gen Intern Med 2004;19:64-8.

[39] Rennie D. Editorial peer review: its development and rationale. In: Godlee F, Jefferson T, editors. Peer review in health sciences. London, UK: BMJ; 2003:1-13.

[40] Tavare A. Managing research misconduct: is anyone getting it right? BMJ 2011;343:d8212.

[41] Wager E. Coping with scientific misconduct. BMJ 2011;343:d6586.

[42] Brice J, Bligh J. Author misconduct: not just the editors' responsibility. Med Educ 2005;39(1):83-9.

[43] Marusic A. Author misconduct: editors as educators of research integrity. Med Educ 2005;39(1):7-8.

[44] Keen A. Writing for publication: pressures, barriers and support strategies. Nurse Educ Today 2007;27(5):382-8.

[45] Driscoll J, Driscoll A. Writing an article for publication: an open invitation. J Orthopaedic Nurs 2002;6:144-52.

[46] Albarran JW, Scholes J. How to get published: seven easy steps. Nurs Crit Care 2005;10(2):72-7.

[47] Callaham ML, Tercier J. The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med 2007;4(1):e40.

[48] Callaham ML. The natural history of peer reviewers: the decay of quality. In: Proceedings of the Sixth International Congress on Peer Review and Biomedical Publication. Vancouver, Canada: International Congress on Peer Review and Biomedical Publication; 2009.

[49] Bohannon J. Who's afraid of peer review? Science 2013;342:60-5.

[50] Hawkes N. Spoof research paper is accepted by 157 journals. BMJ 2013;347:f5975.

[51] Freda MC, Kearney MH, Baggs JG, Broome ME, Dougherty M. Peer reviewer training and editor support: results from an international survey of nursing peer reviewers. J Prof Nurs 2009;25(2):101-8.

[52] Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med 1998;32(3):310-7.

[53] van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA 1998;280:234-7.

[54] Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med 2010;170:1934-9.

[55] Lu Y. Learning to be confident and capable journal reviewers: an Australian perspective. Learned Publishing 2012;25(1):56-61.

[56] Available at http://www.equator-network.org/toolkits/editors/. Accessed May 17, 2013.

[57] Wong VS, Callaham ML. Medical journal editors lacked familiarity with scientific publication issues despite training and regular exposure. J Clin Epidemiol 2012;65:247-52.

[58] Garrow J, Butterfield M, Marshall J, Williamson A. The reported training and experience of editors in chief of specialist clinical medical journals. JAMA 1998;280:286-7.

[59] Wager E, Fiack S, Graf C, Robinson A, Rowlands I. Science journal editors' views on publication ethics: results of an international survey. J Med Ethics 2009;35:348-53.

[60] Hebert PC. Even an editor needs an editor: reflections after five years at CMAJ. CMAJ 2011;183:1951.

[61] Available at http://www.councilscienceeditors.org/wp-content/uploads/v28n4p141-142.pdf. Accessed May 17, 2013.

[62] Available at http://www.aafp.org/afp/2006/0715/p211.html. Accessed May 17, 2013.

[63] Available at http://www.councilscienceeditors.org/wp-content/uploads/v27n6p202.pdf. Accessed May 17, 2013.

[64] Available at http://jama.jamanetwork.com/journal.aspx. Accessed May 17, 2013.

[65] Available at http://student.bmj.com/student/static-pages.html?pageId=8. Accessed May 17, 2013.

[66] Souder L. The ethics of scholarly peer review: a review of the literature. Learned Publishing 2010;24(1):55-74.

[67] Repository of Ongoing Training Opportunities in Journalology. Available at http://www.wame.org/about/repository-of-ongoing-training-opportunities. Accessed May 17, 2013.

[68] Marusic A, Sambunjak D, Jeroncic A, Malicki M, Marusic M. No health research without education for research: experience from an integrated course in undergraduate medical curriculum. Med Teach 2013;35(7):609.

[69] Available at http://www.mefst.hr/default.aspx?id=928. Accessed May 17, 2013.

[70] Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010;14:iii, 1-193.

[71] McGauran N, Wieseler B, Kreis J, Schuler YB, Kölsch H, Kaiser T. Reporting bias in medical research: a narrative review. Trials 2010;11:37. Available at http://www.trialsjournal.com/content/pdf/1745-6215-11-37.pdf. Accessed May 17, 2013.

[72] Das N, Panjabi M. Plagiarism: why is it such a big issue for medical writers? Perspect Clin Res 2011;2(2):67.

[73] Gargouri Y, Hajjem C, Lariviere V, Gingras Y, Carr L, Brody T, et al. Self-selected or mandated, open access increases citation impact for higher quality research. PLoS One 2010;5(10):e13636.
