Like all scientific journals, JHND aims to be an internationally important journal that publishes only papers of the highest quality in our field. As an editorial team, my colleagues and I strive to ensure that this happens, and we base our publication decisions on our own appraisal of the quality of a submission, on the recommendations and advice of our reviewers, or on a combination of the two. Our standards are high: at present we reject around 85% of the manuscripts we receive, and a fair proportion of those are rejected without ever going out for peer review.
So, how do we define quality in scientific publication, and how might editors use such judgement in deciding whether to accept or even review an article? Measures of journal quality are, of course, legion, and include the journal impact factor and other bibliometric indicators. Such measures are, however, not particularly useful when we try to assess the quality of a specific article. It is often assumed that papers published in a journal such as Nature or Science are of the highest quality. That is because those journals really do publish work at the top of its field, and the competition to publish there is tough. However, their astronomical impact factors reflect the journal as a whole and not the individual papers. There are bad papers in Nature, and I have often run journal clubs with undergraduate students in which the holes and flaws were easily exposed. So, again, how do we assess the quality of a specific article?
In the UK, researchers undergo a research assessment exercise every 5-6 years, which evaluates the quality of the research environment, the quality of research outputs and the impact of research in all university departments. The latest iteration of this exercise, the Research Excellence Framework (REF), was held in 2014. REF2014 set out criteria to define the quality of research papers and sought to rank the papers entered by universities from 4* to 1*. The criteria applied are available here, but they essentially considered three elements:
Originality: how novel was the paper? Did it lead to new knowledge, or did it simply confirm what was already known?
Significance: how important is the work for end-users? Would it change practice in industry or policy, or is it purely of academic interest?
Rigour: was the work performed to a high standard, using state-of-the-art methods?
The criteria were applied by reviewers to determine REF star ratings as follows:
Four star: Quality that is world-leading in terms of originality, significance and rigour.
Three star: Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence.
Two star: Quality that is recognised internationally in terms of originality, significance and rigour.
One star: Quality that is recognised nationally in terms of originality, significance and rigour.
Unclassified: Quality that falls below the standard of nationally recognised work, or work that does not meet the published definition of research for the purposes of this assessment.
These criteria are used in a UK national research assessment exercise and are applied independently of the bibliometrics of the journal in which articles are published. Although useful to editors and authors alike when considering quality, they do not fully capture what editors or readers may be looking for. In my capacity as editor, I really want to publish work that is at least 2* by these criteria, and I would not send unclassified material out to reviewers. However, as an editor I must also weigh up how widely cited a paper might be (as that determines my impact factor and how my journal will be judged). Is the article good but too niche? Do I want to publish excellent work that nobody will ever read? I also have to publish papers without exceeding the page allowance set by my publishers. If I can only publish 90 articles a year but receive 150 potential submissions, then my decisions must be informed by the quality of the work, the likelihood of citation, and the appeal and priority to my core readership (in this case the members of the British Dietetic Association).
For a reader, the indicators of quality will depend on why they are reading the paper. An apparently low-quality paper that lacks 'significance' and 'originality' may actually prove incredibly useful to many readers because it presents technical information or a methodological approach. Similarly, a paper that has value because it educates undergraduates would be rated as low quality from a REF perspective.
The criteria used in the REF are certainly influencing how authors in the UK write their papers. Because we know that REF reviewers may only read parts of a paper when making their assessment, we make sure that statements indicating significance and originality sing out from the abstract and introduction. That is also a good strategy for authors to get their papers past the initial editorial triage and out for review. If the title and abstract look dull, or fail to show significance, originality and rigour, the editor will not be impressed.
I do think that, in nutrition and dietetics, attaining the four-star quality rating may be more challenging than in other disciplines. Rigour is often an issue, since precise measurement of food intake, for example, and other staple data items rely on methods with major limitations. Novelty can also be a challenge, as our discipline largely depends on building a mass of evidence through repetition of previous studies (e.g. the dozens of epidemiological studies typically needed to establish a relationship between diet and cancer). In dietetics we also face issues with significance: research in the discipline often has only local validity and importance, since dietetic practice varies considerably between health systems.