References

Barker J, Linsley P, Kane R, 3rd edn. London: Sage; 2016

British Educational Research Association. Ethical guidelines for educational research. 2018; https://tinyurl.com/c84jm5rt

Bowling A Research methods in health, 4th edn. Maidenhead: Open University Press/McGraw-Hill Education; 2014

Gliner JA, Morgan GA. Mahwah (NJ): Lawrence Erlbaum Associates; 2000

Critical Appraisal Skills Programme. CASP checklists. 2021; https://casp-uk.net/casp-tools-checklists

Cresswell J, 4th edn. London: Sage; 2013

Grainger A Principles of temperature monitoring. Nurs Stand. 2013; 27:(50)48-55 https://doi.org/10.7748/ns2013.08.27.50.48.e7242

Jupp V. London: Sage; 2006

Health and Care Professions Council. Continuing professional development (CPD). 2021; http://www.hcpc-uk.org/cpd

Kennedy M, Burnett E Hand hygiene knowledge and attitudes: comparisons between student nurses. Journal of Infection Prevention. 2011; 12:(6)246-250 https://doi.org/10.1177/1757177411411124

Lindsay-Smith G, O'Sullivan G, Eime R, Harvey J, van Ufflen JGZ A mixed methods case study exploring the impact of membership of a multi-activity, multi-centre community group on the social wellbeing of older adults. BMC Geriatrics. 2018; 18 https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-018-0913-1.pdf

Morse JM, Pooler C, Vann-Ward T Awaiting diagnosis of breast cancer: strategies of enduring for preserving self. Oncology Nursing Forum. 2014; 41:(4)350-359 https://doi.org/10.1188/14.ONF.350-359

Nursing and Midwifery Council. Revalidation. 2019; http://revalidation.nmc.org.uk

Parahoo K Nursing research, principles, processes and issues, 3rd edn. Basingstoke: Palgrave Macmillan; 2014

Polit DF, Beck CT Nursing research, 10th edn. Philadelphia (PA): Wolters Kluwer; 2017

Critiquing a published healthcare research paper

25 March 2021
Volume 30 · Issue 6

Research is defined as a ‘systematic inquiry using orderly disciplined methods to answer questions or to solve problems' (Polit and Beck, 2017:743). Research requires academic discipline coupled with specific research competencies so that an appropriate study is designed and conducted, leading to the drawing of relevant conclusions relating to the explicit aim/s of the study.

Relevance of research to nursing and health care

For those embarking on a higher degree such as a master's, a taught doctorate or a doctor of philosophy, the relationship between research, knowledge production and knowledge utilisation becomes clear through their research tuition and the guidance of their research supervisor. But why should other busy practitioners, juggling a work/home life balance, find time to be interested in healthcare research? The answer lies in the relationship between the outcomes of research and the determination of evidence-based practice (EBP).

The Health and Care Professions Council (HCPC) and the Nursing and Midwifery Council (NMC) require registered practitioners to keep their knowledge and skills up to date. This requirement incorporates being aware of the current EBP relevant to the registrant's field of practice, and to consider its application in relation to the decisions made in the delivery of patient care.

Advanced clinical practitioners (ACPs) are required to be involved in aspects of research activities (Health Education England, 2017). It is for this reason that practitioners need to know how EBP is influenced by research findings and, moreover, need to be able to read and interpret a research study that relates to a particular evidence base. Reading professional peer-reviewed journals that have an impact factor (a scientometric index calculated as the yearly average number of citations received by papers published in that journal over the preceding 2-year period) is evidence of continuing professional development (CPD).

CPD fulfils part of the HCPC's and the NMC's required professional revalidation process (HCPC, 2021; NMC, 2019). For CPD in relation to revalidation, practitioners can give the publication details of a research paper, along with a critique of that paper, highlighting the relevance of the paper's findings to the registrant's field of practice.

Defining evidence-based practice

According to Barker et al (2016:4.1), EBP is the integration of research evidence and knowledge into current clinical practice, to be used at a local level to ensure that patients receive the best quality care available. Because patients are on the receiving end of EBP, it is important that the research evidence is credible. This is why a research study has to be designed and undertaken rigorously in accordance with academic and scientific discipline.

The elements of EBP

EBP comprises three elements (Figure 1). The key element is research evidence, followed by the expert knowledge and professional opinion of the practitioner, which is especially important when there is no research evidence—for example, on the most appropriate way to assist a patient out of bed, or to perform a bed bath. Last, but by no means least, is the patient's preference for a particular procedure. An example of this is the continued use of thermal screening dots for measuring a child's temperature on the forehead or in the armpit, because children find these options more acceptable than other temperature-measuring devices, which, it is argued, might give a more accurate reading (Grainger, 2013).

Figure 1. The elements of evidence-based practice

Understanding key research principles

To interpret a published research study requires an understanding of key research principles. Research authors use specific research terms in their publications to describe and to explain what they have done and why. So without an awareness of the research principles underpinning the study, how can readers know if what they are reading is credible?

Validity and reliability have long been the two pillars on which the quality of a research study has been judged (Gliner and Morgan, 2000). Validity refers to how accurately a method measures what it is intended to measure. If a research study has a high validity, it means that it produces results that correspond to real properties, characteristics, and variations in the part of the physical or social world that is being studied (Jupp, 2006).

Reliability is the extent to which a measuring instrument, for example, a survey using closed questions, gives the same consistent results when that survey is repeated. The measurement is considered reliable if the same result can be consistently achieved by using the same methods under the same circumstances (Parahoo, 2014).

The research topic is known as the phenomenon in a singular sense, or phenomena if what is to be researched is plural. It is a key principle of research that it is the nature of the phenomenon, in association with the study's explicit research aim/s, that determines the research design. The research design refers to the overall structure or plan of the research (Bowling, 2014:166).

Methodology means the philosophy underpinning how the research will be conducted. It is essential for the study's research design that an appropriate methodology for the conduct and execution of the study is selected, otherwise the research will not meet the requirements of being valid and reliable. The research methods will include the design for data sampling, how recruitment into the study will be undertaken, the method/s used for the actual data collection, and the subsequent data analysis from which conclusions will be drawn (see Figure 2).

Figure 2. The research umbrella that illustrates the required consistent relationship between a study's methodology and its research methods

Quantitative, qualitative, and mixed-methods studies

A quantitative methodology is where the phenomenon lends itself to an investigation of data that can be numerically analysed using an appropriate statistical test/s. Quantitative research rests on the philosophical view that science has to be neutral and value-free, which is why precise measurement instruments are required (Box 1). Quantitative research is influenced by the physical sciences such as mathematics, physics, and chemistry. The purpose of quantitative studies is to identify whether there are any causal relationships between variables present in the phenomenon. In short, a variable is an attribute that can vary and take on different values, such as the body temperature or the heart rate (Polit and Beck, 2017:748).

Quantitative studies can sometimes have a hypothesis. A hypothesis is a prediction of the study's outcome, and the aim of the study is to show whether or not the hypothesis is supported by the data. Often a hypothesis is about a predicted relationship between variables. There are two types of variable: independent and dependent. The independent variable is the presumed cause of a change in the specific phenomenon being studied, while the dependent variable is the outcome in which that change is measured. The first example in Box 1 might help to clarify the difference.

An example of a hypothesis would be that older people who have a history of falls experience a reduction in the incidence of falls as a result of exercise therapy. The causal relationship is between the independent variable (the exercise therapy) and the dependent variable (a reduction in falls).
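A hypothesis like this would typically be tested with an appropriate statistical test, such as an independent-samples t-test comparing falls counts between an exercise-therapy group and a control group. The sketch below, using invented illustrative data, computes Welch's t statistic by hand with Python's standard library:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical falls recorded over one year (invented data for illustration)
exercise_group = [1, 0, 2, 1, 1, 0, 1, 2]   # received exercise therapy
control_group = [3, 2, 4, 3, 2, 3, 4, 2]    # no exercise therapy

m1, m2 = mean(exercise_group), mean(control_group)
v1, v2 = variance(exercise_group), variance(control_group)
n1, n2 = len(exercise_group), len(control_group)

# Welch's t statistic: the difference in group means scaled by its
# standard error; a large magnitude suggests a real group difference
t_stat = (m1 - m2) / sqrt(v1 / n1 + v2 / n2)
print(f"Mean falls: exercise {m1:.2f}, control {m2:.2f}, t = {t_stat:.2f}")
```

In practice the t statistic would be compared against the t distribution to obtain a p-value, and a statistics package would report this directly; the sketch only shows where the independent variable (group membership) and dependent variable (falls count) sit in the calculation.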

A qualitative methodology aims to explore a phenomenon with the aim of understanding the experience of the phenomenon from the perspective of those affected by it. Qualitative research is influenced by the social and not the physical sciences. Concepts and themes arise from the researcher/s interpretation of the data gained from observations and interviews. The collected data are non-numerical and this is the distinction from a quantitative study. The data collected are coded in accordance with the type of method being used in the research study, for example, discourse analysis; phenomenology; grounded theory. The researcher identifies themes from the data descriptions, and from the data analysis a theoretical understanding is seen to emerge.

A qualitative methodology rests on the philosophical view that science cannot be neutral and value-free because the researcher and the participants are part of the world that the research study aims to explore.

Unlike quantitative studies, the results of which can often be generalised due to the preciseness of the measuring instruments, qualitative studies are not usually generalisable. However, knowledge comparisons can be made between studies that have a similar focus: for example, studies uncovering the causative or aggravating factors in the experience of pain management for oncology patients and for patients who have rheumatoid arthritis, or another long-term condition of which pain is a characteristic feature. The validity of a qualitative study relates to the accurate representation of the data collected and analysed, and to showing that the data have been saturated, meaning that no new data or analysed findings are forthcoming. This is demonstrated in a clear data audit trail, and the study's findings are therefore seen as credible (see the second example in Box 1).
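Thematic coding is usually carried out with qualitative analysis software rather than by hand, but the basic idea—assigning each coded data segment to a theme and seeing which themes recur across the dataset—can be sketched in a few lines of Python. The interview fragments and theme labels below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical coded interview segments: (participant quote, assigned theme)
coded_segments = [
    ("I just kept busy so I wouldn't think about it", "distraction"),
    ("My daughter came to every appointment with me", "social support"),
    ("I told myself to take it one day at a time", "self-talk"),
    ("Friends from church rang me every evening", "social support"),
    ("I cleaned the whole house, top to bottom", "distraction"),
]

# Tally how often each theme recurs across participants; recurring themes
# become candidates for the study's reported findings
theme_counts = Counter(theme for _, theme in coded_segments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

In a real study the themes would emerge from repeated reading and re-coding of transcripts rather than from a fixed list, and saturation would be judged by whether new transcripts stop adding new themes.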

Box 1.Research study examples

  • 1. An example of a quantitative research study Kennedy and Burnett (2011) conducted a survey to determine whether there were any discernible differences in knowledge and attitudes between second- and third-year pre-registration nursing students toward hand-hygiene practices. The collected data and their subsequent analysis are presented in numerical tables and graphs, supported by text explaining the research findings and how these were ascertained. For full details, see https://doi.org/10.1177/1757177411411124
  • 2. An example of a qualitative research study Morse et al (2014) undertook an exploratory study to see what coping strategies were used by women awaiting a possible diagnosis of breast cancer. Direct quotes from the study participants appeared in the writing up of the research because it is a requirement of qualitative research that there be a transparent data audit trail. The research showed two things, both essential requirements of qualitative research. First, how the collected data were saturated to ensure that no data had been left inadequately explored, or that the data coding had been prematurely closed and, second, having captured the breadth and depth of the data findings, the researchers showed how the direct quotes were thematically coded to reveal the women's coping strategies. For full details, see https://doi.org/10.1188/14.ONF.350-359
  • 3. An example of a mixed-methods study Lindsay-Smith et al (2018) investigated and explored the impact on elderly people's social wellbeing when they were members of a community that provided multi-activities. The study combined a quantitative survey that recorded participants' sociodemographic characteristics and measured participation in activities with a focus group study to gauge participants' perceptions of the benefits of taking part in the activities. For full details, see https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-018-0913-1.pdf

Sometimes a study cannot meet its stated research aims by using solely a quantitative or a qualitative methodology, so a mixed-methods approach combining both quantitative and qualitative methods for the collection and analysis of data is used. Cresswell (2013) explains that, depending on the aim and purpose of the study, it is possible to collect and analyse the quantitative data first, followed by the qualitative data and their analysis (an explanatory/exploratory sequence), or to collect and analyse the qualitative data first, followed by the quantitative (an exploratory/explanatory sequence). Whichever approach is used, the cumulative data analyses have to be synthesised to give a clear picture of the overall findings (Box 1).

The issue of bias

Bias is a negative feature of research because it relates to either an error in the conceptualisation of the study due to the researcher/s adopting a skewed or idiosyncratic perspective, or to errors in the data analysis. Bias will affect the validity and reliability of a study, so it is important that any bias is eliminated in quantitative studies, or minimised and accounted for in qualitative studies.

Scientific and ethical approval

It should be noted that, before any research study proceeds, the research proposal for that study must have been reviewed and agreed by a scientific and ethics committee. The purpose of a scientific and ethics committee is to ensure that those recruited into a study are not harmed, and that the study will contribute to the advancement of knowledge. The committee pays particular attention to whether any bias might have been introduced to a study. The researchers will have detailed the reason why the study is required, the explicit aim/s and purpose of the study, the methodology of the study, and its subsequent design, including the chosen research methods for the collection of the data (sampling and study recruitment), and what method/s will be used for data analysis.

A literature review is undertaken and the established (published) international literature on the research topic is summarised to highlight what is already known on the topic and/or to show any topic gaps that have not yet been researched. The British Educational Research Association (BERA) (2018) also gives guidance for research proposals that are deemed to be educational evaluation studies, including ‘close-to-practice’ research studies. Any ethical issues such as how people will be recruited into the study, the gaining of informed voluntary consent, any conflict of interest between the researcher/s and the proposed research topic, and whether the research is being funded or financially supported by a particular source will also have been considered.

Critiquing a published research paper

It is important to remember that a published paper is not the research report itself; it is a condensed account of it, in which the research author/s present their findings as a succinct summary. Only a passing mention might be made that ethical approval and voluntary informed consent were obtained. However, readers can be assured that all publications in leading journals with a good reputation are subject to an external peer review process. Any concerns about a paper's content will have been ironed out prior to publication.

It will be apparent that there are several distinct research designs. The Critical Appraisal Skills Programme (CASP) provides online information to help with the interpretation of each type of study, and does this by providing questions that help the reader consider and critique the paper (CASP, 2021).

General points for critiquing a paper include the following:

  • The paper should be readable and have explicit statements on the purpose of the research, its chosen methodology and design
  • Read the paper thoroughly to get a feel for what the paper is saying
  • Consider what the researcher/s says about any ethical issues and how these have been handled
  • Look at how the data were collected and analysed. Are the explanations for these aspects clear? In a quantitative study, are any graphs or charts easy to understand and is there supporting text to aid the interpretation of the data? In a qualitative study, are direct quotes from the research participants included, and do the researcher/s show how data collected from interviews and observations were coded into data categories and themes?
  • In a mixed-method study, how are the quantitative and qualitative analyses synthesised?
  • Do the conclusions seem to fit the handling of the data's analysis?
  • An important test of validity is whether the study's title relates well to the content of the paper and, conversely, whether the content matches the study's title.

Finally, remember that the research study could have been conducted using a different methodological design provided the research aims would still have been met, but a critique of the paper relates to what has been published and not what otherwise might have been done.