The next blog in our Top Tips for Health Teaching series considers critical appraisal training. It is based on one of the presentations at our Knowledge Sharing Workshop event in September 2025, entitled “How I shed my armbands and began to enjoy swimming in the critical appraisal activities pool!”, and outlines the experiences of Sarah Gardner, Clinical Evidence Specialist at Doncaster & Bassetlaw Teaching Hospitals NHS Foundation Trust.
Fear and armbands
When I started delivering critical appraisal training fifteen years ago, I used articles I’d picked up on courses some years earlier. I duly trotted out my ancient pet papers on all occasions, always to mixed groups advertised as open sessions for all staff. The feedback was good, and I got a lot slicker at the delivery, but I always got very apprehensive before sessions, and needed to fake the appearance of confidence and knowledge while quailing inside. On reflection, I clutched my pet articles and crib sheets like they were armbands and I was swimming in dangerous waters, only one tricky question away from a puncture leading to drowning or being eaten by sharks.
Physios pushed me in, and I found I could float
So the day came that I met a physiotherapy team that were keen to have a critical appraisal skills session on RCTs as a group. As you may know, physios are a fabulous staff group who are badly served by generally poor-quality evidence, and they deserved better than my irrelevant old RCT on tap water vs sterile saline for wounds. So I designed a session focussed on their needs, rather than one geared towards protecting my insecurities! I found a Cochrane review on an interesting therapy topic and examined the Risk of Bias chart looking for a trial that had a mix of green and red in interesting places. I landed on one, and using various sections in the Cochrane review made a lot of notes on the strengths and weaknesses of the study to help me construct an appraisal crib sheet.
It felt like a real achievement to plan and deliver a session based on a paper I had appraised. What I hadn’t foreseen was how different the session would feel with a targeted paper. Normally the only thing you can guarantee with an “open” session is that absolutely nobody is interested in your pet paper. That is what I was resigned to. Yet in this clinical team focussed session the discussions were of a quality I had never seen before, because the participants cared about the subject and brought their own experience to bear on interpreting the results, assessing the methodology and working out if it could be applicable in their setting. Not just clinically but also discussing how they might do similar research. It was awesome, I felt like I was at the interface of clinical librarianship, evidence-based practice and research, and it’s a great space in which to operate!
Following that I turned my back on open scheduled sessions and focussed on delivering critical appraisal sessions to teams as part of their continuing professional development protected time. It was the obvious and logical extension of clinical librarian work – find out what evidence the team needs, deliver it and then help them gain the skills to appraise it.
It can’t just be about the RCTs
RCTs are not uniformly useful across departments, so I had to mix it up with the study types. Emergency Department staff and Clinical Scientists also liked Diagnostic Test Accuracy Studies; Paediatric Occupational Therapists found cohort studies valuable; and everyone wants a systematic review. So you do have to be willing to further extend your comfort zone to be maximally useful to clinical teams.
Working with departments which do most of their CPD as lunchtime teaching, like paediatric doctors, requires a different approach – there is no time for a leisurely theory presentation, and if you try a theory session one week with a practical the next, an entirely different set of doctors comes to each! I pick papers which allow a certain amount of theory to surface during the critical appraisal, and prepare a few slides I can whip out if needed to illuminate specific points or explain some statistical tests.
Other ways to mix it up
- Bring calculation sheets for everyone to fill in with basic results data from the paper you’ve chosen. Start by creating a 2×2 table, then work through a series of logical questions to build up the stats to the point where it is easy to calculate Absolute Risk Reduction and Number Needed to Treat.
- Use interactive tools to inject a bit of levity – for example I used AHAslides for a cohort study biases + definitions matching game with points and a medals table, handy for virtual sessions, but equally funny to watch people get competitive in person!
- Use something other than CASP. It is great but not the only game in town. Beginners might find the Oxford CEBM checklists more accessible, and the notes for SIGN checklists are fabulous learning tools. QUADAS-2 has revolutionised my understanding of diagnostic test accuracy studies. JBI checklists cover study types that no-one else does – prevalence studies for example. Or do a “cut and shut” e.g. SIGN SR for the methodology, CASP SR for external validity.
- Challenge participants to improve the search terms in a systematic review search strategy. I challenged ED docs to come up with more search terms than me for Ringer’s lactate and balanced crystalloids, to demonstrate that the quality of the search strategy in a systematic review is fundamental to whether the authors find all the papers or not. I won hands down, and so will you.
- Find two papers on the same topic to show “what good looks like” and the reverse, what poor looks like (yet still gets published). I found two papers on treatment for asthma exacerbations (one adult, one paediatric) in a Cochrane Review, which were like night and day on the Risk of Bias chart, and used them both in a short session, which sounds ambitious but it worked. (see below!)
- Do extra prep to avoid the need for pre-reading if you are doing a short session. I know from painful experience that virtually no-one pre-reads one article never mind two, and it is pointless to try and make them, so I highlighted in advance all the sections in both the asthma papers that related to the CASP questions to speed things up, and randomly handed both papers out so there was an even mix in the group. Participants had 10 minutes to skim read their paper noting the abstract and the highlighted sections, then we went through a few selected methodology questions from the CASP RCT checklist. This allowed enough time to examine the different approaches in the two papers, and talk about how difficult it was to carry out research well in some settings, like ED, for example. It really went with a swing.
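For readers who like to see the 2×2-table arithmetic from the calculation sheets written out, here is a minimal sketch in Python. The function name and the event counts are purely hypothetical examples, not figures from any real trial.

```python
def arr_and_nnt(events_treat, total_treat, events_control, total_control):
    """Absolute Risk Reduction and Number Needed to Treat from 2x2 counts."""
    cer = events_control / total_control  # Control Event Rate
    eer = events_treat / total_treat      # Experimental Event Rate
    arr = cer - eer                       # Absolute Risk Reduction
    nnt = 1 / arr                         # Number Needed to Treat
    return arr, nnt

# Hypothetical example: 10/100 events on treatment vs 20/100 on control
arr, nnt = arr_and_nnt(10, 100, 20, 100)
print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")  # ARR = 0.10, NNT = 10
```

In a session, the point is not the code but the logic it encodes: two event rates, a subtraction, and a reciprocal – which is exactly the chain of “logical questions” the calculation sheet walks participants through.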
Hard won lessons
I’ve come a long way from fear of sharks to offering critical appraisal training and journal club support to any department without too much worry these days about tricky study types or avoiding difficult questions. I eventually realised that you do just have to let go of the rail, take deep breaths to keep you calm and buoyant, and bob about, if not actually swim! It’s so life enhancing to conquer your fears, whatever they may be. Absolutely do go on a course, and maybe go on another course for a different perspective, but I think endless courses just postpone the inevitable. Do start off with familiar material you are comfortable with, but don’t be trapped by it and risk missing opportunities to directly engage with clinicians where they are at – it’s a fascinating and rewarding place, and I quickly realised I don’t want to be anywhere else!