Why did the RCGP choose to include the CSA in its examination?

In deciding on an appropriate model for a clinical examination in general practice, mindful of the likely number of candidates and of considerations of feasibility, validity and reliability, the College reviewed most of the commonly used methods in medical assessment, such as OSCEs, PACES, long cases, oral examinations, mini-CEX, DOPS[1], videos of consultations, simulated surgeries, and high-fidelity simulations.

The College concluded that it should develop an assessment in a controlled environment that enabled standardisation of tasks while remaining an authentic replication of patient consultations and capable of objective measurement. Some studies of less structured tests of clinical competence, such as videos of consultations, incognito simulated patients and the mini-CEX, had shown that testing in the real-world workplace was achievable. Although these have been shown to be reliable tests of individuals, they are not standardised in the way that the CSA is, and so do not provide the same comparative evidence for standard setting.

Overall, an OSCE-type assessment incorporating some of the high-level, realistic and holistic elements of a simulated surgery was considered the most appropriate, robust and equitable design for the clinical skills module of the MRCGP. In addition to focusing on aspects of the learning outcomes emerging from the curriculum statements of the MRCGP blueprint, it provided the best means of fulfilling the criteria relating to feasibility, reliability and validity expected by the then regulator, PMETB.

The CSA is an assessment of a doctor’s ability to integrate and apply clinical, professional, communication and practical skills appropriate for general practice. It does not set out to be a replica of a ‘normal’ GP surgery; rather, it specifically tests a set of consulting competences (data gathering, clinical management and interpersonal skills) in a range of clinical contexts.

 

To support the validity of the examination, an assessment blueprint is followed for each palette of cases, with a pre-set combination of acute and chronic conditions, patient ages at presentation, and the inclusion of diversity within cases. This blueprint is published on the CSA page of the RCGP website. Reliability statistics for CSA examinations have shown that the examination fully meets accepted standards for reliability. The assessment allows for different consultation styles; their effectiveness in the CSA is assessed by experienced, trained examiners who adapt their marking to the clinical consulting outcomes of the cases they are assessing.

 

  1. External evaluation of the CSA

 

The RCGP is committed to developing and delivering high-quality assessments that are rigorously evaluated.  To this end, in addition to regular scrutiny by the CSA core group, Assessment Committee and Postgraduate Training Board, the College has sought the following external evaluations of the CSA and its wider assessment system:

 

  i) Approval by the Regulator. All Specialty College curricula and assessment systems are evaluated and approved by the Regulator (PMETB from September 2005 to March 2010; the GMC since April 2010). Detailed descriptions of the assessment are submitted, structured around the Regulator’s published standards. The PMETB scrutinised the paperwork and then invited representatives from the College to a panel meeting, where they were asked detailed questions about the proposed system. The RCGP’s assessment system, including the CSA, was one of the first to be fully approved.

 

  ii) Review by International Assessment Experts. In 2007 the RCGP commissioned an external review of the new licensing exam from three international experts, Geoff Norman, Cees van der Vleuten and Chris McManus, to advise the College on progress to date and on the recommended direction of future developments of the assessment. Key points in relation to the CSA were:

 

  • In broad terms, the reviewers saw the RCGP as being at the forefront of licensing tests;
  • They described the nMRCGP as ‘state-of-the-art’ in assessment terms and ‘in the front line of approaches to specialty certification worldwide’.

 

The reviewers’ comments included:

 

“Serious consideration should be given to use of a compensatory approach to standard-setting in the CSA. Statistical modelling should be used to ensure comparable standards to current norms.

The selection and training of assessors is rigorous and notably well done. The calibration exercise is outstanding.”

 

 “For licensing purposes the use of the AKT and CSA are conventional and valid assessment methodologies.”

 

 “I would describe the CSA standard as ‘implicitly criterion-referenced’. In some ways it is similar to a separate group of standard setters saying in advance what is required of a particular case and another group of assessors marking to that standard. It uses professional judgement in the context of the detailed particulars of an individual case, and it is arrived at through consensus. I would regard that as satisfactory.”

 

A summary of the reports is attached as appendix 8.

 

iii) Review by Peninsula Medical School. In May 2009 the RCGP commissioned the Peninsula School of Medicine and Dentistry to review the CSA and to carry out modelling of the examination data using different standard-setting mechanisms. The reviewers were asked to carry out a generalisability study to identify sources of error and establish the reliability of the examination. The study concluded that the slight variability of case mix from day to day might not always be to the benefit of some candidates, depending on which day they undertook the examination. The modelling also showed that using a borderline group method was feasible and would allow effective compensation, thus addressing problems associated with the number-to-pass (N2P) method. An example of the problem with the N2P method is that eight marginal passes with four clear fails was an overall pass, whilst seven clear passes with five marginal fails was an overall fail.
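The inversion described above can be illustrated with a short sketch. The grade points (clear pass 3, marginal pass 2, marginal fail 1, clear fail 0), the 12-case circuit and both cut-scores below are illustrative assumptions for the purpose of the example, not the RCGP’s actual marking scheme.

```python
# Illustrative comparison of two standard-setting rules for a 12-case circuit.
# Grade values and cut-scores are hypothetical, chosen only to show the effect.

GRADE_POINTS = {"CP": 3, "MP": 2, "MF": 1, "CF": 0}  # clear pass .. clear fail

def number_to_pass(grades, cases_needed=8):
    """N2P: count cases passed (clear or marginal); a marginal pass counts
    fully and a marginal fail not at all, so no compensation is possible."""
    passes = sum(1 for g in grades if g in ("CP", "MP"))
    return passes >= cases_needed

def compensatory(grades, cut_score=20):
    """Borderline-style aggregate: sum the case marks, so strong performance
    on some cases can offset weak performance on others."""
    return sum(GRADE_POINTS[g] for g in grades) >= cut_score

profile_a = ["MP"] * 8 + ["CF"] * 4   # eight marginal passes, four clear fails
profile_b = ["CP"] * 7 + ["MF"] * 5   # seven clear passes, five marginal fails

print(number_to_pass(profile_a), number_to_pass(profile_b))  # True False
print(compensatory(profile_a), compensatory(profile_b))      # False True
```

Under N2P the uniformly borderline profile passes and the mostly strong profile fails; a compensatory aggregate reverses both outcomes, which is the anomaly the borderline group method was shown to address.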

 

  iv) Evaluation by Birmingham University. The RCGP appointed the University of Birmingham to undertake a three-year evaluation of the impact of the new General Practice Specialty Training Curriculum on general practice training. The evaluation began with the introduction of the new curriculum and assessment system in 2007, and the team is currently drafting its final report, which will be presented to the next meetings of the Postgraduate Training Board and Council. The findings of this evaluation have continually informed the work of the College over the past three years and have led to the current restructuring of the curriculum. As part of the study, Birmingham University has held focus groups, conducted case studies and surveyed trainees and trainers in order to feed back their views to the College.

 

  v) Current research activity. Mindful of the concern that candidates fail the CSA on consulting skills, the RCGP has also looked into patterns of behaviour amongst failing candidates during the CSA. A paper presented at the AMEE conference 2010, “The Structure of Failure – an analysis of the patterns of consulting behaviour amongst failing candidates in a high-stakes postgraduate OSCE towards enhancing the quality of candidate feedback” (ML Denney, R Wakeford), showed that the most common feedback statement given by examiners as a reason for failing a CSA case concerned clinical management. This was true for both UK-trained graduates and international medical graduates, and unpublished work in the deaneries indicates that international medical graduates fare worse in their workplace-based assessments and are more likely to be referred to deanery Annual Review of Competence Progression (ARCP) panels.

A list of further reading relating to performance in comparable examinations can be found in appendix 9.

 

  2. Fairness / Equality and diversity

 

  a) IMGs perform less well in the MRCGP than UK-trained doctors. Is this problem unique to the MRCGP?

 

Direct comparisons of end-point results with other examinations, such as the MRCP, are rarely possible because of the differing structures of these other postgraduate examinations. Many other Royal Colleges have staged examinations, and poorer-performing candidates are often removed from the cohort before they reach the end-point summative examinations. However, similar differences in performance relating to gender and place of primary medical qualification have been reported in the MRCP(UK), and anecdotally by the RCPsych. Data from the United States Medical Licensing Examination (USMLE) show that the percentage of passing candidates who attended medical school in the US or Canada was consistently higher than for IMGs. Data from their exam cycles from 1993 to 2009 can be found at: www.usmle.org/Scores_Transcripts/performance.html. A recent UK report from NCAS shows that place of primary qualification has a powerful influence on referral to their services (appendix 12).

In addition, it is interesting to note that the performance of IMGs, as a group, in the CSA is mirrored in their performance in the AKT. It is therefore likely that any problems experienced by this group are not solely due to the CSA itself. Furthermore, the AKT is a machine-marked test, which corroborates our finding in the CSA that the results are not influenced by examiner bias. Data from the 2009 MRCGP show this, and are attached as appendix 10.

With regard to pass rates in general, most other colleges do not publish their results in the detail that we do, so it is difficult to make comparisons. It can therefore appear that the RCGP in particular has a problem with the performance of IMGs because we publish information about it where others do not. The aggregated annual pass rates for the MRCGP are likely to be higher than those of other colleges, while the rates for September were similar to those of the MRCP clinical examination, PACES. The GMC commended the RCGP for the quality of the data published in its Annual Specialty Report to PMETB in 2009.

The RCGP has been at the forefront of enquiry into equality and diversity issues in assessment, with papers in the literature on the old MRCGP from the early 1990s onwards.[2]

A BMA publication in January 2008, “Examining equality – a survey of royal college examinations”, commended the RCGP for its transparency regarding publication of results, stating that “the Royal College of General Practitioners operates a general policy of openness and the Examination Board encourages the publication of academic papers on its activities”. It also noted differential performance between UK-trained graduates and international medical graduates in a wide variety of postgraduate medical and surgical examinations: this was shown to be the case not only in the old MRCGP but also in the examinations of the Royal College of Physicians and Surgeons of Glasgow, the Membership of the Royal College of Paediatrics and Child Health (MRCPCH) examination, the Fellowship of the Faculty of Accident and Emergency Medicine (FFAEM) examination, and the Membership of the Royal College of Physicians (MRCP(UK)) Diploma.

[1] Objective Structured Clinical Examination (OSCE), MRCP(UK) Part 2 Clinical Examination (PACES), Mini Clinical Evaluation Exercise (Mini-CEX), Directly Observed Procedural Skills (DOPs).

[2] Wakeford R, Farooqi A, Rashid A, Southgate L. Does the MRCGP discriminate against Asian doctors? BMJ 1992;305:92–4.
