
Evaluation of Continuing Professional Development Group Activities
Robyn L. Houlden MD, FRCPC; Christine P. Collier PhD, FCACB


In The Royal College of Physicians and Surgeons of Canada Maintenance of Certification program, group learning activities (rounds, journal clubs, workshops, courses, conferences, distance learning) must include an evaluation component to be approved under section 1 of the framework of continuing professional development (CPD) options. Traditionally, evaluation of these activities has either been neglected or has consisted of a form distributed at the end of the session in which participants rank their satisfaction with the "teaching." A "happiness" index is typically applied in questions that focus on the speaker's presentation style and the organization of the meeting. As this type of evaluation provides limited information to the speakers and participants, CPD planners should consider other strategies. This article is a practical guide to refocusing the evaluation of group CPD events on learning outcomes.

Step 1: Plan Evaluation
Evaluation is an integral component of the CPD planning cycle. It ties the session to the original needs assessment and learning outcome objectives. By determining what learning has occurred, the focus is learning-outcome-centred not speaker-centred. The speakers, CPD planners, and participants as adult learners share the responsibility to determine whether learning has occurred and what impact this has had. Too often, evaluation is an afterthought that is done to satisfy accreditation requirements. Evaluation of group CPD activities serves many purposes for participants, speakers, and CPD planners. These include:

  • feedback about the value of the event, including what learning objectives were met, the effectiveness of the teaching strategies, and the suitability of the learning environment
  • assistance with CPD planning, including the identification of ongoing learning needs and additional learning strategies
  • assessment of competency in knowledge, skills, and attitudes addressed in the CPD event
  • documentation of the impact on the participants' clinical practice, and ultimately on patients' and health-care outcomes.

Step 2: Focus on Learning Outcomes
Evaluation of group learning activities has typically addressed participants' satisfaction with the teaching. By refocusing evaluation on the learning that has occurred during the session, or as a result of the session, participants are encouraged to become active learners, not just passive listeners; and speakers are encouraged to present the material in a way that facilitates learning for each participant. Speakers are more than just good presenters; they are teachers.

Dixon has defined four levels of evaluation for continuing medical education that are widely accepted (Table 1).1 The level of evaluation and the teaching strategies used must match the learning outcome objectives defined for the session.


Table 1. Levels of evaluation for continuing medical education
Level 1: Perception and satisfaction assessment
Level 2: Competency assessment of knowledge, skills, and attitudes
Level 3: Professional performance assessment
Level 4: Health-care outcome assessment

Level 1: Perception and Satisfaction Assessment
This is the easiest and least expensive type of evaluation for speakers, CPD planners, and participants. Questions are typically asked in a written survey. Useful questions and their rationale are outlined below.

  • Did the content meet your perceived learning needs?
    An effective needs assessment identifies both perceived and unperceived needs of the potential audience.2 Information from this question provides feedback about the accuracy of the identification of perceived needs. If participants indicate that their expectations were exceeded, this may suggest that unperceived needs were also met.
  • Were the learning outcome objectives clearly stated?
    Learning outcome objectives assist physicians in choosing group learning activities that meet their perceived educational needs.3 Effective objectives describe what participants should be able to do at the end of the learning experience.
  • Were the learning outcome objectives met?
    It is useful to list each learning objective on the evaluation form and ask participants to rate how each one was met. This helps speakers to differentiate between objectives that were perceived as having been achieved and others that were not. A refinement of this concept is to ask participants to indicate if they were able to meet any of the learning outcome objectives before the session to determine if the learning was new.
  • Was at least 25 per cent of the time allocated for interactive learning?
    Interaction with colleagues enhances learning among physicians. Strategies include posing questions to the audience, touchpad voting, small-group discussions, problem-solving or case-based sessions, and question and answer periods.4
  • Were the teaching methods effective?
    Teaching methods can enhance or detract from learning. Participants can be asked questions about the allocation of time during the sessions, the teaching formats used, and the amount and level of information. The educational methods used should complement the desired learning outcomes. For example, sessions designed to teach technical skills should allow enough time for learners to gain hands-on experience. Sessions designed to enhance communication skills benefit from role-playing or videotaping, and a chance for self-assessment or feedback.

    For the questions listed above, participants can rate their responses on a Likert scale of 1 to 5 (with 1 being definitely not true, 2 being probably not true, 4 being probably accurate, and 5 being definitely accurate). Record the number of respondents who gave each rating for a particular question. For example, if 30 participants answered question 1, and two participants gave a rating of 2, 10 gave a rating of 4, and 18 gave a rating of 5, then an overall score of 134 would be obtained (2 × 2 + 10 × 4 + 18 × 5 = 134). A "satisfaction index" can be calculated by multiplying the overall score by a factor of 20 (100 per cent divided by the maximum rating value of 5), and dividing it by the number of participants (30). This gives a satisfaction index of 89.3 per cent for this example (134 × 20/30). Questions with a satisfaction index below 60 per cent may indicate an area of concern, and are worth investigating.
  • What topics would you like to hear at future events?
    Providing space on the evaluation form for participants to suggest future topics is one way of performing a simple needs assessment. In addition, the topics may give hints as to what learning needs were not met.
  • What changes do you plan to make in your clinical practice based on what you learned? What additional learning do you plan to pursue?
    It is useful to ask participants what they plan to do with the material learned. Studies have shown that physicians who make a commitment to change an aspect of their practice at the end of an educational intervention are more likely to be successful in implementing the change.5

Perception and satisfaction assessment can also be performed using structured interviews or focus groups after the group learning activity.
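The satisfaction-index arithmetic described above can be sketched in a few lines of Python; the function name and example values are ours, for illustration only:

```python
def satisfaction_index(rating_counts):
    """Compute the satisfaction index (per cent) for one question.

    rating_counts maps each Likert rating (1-5) to the number of
    respondents who chose that rating.
    """
    respondents = sum(rating_counts.values())
    overall_score = sum(rating * count for rating, count in rating_counts.items())
    # The factor of 20 is 100 per cent divided by the maximum rating value of 5.
    return overall_score * 20 / respondents

# Worked example from the text: of 30 participants, two gave a rating
# of 2, ten gave a rating of 4, and eighteen gave a rating of 5.
print(round(satisfaction_index({2: 2, 4: 10, 5: 18}), 1))  # 89.3
```

Questions falling below the 60 per cent threshold mentioned above can then be flagged automatically for follow-up.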

Level 2: Competency Assessment
Although satisfaction surveys are the most commonly used evaluation tool, they do not provide an objective assessment of whether any new learning has occurred. Competency assessment provides feedback to the physician about his or her abilities, and can serve as documentation for credentialing or licensing bodies.

For group learning activities devoted to the acquisition of new knowledge, multiple-choice or true-false questions can be used, and answers can be shared verbally or by means of touchpads. Pre-tests and post-tests are useful to exclude the possibility that the participant's knowledge was pre-existing and to demonstrate to the speaker and participants that something new was learned from the CPD experience. To promote reflective and critical thinking, open questions can be posed to participants about how they would manage or diagnose a case.

For learning activities devoted to the acquisition of new skills or professional attitudes, participants can be directed to self-assessment tools. Direct observation or videotaping in a real or simulated setting using a standardized patient, simulator, or objective structured clinical examinations (OSCE) format with feedback is valuable if resources permit. This type of assessment is commonly used with technical skills workshops such as advanced trauma life support (ATLS) and advanced cardiac life support (ACLS).

Level 3: Professional Performance Assessment
There can be a discrepancy between what physicians know and what they do in practice. Level 3 assessment focuses on whether participants incorporate what they have learned. Traditionally, group learning activities have not used educational strategies that promote a change in practice. For change to occur, physicians must be given opportunities to apply and practise new learning.6 With the recognition of the importance of evidence-based medicine, this level of assessment provides evidence about the effectiveness of a CPD activity.

Participants may be observed directly or videotaped in their work setting after a CPD group learning intervention. Chart reviews can be done by another physician or trained individual. Physicians can be surveyed or interviewed using chart-stimulated recall. Other proxy measures such as surveys of patients, prescribing data, and length of stay data may provide ancillary evidence.

The practice audit is another valuable performance assessment tool.7 Under the Maintenance of Certification program, Fellows can earn credits by participating in "best practice courses." These courses promote evidence-based clinical practice. Participants earn one credit per hour for attending the course under section 1. They earn an additional two credits per hour under section 5 (practice review and appraisal) for time spent completing the self-audit tool provided to them. Participants who return the results of their audit to course planners earn an extra credit for each hour of the best practice course attended. They earn additional credits under section 4 (structured learning projects) for documenting changes in practice.
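The credit rules described above can be sketched as a simple tally. This is a hypothetical illustration only (the function name and labels are ours, and the Maintenance of Certification program remains the authoritative source); section 4 credits for documenting practice change are omitted because no rate is specified in the text:

```python
def best_practice_credits(course_hours, audit_hours, audit_returned):
    """Tally credits for a best practice course, per the rules above."""
    credits = {
        "section 1 (course attendance, 1/hour)": course_hours,
        "section 5 (self-audit, 2/hour)": 2 * audit_hours,
    }
    if audit_returned:
        # One extra credit per hour of the best practice course attended.
        credits["extra (audit results returned, 1/hour)"] = course_hours
    return credits

# Example: a 3-hour course plus 2 hours spent on the self-audit tool,
# with the audit results returned to the course planners.
print(sum(best_practice_credits(3, 2, True).values()))  # 10
```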

The CPD planner is not solely responsible for ensuring that level 3 assessment occurs. Physicians need to recognize their responsibility and to assess for themselves whether their learning affects their practice. For activities that are closely related to their personal learning goals, physicians should consider designing and performing their own practice-assessment activities, to claim two credits per hour under section 5.

Level 4: Health-Care Outcome Assessment
The ultimate measure of the effectiveness of a CPD group activity is whether new learning makes a difference in health-care outcomes. A variety of measures can be used including patients' outcomes (complication rates, life expectancy, symptom relief), practice patterns (ordering of tests, referral patterns, prescribing practices), and resource use (availability of surgical and diagnostic equipment, clinic and operating-room staff). For research purposes, randomized, controlled studies can be used to compare health-care outcomes achieved by physicians who participate in the CPD activity with those who do not.

Step 3: Use and Share Evaluation Results
At the outset of any group learning activity, participants should be told about the evaluation method(s) to be used. The results must be shared in a timely fashion. For the speaker and CPD planner, they provide information about the success of the event in helping participants learn. This is useful in developing learning strategies. For the participant, they help identify ongoing learning needs.

Aggregate results with anonymity preserved are likely to be perceived as less threatening. If individual feedback is required, such as competency assessment for a new skill, the evaluator should focus on behaviours that can be changed.8 As adult learners, participants must accept as much responsibility for soliciting feedback and evaluation as faculty members have in providing it.

Step 4: Evaluate Sequentially
Not all learning objectives can be met during a CPD event. Long-term evaluation is often needed to assess the extent to which the learning outcomes have been met. This may be formally organized by the CPD planner, or performed by the physician as a personal learning project. For CPD group learning activities that consist of a series of sessions, the overall program should be evaluated both at each session and at its conclusion.


The acronym "PLUS" is helpful in keeping track of the key steps in evaluation (P = plan up front, L = learning outcome focused, U = use and share, S = evaluate sequentially). It is a reminder that there is more to evaluation than just meeting accreditation requirements and earning credits. We encourage Fellows involved in CPD planning and teaching to incorporate the ideas presented in this article in the evaluation strategy for their next group CPD activity. We also encourage Fellows attending CPD events to evaluate their learning and to consider developing practice-review strategies to complement their attendance at group learning events. By linking evaluation with teaching in the ways that we have described, learning will be more effective, and speakers will be more satisfied.


  1. Dixon J. Evaluation criteria in studies of continuing education in the health professions. Eval Health Prof 1978;1:47-65.
  2. Lockyer J. Needs assessment and grand rounds: how to do it. Ann R Coll Physicians Surg Can 2000;33:134-6.
  3. Houlden RL, Collier CP. Learning outcome objectives: a critical tool in learner-centered education. J Cont Educ Health Professions 1999;19:208-13.
  4. Lockyer J, Toews J. The maintenance of competence credit assessment process: is there evidence to support the questions asked? Ann R Coll Physicians Surg Can 1995;28:87-9.
  5. Parker FW III, Mazmanian PE. Commitment learning contracts and seminars in hospital-based CME: change in knowledge and behaviour. J Cont Educ Health Professions 1992;12:49-63.
  6. McGuire C, Hurley RE, Babbott D, Butterworth JS. Auscultatory skill: gain and retention after intensive instruction. J Med Educ 1964;39:120-31.
  7. Houlden RL, Yen D. The practice audit: addressing the difference between knowing and doing. Ann R Coll Physicians Surg Can 2000;33:272-270.
  8. Brown MG, Hodges B, Wakefield J. Points for giving effective feedback. In: Evaluation methods: a resource handbook. McMaster University, 1995:16-22.

Address for reprints: R.L. Houlden, Division of Endocrinology, Kingston General Hospital, 76 Stuart St., Kingston ON K7L 2V7, e-mail

Copyright © 2012 Royal College of Physicians and Surgeons of Canada. All Rights Reserved.