Session Description: Data collected via questions posed throughout CME/CE activities is critical in demonstrating to various stakeholders the value of the educational intervention. However, online user analytics show that every “ask” of a learner is a burden that becomes a point of attrition along the learner journey: we lose some participants every time we put a task in front of them (such as a question or a click). The educational provider’s conundrum, then, is collecting enough data to say something meaningful about the activity while not inadvertently driving away learners with cumbersome polling. This is particularly important for grant-supported programs in which the provider has committed to participation goals.
Despite the need to be thoughtful and strategic about the questions we pose to learners, many providers use evaluation forms they haven’t reviewed in years and/or ask awkwardly worded questions (demographic, pre/post, satisfaction) that learners find confusing, which muddies results. Further, questions are sometimes placed inefficiently along the learner journey, forcing learners to answer the same question repeatedly or collecting responses at a point that lowers the potential sample size.
This session will first distinguish between questions required by accreditation standards, questions that have become industry standards, and optional/elective questions. From there, we will discuss ways to lower learner burden and survey fatigue by 1) optimizing question wording and structure to maximize clarity and brevity and 2) determining the best placement for each question (registration page, pre/post, evaluation, follow-up, or another survey) to improve the efficiency of data collection and reduce redundancy.
The session will include real-world case studies of various types of CME/CE questions to compare different ways of asking for information, the strengths and weaknesses of each approach, and limiting circumstances that may determine why one approach is better than another (such as LMS limitations, access to the learner’s demographics, etc.). Session participants will learn how to think critically about the quality of data a question will return based on its wording and placement. We will discuss best practices for survey development to streamline questions, reduce confusion, and make analysis and reporting of the captured data more efficient.
Participants will come away with a more critical eye toward what makes a “good” question, will be better able to evaluate their own surveys/forms, and will have resources from the presenters to refer to when editing existing questions or designing new ones.
Learning Objectives:
Describe the difference between accreditation-required questions, industry-standard questions, and optional/elective questions for evaluations and other forms
Critically evaluate/update your data collection forms (registration through follow-up) for strategic question placement, clarity, and optimal format
Explain how to lower the burden on learners and lower the barrier to entry for participants being recruited to educational activities