Sustainability, maintenance & quality assurance
Guiding questions:
- How to sustain the use of VPs?
- How to update VPs in the curriculum?
- How to evaluate the quality of VPs?
The development of VP collections and their implementation are often supported by grants. However, even the most successful projects have to face the moment when the source of funding dries up and new ways of sustaining the initiative are needed. Keeping an initiative alive over the long term has never been easy, and the methods depend on the setting. However, some approaches have proven successful.
One such approach is collaboration across institutions [Berman 2011]. It has often been observed that universities suffer from the not-invented-here syndrome and insist on developing and maintaining their VPs by themselves [Casanova 2019]. In the long run, the costs are difficult to meet, but they become affordable when divided across many partners. Sometimes users of the VP collection may be required to pay a subscription fee to cover the basic costs [Berman 2011].
It is important that the community of users develops a sense of ownership of the collection and feels responsible for its content. This can be implemented by establishing an editorial board that consists of stakeholders (e.g. directors of clerkships) interested in the use of VPs at the university [Berman 2011]. It also helps to align the collection with current national or university catalogs of learning objectives [Altmiller 2021] as well as with changes in professional practice [Morrissey 2014], and to modify the content accordingly. For instance, the COVID-19 pandemic introduced several changes in the clinical reasoning process and in hospital practice, and consequently required an update of an existing collection of VPs, as described by Hege et al. [Hege 2020]. The VP system should enable evolution and end-user customization of the cases [Botezatu 2010] and empower teachers to perform updates themselves if they wish to [Zary 2009].
For sustained long-term use of VPs, the collection needs to reach a critical mass of users [Kolb 2009]. It has to be tightly integrated into the mainstream curriculum, because peripheral add-ons usually do not survive [Casanova 2019]. Leaders at universities must emphasize the importance of VPs and lend them lasting support [Djukic 2012]. VP integration also requires time to become established and mature. For instance, Kolb et al. judged in retrospect that the 3-year period of the NetWoRM project was perhaps insufficient for successful implementation of the VPs in all centers [Kolb 2009].
Finally, sustainable use of VPs requires accountability [Botezatu 2010]. There should be a mechanism to report feedback. Even unsuccessful uses and less favorable events should be analyzed and conclusions drawn. Feedback can be collected using established or self-developed questionnaires. On the one hand, established tools are usually better validated and permit results to be compared across different implementations. On the other hand, homegrown surveys can be tailored to local needs. Among the ready-made questionnaires, the eViP project proposed two VP evaluation tools which were later further developed and validated by Huwendiek and colleagues. One of them is a tool to measure the quality of VPs in fostering clinical reasoning [Huwendiek 2015]. The other is a set of tools to evaluate the quality of VP curricular integration: a student questionnaire and a reviewer checklist [Huwendiek 2009]. These tools have been adopted by iCoViP and are now available on our website together with the results of the pilot evaluations. Sobocan and Klemenc-Ketis developed a psychometric tool to measure acceptability and attitudes of medical students towards the use of VPs in the classroom [Sobocan 2016]. Last but not least, Kleinheksel and Ritzhaupt developed an instrument to measure the adoption and integration of VPs in nursing education [Kleinheksel 2017].
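Whichever questionnaire you choose, summarizing the collected responses does not require specialized software. The following minimal sketch illustrates one possible way to do it; the file name and layout are assumptions for illustration only (responses to Likert-type items coded 1 to 5, exported as a CSV file with one row per respondent and one column per item), not an export format prescribed by any of the instruments cited above.

```python
import csv
from statistics import mean, stdev

# Assumed (hypothetical) CSV layout: one row per respondent,
# one column per Likert item coded 1 (strongly disagree) to 5 (strongly agree).
def summarize_likert(path):
    """Return response count, mean and standard deviation per item."""
    items = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for item, value in row.items():
                if value.strip():  # skip unanswered items
                    items.setdefault(item, []).append(int(value))
    return {
        item: {
            "n": len(values),
            "mean": round(mean(values), 2),
            "sd": round(stdev(values), 2) if len(values) > 1 else 0.0,
        }
        for item, values in items.items()
    }

if __name__ == "__main__":
    for item, stats in summarize_likert("vp_questionnaire.csv").items():
        print(f"{item}: mean={stats['mean']}, sd={stats['sd']} (n={stats['n']})")
```

Per-item means reported together with the number of respondents make it easy to compare successive course iterations or different VP collections.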
Besides questionnaires, it is also possible to use observational studies (an ethnographic approach) to evaluate how students in small groups interact with VPs [Kulasegaram 2018, Edelbring 2018]. When collecting verbal feedback on VP use, Hirumi et al. suggest first asking the students what went well in the VP before addressing problems. This helps to direct students' emotions in a positive manner [Hirumi 2016]. Formal usability studies may involve professional human-machine interaction tools such as eye movement trackers [Boeker 2006, Stevens 2006]. It is also possible to analyze user activities based on VP system logs, as described in more detail in the section on learning analytics. Lastly, it is of course possible to organize formal studies to explore the effectiveness of VPs in reaching desired learning outcomes in comparison to alternative methods (e.g. simulated patients, human patient simulators) or to different VP design or implementation variants. Such evaluations may even follow a randomized controlled trial design and be published as research studies; Kononowicz et al. recently published a systematic review with meta-analysis of such studies [Kononowicz 2019]. When you intend to advance your evaluation to the level of a research study, do not forget to seek approval from an ethical review board.
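As a concrete illustration of the log-based approach, the sketch below flags VP cases with unusually high abandonment rates, which can indicate quality problems such as confusing navigation or excessive case length. The session log format (a CSV with case_id, student_id and completed columns) and the thresholds are assumptions made for this example; real VP systems each have their own export formats.

```python
import csv
from collections import defaultdict

# Hypothetical export format: one row per VP session with
# case_id, student_id and completed ("1" if finished, "0" if abandoned).
def flag_abandoned_cases(log_path, threshold=0.4, min_sessions=20):
    """Return VP cases whose abandonment rate exceeds the threshold."""
    started = defaultdict(int)
    completed = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            started[row["case_id"]] += 1
            completed[row["case_id"]] += int(row["completed"])
    flagged = []
    for case_id, n in started.items():
        if n < min_sessions:
            continue  # too few sessions for a meaningful rate
        rate = 1 - completed[case_id] / n
        if rate > threshold:
            flagged.append((case_id, rate))
    return sorted(flagged, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    for case_id, rate in flag_abandoned_cases("vp_sessions.csv"):
        print(f"Review case {case_id}: {rate:.0%} of sessions abandoned")
```

Flagged cases are candidates for closer inspection by the editorial board, for instance using the reviewer checklist mentioned above.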
Recommendations:
- Encourage your colleagues to support you in sustaining the use of VPs by giving them a sense of ownership of the collection. Invite them to voice their opinions on quality and let them co-decide on the development of the integration in a VP editorial board or a special task force.
- Save time when evaluating the quality of VPs by using the established questionnaires available (e.g. from iCoViP).
- Also use VP system logs to detect quality problems.