Staff educator's guide: Evaluating a clinical orientation program

By Alvin D. Jeffery, Robin L. Jarvis, and Amy J. Word-Allen | 09/04/2019

This chapter from the second edition of Staff Educator’s Guide to Clinical Orientation, published by Sigma, reviews several models for evaluating the success of a program.

Orientation programs, regardless of their design or structure, should be evaluated for their efficacy. Just as the nursing process and the ADDIE model complete their cycles with Evaluation, so, too, do all successful programs. By evaluating your orientation program from various perspectives and levels, you ensure an effective, efficient orientation program that adds value to the individual, the unit/department, and the organization: a win-win-win situation.

Evaluating an orientation program should provide you with useful information that will do one of two things:

  1. Describe areas of the program that need to be modified because they are not as effective or efficient as they could be

  2. Supply evidence that the program is in fact doing what it’s supposed to do

Although this may sound simple and self-evident, consider the following two examples in which having documented, objective evaluation data proved useful.


REAL-WORLD EXAMPLE: THE NEED FOR EVALUATION #1
Dan was the staff development specialist in charge of the first week of nursing orientation for all new hires entering his organization. When he assumed this role, he discovered that evaluation of this first week of training consisted of a simple survey on the last day of the week that asked new hires whether they liked the content they had learned. Although Dan knew this was a good start, he felt more should be done to evaluate his program. So, he developed a survey for preceptors to complete within a new hire's first 2 weeks on the unit caring for patients. This survey evaluated basic skills observed by the preceptor.

Dan quickly discovered that documentation in the electronic medical record was a problem among new hires in most departments. Therefore, he modified the training day on documentation to include more case-based and simulation scenarios. Post-intervention data revealed improved documentation performance, and unit-based educators offered anecdotal feedback that the new hires' ability to document efficiently had drastically increased preceptor satisfaction and allowed preceptors to cover more advanced skills much earlier.

This example shows how including various levels of evaluation provides for a more well-rounded assessment of program efficacy and highlights potential opportunities for improvement.


REAL-WORLD EXAMPLE: THE NEED FOR EVALUATION #2
Marie, a unit-based educator, was invited to attend a meeting with other unit-based educators as well as several senior-level managers who had a strong influence on training and development in the organization. Citing economic hardship, the managers informed the educators that various “non-essential” components of initial orientation would be removed. Notably, an 8-hour class on medication safety was being cut from central orientation on the rationale that licensed healthcare providers should already be familiar with this information and that preceptors should reinforce it at the unit level.

Although Marie had a “gut feeling” that this class should not be removed (and she knew that her own new hires found it beneficial), she recognized she would need more objective data to prevent its removal. After the meeting, Marie gathered already-available data on rates of serious adverse drug events, beginning with data collected approximately 2 years before the medication safety class was added to central orientation. She shared the data with the managers, showing them that implementation of the class had resulted in a 50% decrease in serious adverse drug events and had saved the organization more money than it spent on salaries for class attendance. The managers decided to keep the class in orientation.

This example shows the value of collecting objective evaluation data for the purpose of maintaining orientation components that have proven value.


The rest of Chapter 6 and supplemental materials from Staff Educator's Guide to Clinical Orientation, 2nd Edition, are available in the Virginia Henderson Global Nursing e-Repository of Sigma Theta Tau International Honor Society of Nursing (Sigma).

The book is available for purchase at the Sigma Marketplace.

Alvin D. Jeffery, PhD, RN-BC, CCRN-K, FNP-BC, is a research fellow with the U.S. Department of Veterans Affairs, where he studies nursing-focused informatics interventions.

Robin L. Jarvis, MS, SPHR, is principal of R.L. Jarvis & Associates, providing leadership development and strategic facilitation.  

Amy Word-Allen, BSN, RN, is a case manager with Avalon Hospice in Rutherford County, Tennessee.

