Tamrack New Employee Onboarding (NEO) Evaluation
Business: Tamrack
Business Unit: Learning and Performance Solutions
Evaluand: New Employee Onboarding (NEO)
Prepared by: Team CARP, as part of the Boise State University OPWL program
Note: This example is a case study presented as part of the OPWL 530 Evaluation course. While the evaluation process was real, the business and program are fictitious.
Tamrack, Inc. (a pseudonym) is a cleaning service organization founded in 2018 and headquartered in Boise, Idaho. The company maintains a geographically dispersed workforce with branches across five states in the United States. In response to a strategic shift toward remote work in 2020, Tamrack transitioned its New Employee Onboarding (NEO) program to an entirely online, blended format.
The Learning and Performance Solutions (LPS) department is currently responsible for the maintenance and deployment of the NEO program. This company-wide initiative is designed to provide a consistent introduction to the organization, with the ultimate goal of improving employee satisfaction, enhancing work-life balance, and maximizing talent acquisition while reducing attrition. The program consists of a synchronous virtual session on the first day, followed by self-paced online modules within the company’s Learning Management System (LMS).
Based on discussions with Ms. Gibson, we learned that the primary intent was to identify areas for improvement; the intended users plan to use the findings to revise the program. We therefore conducted a goal-based formative evaluation of the NEO program, with the overall purpose of finding areas that need improvement in order to produce more positive outcomes. We followed Chyung's (2019) 10-step evaluation process.
The NEO program serves approximately 30 new hires annually across all roles. The program begins with a synchronous first-day meeting where representatives from various business units present and answer questions. New hires then complete the remainder of the week asynchronously through the company's LMS at their own pace. During the first week, each new hire is paired with a mentor and encouraged to schedule a meet-and-greet. In total, the onboarding period spans one month.
Upstream Stakeholders
Gill Gibson, LPS Director and client
Bill Berg, Head of HR
Instructional Designers
Sarah Smith, CEO
IT representatives
HR representatives
Downstream Direct Impactees
New employees
Downstream Indirect Impactees
Direct supervisors/managers of new hires
NEO Mentors
Current employees
Our team developed a Program Logic Model (PLM) that mapped the NEO program's resources, activities, outputs, outcomes, and impact. Using the PLM, we identified two dimensions critical to the desired goals of the program: role readiness and feeling welcomed.
We prioritized the Activities and Outcomes categories of the PLM for review, as formative evaluations focus on ways to improve the quality of items listed in the resources, activities, and outputs categories (Chyung, 2019). Both dimensions were assigned equal importance weighting based on the client's input.
For each dimension, the team created a set of complementary data collection instruments. These included semi-structured online surveys and one-on-one video conference interviews, targeting both managers and NEO participants. The surveys were designed to capture quantitative data through Likert-scale items alongside qualitative insights from open-ended questions, while the interviews used purposive sampling to gather richer qualitative perspectives.
Where possible, instruments were shared across dimensions (for example, the participant survey and participant interview had specific questions allocated to each dimension), which streamlined the data collection process and reduced respondent burden.
The rubrics established a four-level performance framework: Exceeds Expectations, Met Expectations, Improvement Needed, and Serious Problems Detected. These were applied consistently across all data sources. Quantitative items were scored using Likert-scale averages, with defined numerical ranges for each performance level, while qualitative data from interviews were assessed by pairing scaled ratings with thematic analysis of participant comments.
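The scoring logic described above can be sketched in a few lines of Python. The numeric cutoffs below are hypothetical placeholders for illustration only; the actual ranges were defined in the team's rubrics.

```python
# Illustrative sketch of rubric scoring: map a Likert-scale average (1-5)
# to one of the four performance levels. Cutoff values are assumptions,
# not the team's actual rubric ranges.
def rubric_level(likert_average: float) -> str:
    """Return the performance level for a given Likert-scale average."""
    if likert_average >= 4.5:
        return "Exceeds Expectations"
    if likert_average >= 3.5:
        return "Met Expectations"
    if likert_average >= 2.5:
        return "Improvement Needed"
    return "Serious Problems Detected"

# Example: average several survey item scores, then classify the result.
scores = [4.7, 4.2, 3.6, 4.9]
print(rubric_level(sum(scores) / len(scores)))  # "Met Expectations" under these cutoffs
```

Under a scheme like this, the self-paced modules' 3.63 average would sit near the bottom of the "Met Expectations" band, consistent with the findings reported below.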
The team also created dimensional triangulation rubrics that allowed synthesis across all data sources within each dimension. This triangulation approach ensured that no single data source could drive the overall judgment, strengthening the credibility and defensibility of the evaluation's conclusions.
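One plausible reading of the triangulation rule, suggested by the results reported below (one source at "Exceeds" and one at "Met" yielding an overall "Met Expectations"), is a conservative synthesis in which the overall rating for a dimension cannot exceed its lowest-rated data source. The sketch below illustrates that assumed rule; the team's actual rubric may have combined sources differently.

```python
# Hypothetical sketch of dimensional triangulation: take the most
# conservative (lowest) rating across all data sources for a dimension,
# so no single strong source can inflate the overall judgment.
LEVELS = [
    "Serious Problems Detected",
    "Improvement Needed",
    "Met Expectations",
    "Exceeds Expectations",
]

def triangulate(source_ratings: list[str]) -> str:
    """Return the lowest performance level present among the data sources."""
    return min(source_ratings, key=LEVELS.index)

print(triangulate(["Exceeds Expectations", "Met Expectations"]))
# -> "Met Expectations"
```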
A consistent narrative emerged across all data sources: the NEO program lays a solid general foundation but falls short of preparing new hires for the specifics of their roles. Survey scores from both managers and participants were generally strong (averaging in the 4.3–4.9 range for most items). However, the self-paced online training modules stood out as a clear weak point, earning the lowest score in the entire evaluation (3.63) and drawing the only "Strongly disagree" responses.
Open-ended feedback explained why. Participants described feeling overwhelmed by the volume of content and struggling to connect the general material to their roles. Managers echoed this, noting that department-specific systems, product knowledge, and process details are not covered in NEO, leaving that burden on new employees' teams.
Interview data from both groups reinforced the pattern. Ratings were strong (4.33 for managers, 4.25 for participants), but qualitative comments consistently pointed to gaps in role-specific preparation, mentoring consistency, and applied learning opportunities. All four data sources aligned with “Met Expectations” under the rubrics, and the dimensional triangulation confirmed the overall rating.
The cultural onboarding story was notably stronger. Participant survey scores ranged from 4.74 to 4.89, with no negative responses recorded, placing the survey squarely at "Exceeds Expectations." Participant interviewees rated their experience at 4.25 and described positive early connections with their teams (introductions, check-ins, and team lunches were frequently mentioned as highlights).
However, some participants noted that the orientation felt somewhat impersonal and that connection beyond their immediate team was limited, which brought the interview data closer to "Met Expectations." The open-ended survey responses reinforced this, with the most common suggestion being more interaction with people rather than time spent alone at a computer. Under the dimensional triangulation rubric, with one source at "Exceeds Expectations" and one at "Met Expectations," the overall rating was determined to be "Met Expectations."
Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. SAGE Publications.