Editor's Note: Today's post comes to us courtesy of guest contributor Chris Dobyns.
Recently I was asked to take responsibility for conducting a formal evaluation of two human capital programs, to gauge the effectiveness of some changes that had been implemented over the last year or so.
Program Evaluation . . . What Program Evaluation?
While generally familiar with the concept of program evaluation as one of the steps in fundamental problem-solving and process change (Analysis, Design, Development, Implementation and Evaluation), I was totally surprised to discover how little documentation we had on the topic, and – to our eternal shame – how few evaluations of our human capital programs we’ve actually conducted.
A power search of this forum returned only a single reference using the term “program evaluation.” Maybe other terms were used synonymously, but if they were, I wasn’t able to find them. The notable absence of the terminology, or even a reference to this seemingly critical process step, caused me to wonder what was going on.
The What and Why of Program Evaluation
It may be self-evident, but program evaluations are studies conducted periodically or on an ad hoc basis to assess how well a program is working. In human capital management, program evaluation is used to monitor whether HR programs are meeting their originally defined goals and objectives. This is typically achieved by identifying the impact of personnel policies and programs on people and organizations.
But why bother . . . because programs and interventions – particularly ones in the human capital realm – are carefully and objectively designed and implemented to deliver specific and exacting results, right?
Sure they are.
Systems implemented by people can follow a precisely prescribed design and process, and still fail to achieve the desired outcomes. Given that people, organizations, and the external environment are constantly adapting and evolving, program evaluation helps assess the relevance and effectiveness of HR programs to continuously changing conditions.
Conditions Can Change . . . But How Much Else Can Go Wrong?
Plenty.
Besides the moving targets of the organization and the external environment, you have to allow for the possibility of flaws in program design itself. A list of the “usual suspects” includes:
- Solutions targeted to the symptoms, not the underlying problem
- Programs developed in response to vague or ambiguous objectives
- Wrong solution or intervention implemented for a particular situation
- Assumption that applying financial interventions can solve any problem
- Solutions or interventions that are emotionally-driven rather than informed by facts and data
- Programs that are the product of bad planning or execution, inadequate training, support, resources or poor communications
- Some combination of all of the above
Nothing Is Ever Found By Not Looking
So, as we all know, things can (and do) go wrong – or at least less right – with any program or intervention, and maybe more so with programs in the human capital realm. I'm still not sure I fully understand the predisposition to seemingly forgo conducting program evaluation altogether in our own organization – and maybe a corresponding, albeit anecdotal, absence of robust program evaluation being conducted externally.
Quite possibly program evaluation is occurring, but when the results from those programs so precisely match the predicted outcomes and achieve the desired goals and objectives, open-source publication just isn't all that newsworthy.
Yeah, sure.
The Internal Challenges
The benefits of conducting regular program evaluation are readily apparent, and the barriers and challenges are equally so. A handful of the stumbling blocks to acting on – or merely conducting – program evaluation include:
- Fears related to negative (or simply neutral) program evaluation findings
- Lack of prompt (or any) decision-making when multiple program factors reflect equivocal or ambiguous results or outcomes
- Uncertainty about when, what and how much to modify programs or interventions when results or outcomes are definitively positive or negative
- Absence of sound evaluation planning and data analysis methodology, to include conducting/completing a thorough baseline
- Inadequate (or no) time, resources or partnering relationships necessary to conduct/complete program evaluation projects
- Failure to identify relevant metrics and trusted data that can be analyzed on a sustained basis
Putting Program Evaluation to Use
As an explanation for our apparent reluctance to conduct human capital program evaluation, there’s probably a lot of wisdom in the statement, “success has a thousand fathers, but failure is an orphan.” Despite the potential for uncovering flaws and failures, a friend recently reminded me that program evaluation is only useful if it is used to make decisions – and insert the final piece of the puzzle.
Everyone probably has a different perspective. What’s yours?
Chris Dobyns, CCP, CBP is currently employed as a Human Capital Strategic Consultant for the Office of Human Resource Strategy and Program Design for one of the largest U.S. intelligence agencies. The Office of Human Resource Strategy and Program Design is responsible for organizational effectiveness, personnel assessment, compensation and incentives, occupational structure, recognition and rewards, HR policy, human capital program design, implementation, evaluation and assessment, and internal consulting. Chris has worked in the area of compensation for more than 35 years, and has been employed in various compensation-related positions by a number of large, private sector companies including Sears, Roebuck; Arizona Public Service; and Westinghouse Savannah River Company.
Original image "Missing Puzzle Piece" courtesy of Chris Dobyns.
Thanks for bringing back fond memories, Chris, with this oft-overlooked topic.
First learned pre- and post-program testing procedures from the UW grand training guru Don L. Kirkpatrick https://www.kirkpatrickpartners.com/Our-Philosophy/The-Kirkpatrick-Model back in the prehistoric years when my field du jour was Training & Development rather than Talent Development. Always considered it quite odd that spending on human skill development was measured so much more carefully than the effectiveness of the much more costly total reward packages of payroll plus perks.
But as a realistic professional, I already knew that the maintenance and improvement of human KSAs was considered an easily disposable frippery vs the totally essential disbursement of vast sums of money by finance via operating management for status quo performance, regardless of its effectiveness. After years in compensation, I accepted my years in T&D/MgmtDev as a useful but precarious time. Sure enough, training got cut long before accountants were laid off.
Why ask if it's effective when the answer doesn't matter?
Maybe that's why the initial HR-related program assessment tools were for training ... because everyone knows T&D is a secondary low-priority function compared to the easily justified ever-escalating payroll budgets for basic business or essential service operational personnel.
Posted by: E. James (Jim) Brennan | 07/27/2018 at 04:47 PM