A good question, and one linked to many others: Is coaching effective? What is it that makes my coach qualified and capable of delivering the services I need? And are the services offered by coaches of an appropriate quality? These questions tap into some important themes around what coachees actually get from a coaching service. Perhaps more importantly, they relate to the ongoing debates around who should be ‘allowed’ to provide these services.
These questions are not new; they have been cropping up in conferences, academic articles, social media posts, and daily discussions between individuals in the field for over two decades. More recently, the issue of whether coaching supervision should be mandatory, and whether it should be based on clinical models of supervision, has stirred up additional controversy, as highlighted by Garvey (2014) in the special feature on the future of coaching and mentoring published in the Winter edition of the online publication eOrganisations and People.
If we take the first question, whether coaching is effective, the answer is almost always an emphatic “Yes!” from clients and coaches alike (Kampa-Kokesch & Anderson, 2001). When you think about it, however, this gives rise to some additional, distinct but related questions which are not answered quite so simply.
Question two: “How do we know when coaching is effective?”
Question three: “Should coaches be credentialed?”
Questions one and two are more closely related to question three than they may appear at first glance. Question three has salience in the current marketplace primarily because we have few clear answers to questions one and two. There are theories and some initial findings (Feldman & Lankau, 2005), but by and large coaching is a fairly new field, or ‘area of practice’, with a small but growing empirical research base behind it. Increasingly, coaching research is showing close alignment with established research in fields such as education and adult learning, as well as, perhaps more importantly, psychology, psychotherapy and counselling, which are forms of professional practice regulated by law (McKenna & Davis, 2009; Gregory et al., 2011). Because there are no clear and specific answers to questions of effectiveness, the issue of credentialing, and of whether coaches are practising some other profession without a license, becomes much more salient. The question of whether coaching is actually another, more traditional service dressed up with a different name continues to take up a great deal of airtime at coaching conferences. However, if you set aside the issue of whether someone is professionally qualified, or what tools or approaches they use, and focus instead on outcomes, does this offer a better method for learning and development professionals to address the practicalities of coach selection and performance management?
Credentialing is the well-trodden path of a profession. It allows coaches, and more broadly the coaching industry, to point to a (relatively) standard metric when selling their services to potential clients. Clients can easily compare a “master” with an “associate” (or whatever the label may be) coach, and with a fair degree of confidence assume the “master” coach to be more effective than the “associate.” But what if there were a standardized way (or set of ways) to assess effectiveness, beyond possession of knowledge, skills, and credentials?
At the University of Central Florida (UCF), ongoing research has been exploring what drives coaching effectiveness. Unsurprisingly, this research has highlighted substantial overlap between coaching and other developmental programs or tools (e.g., consulting, mentoring, training, therapy). To be sure, coach knowledge, skills, and abilities matter, but, just as in these other interventions, other factors play a big role too. Coach and coachee personality, the relationship between coach and coachee, the coachee’s organizational environment and even the nature of coaching objectives all exert an influence on coaching outcomes (Coultas, Sonesh, Benishek, & Salas, 2014; Sonesh, Coultas & Salas, 2014).
The most crucial factor for clients, whether individual or organizational, public sector or private, is whether or not the coaching services they have selected are effective. Measures of effectiveness are a key part of the Return on Investment (ROI) conundrum, which is complicated by the idiosyncratic nature of coaching and mentoring practice, and the myriad forces acting to influence any given coaching engagement. With robust data collection and a standardized set of measures, processes can be developed that afford clients greater insight into their coaches’ effectiveness, while still accounting for the forces outside the coaches’ control. In difficult coaching engagements, coaches could leverage measures of effectiveness to better adapt to the needs of the coachee, or to provide clarity if, or when, coaching or mentoring fails to meet expectations. This of course does not preclude qualitative measures, which are also key to solving the puzzle.
In response to these issues and related questions, the UCF coaching research team has developed an initial methodology which has the potential to add value to the sparse but growing body of coaching effectiveness research. This methodology (the CoachINSIGHT Toolkit, or CIT), which focuses on individual coaching outcomes and the coachee’s relationship with the coach/mentor, joins a select few research-based approaches that show promise for being useful to organisations facing the need to evaluate and support successful coaching interventions. The upshot of research being a driving force behind coaching evaluation is that validity and reliability are of primary importance. Validity and reliability, as well as attractive packaging, add confidence that such toolkits might be meaningful ways to assess coaches’ performance. The research conducted using the CIT has excellent potential to help address some of the key questions around coaching effectiveness. The kind of feedback that the CIT offers can be found here, and you can learn more about the research and how to get involved here.
So, does evaluation offer a potential alternative to credentialing as the main factor in coach/mentor selection and development? In a global and increasingly complex business environment where useful measures are valued by learning and development professionals, it has to be said that this is possible. There is certainly a trend for learning and development professionals to be more discerning in their selection of assessment tools. There is also a parallel trend in the level of interest in robust contracting and embedded measures that deliver indicators of the success and impact of coaching programmes. Taken together, the next step in establishing evaluation as a viable alternative to supervision or credentialing may be to establish a kind of ‘minimum’ standards approach (similar to that offered by all forms of credential-based methods) for determining what would constitute valid and reliable evaluation. Rather than remaining static, as can be the case for credentialing standards, evaluation that is embedded in the work process drives continuous improvement and systems of measurement that are dynamic and responsive. This kind of evaluation would help to assure clients of the quality of learning and development service delivery (not just for coaching/mentoring), and may set the stage for a technology- and work-based, research-driven ‘revolution’. This approach would have important implications for the development of a profession of coaching, versus the professionalisation of coaching practice.
REFERENCES
Coultas, C. W., Sonesh, S. C., Benishek, L. E., & Salas, E. (2014). Executive coaching research: Toward a context-general model. Poster presented at the 2014 annual meeting of the Society for Industrial and Organizational Psychology, Honolulu, HI.
Feldman, D. C., & Lankau, M. J. (2005). Executive coaching: A review and agenda for future research. Journal of Management, 31(6), 829-848.
Garvey, R. (2014). Neofeudalism and surveillance in coaching supervision and mentoring. eOrganisations and People, 21(4), 41-47. Retrieved from http://www.amed.org.uk
Gregory, J. B., Beck, J. W., & Carr, A. E. (2011). Goals, feedback, and self-regulation: Control theory as a natural framework for executive coaching. Consulting Psychology Journal: Practice and Research, 63(1), 26.
Kampa-Kokesch, S., & Anderson, M. Z. (2001). Executive coaching: A comprehensive review of the literature. Consulting Psychology Journal: Practice and Research, 53(4), 205.
McKenna, D. D., & Davis, S. L. (2009). Hidden in plain sight: The active ingredients of executive coaching. Industrial and Organizational Psychology, 2(3), 244-260.
Sonesh, S. C., Coultas, C. W., & Salas, E. (2014). How does coaching work? A mixed method analysis. Poster presented at the 2014 annual meeting of the Society for Industrial and Organizational Psychology, Honolulu, HI.
What does the CIT measure and how do I get involved?
The research questionnaire and process have been framed in an accessible format as the CoachINSIGHT Toolkit (CIT), which makes engagement with the research as easy as using many commercialized ‘off the shelf’ assessment/development products. It takes just 15 minutes each for coach and coachee to complete.
Chris Coultas and the team at UCF are asking coaches and coachees to independently offer their perceptions of factors such as coaching style, the coach-coachee relationship, and overall goal attainment. By collecting this data, they are able to provide a standard metric against which coaches can gauge their effectiveness. Whilst the toolkit is in the initial phases of development, the CIT offers great promise with regard to building on existing coaching effectiveness research. And coaches willing to embrace evaluation may be able to add even further value to the clients and coachees they serve by using the personal results provided once five or more of their coachees are included in the assessment. If you are interested in learning more about the CIT or would like to help Chris and his team to conduct this exciting piece of research, go to the sign-up scheduler.
What if I participate in the research and my clients want me to continue using the CIT as an assessment tool?
This is often a concern for coaches engaging in research of this nature. A key worry that people express about participating in something presented as ‘free research’ is that many activities of this nature are merely a marketing ploy thinly disguised as research. We are now so used to survey-based ploys embedded in a sales process that it is hard to tell the difference between what is, and is not, ‘real’ research. There are also sometimes concerns that a questionnaire or tool used as part of a research process may cease to be available when the research is completed.
To address these concerns, please be assured that this is very much a real research project, conducted at the University of Central Florida under a time-limited research grant. The research grant ends in February 2015. However, if you do wish to continue using the tool with your clients, a small and reasonable fee will be levied to cover the costs of supporting any additional future reports you might request. There are no short-term plans for the toolkit to be commercialized.
Citation
Willis, P., & Coultas, C. (2015). Is evaluation an alternative to coach/mentor credentialing? [Article]. Retrieved from http://new.coachingnetwork.org.uk/article/is-evaluation-an-alternative-to-coachmentor-credentialing/