Photo by Andres Siimon on Unsplash
AI coaches may soon replace thousands of human coaches. But will they? This is what the research says.
As in many other fields, there is a heated debate about the impact of AI on coaching. Some people believe that the arrival of AI is the best thing since sliced bread, while others think it is a terrible curse that threatens our very humanity and will make us completely irrelevant.
Most people are somewhere in the middle, but who is right? Do we actually know?
This debate is playing out on social media and in the public domain, driven more by myths, fear, and hype than by solid empirical research. Yet, unsurprisingly, similar disputes are also unfolding within the academic world, with researchers and scholars found on both sides of the main debates about AI coaching.
My goal in this post is to examine the debates surrounding the use of AI in coaching from an academic perspective, based on a review of evidence-based empirical research and peer-reviewed papers, in an effort to shed some light on this field often obscured by economic interests, hyperbole, overexcitement, and doom.
AI coaching
The use of AI in coaching is an emerging trend that has gained prominence in recent years, particularly following the introduction of ChatGPT and other Large Language Models (LLMs) from 2022 onwards. Since then, the academic literature on the subject has expanded; however, it remains relatively nascent, and the body of empirical research is limited (Bachkirova & Kemp, 2024; Passmore, Olafsson & Tee, 2025).
AI has various applications in coaching. It can enhance the work of human coaches by providing feedback synchronously or asynchronously, assisting with data analysis, or facilitating personal development (Passmore & Tee, 2023; Terblanche, 2024a). It could also replace the human coach in what is referred to as AI coaching, which involves an AI chatbot or a coachbot (Passmore & Tee, 2023; Terblanche, 2024a; Passmore, Olafsson & Tee, 2025).
The main themes in the academic literature encompass the benefits and limitations of AI in coaching, its impact and effectiveness, and the ongoing debate about whether AI can effectively replace human coaches.
Let’s look into all of these in turn below.
Benefits of the use of AI in coaching
Scalability and cost
Scalability and cost are the most frequently mentioned benefits of AI coaching in the literature (Terblanche & Tau, 2024; Passmore, Olafsson & Tee, 2025).
Human coaching can be expensive, and there is a limit to the number of sessions a well-trained coach can conduct in a day. An AI coach can simultaneously coach millions of users and is relatively inexpensive, provided that the high energy requirements of LLMs are overlooked (Solovyeva, Weidmann and Castor, 2025). The democratisation of coaching, which would lower costs and increase scalability, is often touted as an argument for using AI in coaching (Terblanche & Kidd, 2022; Terblanche & Tau, 2024).
Nevertheless, Bachkirova (2024: p. 14) warns that AI coaches will be owned 'by the few', and that clients with resources will continue to have access to top human coaches while the rest are left 'with a poor-quality substitute' provided by AI.
Having said all this, all the discussion about scalability and low cost is pointless if one condition is not met: AI coaching must be effective. As will be seen in another section below, the empirical evidence is still insufficient to assert that this is the case today.
Training the AI coach
Another benefit of AI is that it can analyse thousands of coaching sessions and learn from them (Passmore & Tee, 2023). It can possess significantly more ‘experience’ than the most seasoned master coach (Graßmann & Schermuly, 2021: p. 119). It can analyse troves of data and make statistical and probabilistic inferences about the best approach, but can we equate that with human experience?
Human experience and perception are embodied, encompassing both rational and emotional elements. When a coach decides which intervention to pursue next, their choice is based on the data analysis of all their prior experiences, but they also rely on their intuition, empathy, creativity, and emotional intelligence.
AI can analyse more data points than human beings, but I don’t think its analysis is qualitatively superior to that of a human. That’s my opinion at the time of writing, in September 2025. Things can change very quickly in this field, so who knows what will happen in the future, but right now I believe human beings still maintain an edge.
AI forces us to think about ourselves and our place in the world
A final and often overlooked benefit of AI in general, and AI coaching in particular, is that it encourages us to reflect carefully on who we are and who we aspire to become as human beings, and to question what coaching truly is and should be (Bachkirova, 2024; Bachkirova & Kemp, 2024). So, AI coaching can be terrible in many aspects, but at least it forces us to look in the mirror and ask hard questions about ourselves.
Your coach in your pocket / Photo by Solen Feyissa on Unsplash
Limitations and risks of AI coaching
The main potential limitation of AI coaching is that it may not truly constitute coaching (Bachkirova, 2024; Bachkirova & Kemp, 2024). This assertion fuels the primary debate in the field (i.e., can AI coaches replace coaches if what they do is not really coaching?), so I address it in another section below.
‘A’ stands for Artificial, not human
Other limitations of AI coaching stem from its non-human nature, leading to the absence of certain qualities that enable a human coach to excel, such as emotional intelligence, empathy, creativity, adaptability, and playfulness (Passmore and Tee, 2023; Passmore, Olafsson and Tee, 2025).
In some studies, participants noticed the robotic communication style of the AI, which negatively impacted their experience (Passmore & Tee, 2024; Terblanche & Tau, 2024). Nevertheless, AI will continue to evolve and will increasingly be able to simulate human attributes convincingly. Does it truly matter if an AI coach feels anything or merely pretends to, if this helps the client achieve their goals?
Some risks
AI coaching entails some risks that have not yet been adequately addressed. Ethical risks exist that coaching professional bodies, such as the ICF, are striving to confront (ICF, 2025). AI systems have been found to exhibit bias and lack sufficient inclusivity (Diller, 2024) or to hallucinate, confidently fabricating information (Terblanche, 2024b).
Additionally, coaching conversations are sensitive, raising natural concerns about data privacy, security, and confidentiality (Passmore & Tee, 2023; Terblanche, 2024a). Nobody knows where all the coaching data goes, how it is stored, or who has access to it. Terblanche and Kidd (2022) argue that other sectors have effectively managed sensitive data for longer, and the study participants did not seem to regard this issue as significant. Nonetheless, all of these are serious concerns, and they should not be brushed aside without a thorough debate first.
Impact and effectiveness
Several studies have attempted to measure the effectiveness of AI in coaching; however, the existing body of research in the field remains limited.
Measuring outcomes of AI in coaching
Passmore, Olafsson, and Tee (2025) found only four randomised controlled trials (RCTs) or quasi-experimental designs in their systematic review, which showed that AI chatbots designed narrowly to improve specific outcomes are effective in achieving these outcomes. For example, in one RCT, researchers measured the efficacy of a chatbot coach and found a statistically significant improvement in goal attainment, but not in other measures such as psychological well-being, resilience, and stress (Terblanche et al., 2022).
In a recent study, Passmore, Tee and Rutschmann (2025) had ICF assessors evaluate coaching sessions delivered by an AI coaching agent to human volunteers using the ICF competency framework. The AI coach demonstrated many elements of the ICF ACC level coaching competencies, and some of the PCC level. The study revealed that AI coaches can be highly competent in certain aspects of the coaching process, but require further development in other areas. It should be noted that the coaching sessions were conducted exclusively in writing, so many of the elements that make a coach proficient or masterful, such as the use of silence, body language, and emotions, could not be assessed.
Bachkirova and Kemp (2024) challenge the excessively narrow benchmark used to measure the success of AI coaching. After all, is coaching solely about helping clients achieve their goals, or is there more to it? AI coaching seems to focus primarily on goal attainment, utilising cognitive behavioural coaching (CBC) with simple frameworks, such as GROW and PRACTICE, while overlooking other methods coaches can use to support their clients (Bachkirova, 2024).
Consequently, the risk associated with pursuing AI coaching is that coaching becomes a mechanistic endeavour focused on achieving specific and easily measurable goals, while marginalising the client’s ‘values, meaning, desires and full-scale subjectivity’ (Bachkirova, 2024: p. 11). In another paper, Bachkirova (2025) writes about the ‘enshittification’ of coaching, or diminution of the quality of all coaching services, thanks to AI.
The common factors
When evaluating the effectiveness of coaching, most studies focus on the outcomes of the coaching process. Another approach is to examine the so-called 'common factors', that is, the factors that affect the efficacy of coaching. Here, the literature on AI coaching is even more scant.
The working alliance, or the working relationship between coach and coachee, is a well-researched concept in coaching and has often been found to be one of the key predictors of success in the coaching process. Personally, I find it difficult to believe that we can build a strong relationship with a piece of software; however, some studies have established that AI coaches can form a working alliance with their clients (Graßmann & Schermuly, 2021; Passmore, Olafsson, & Tee, 2025).
To sum up, substantial empirical research in the field is lacking. More studies are needed, particularly RCTs and quasi-experimental designs that examine outcomes beyond goal attainment, before it can be said with any conviction that AI coaching is effective (Passmore, Olafsson & Tee, 2025).
The Replacement Debate
Most authors agree that AI is here to stay and can assist human coaches (Graßmann & Schermuly, 2021; Bachkirova & Kemp, 2024; Diller, 2024; Passmore, Olafsson & Tee, 2025). There is little dispute about using AI to augment or enhance human coaching; the heated debate is over whether AI can replace human coaches altogether.
Is AI coaching really coaching?
Bachkirova, with Kemp's support, represents one side of the debate. Her argument is that AI coaching is not coaching, and labelling it as such does a disservice to the coaching profession (Bachkirova, 2024). Bachkirova and Kemp (2024) examine it from the perspective of the elements necessary for organisational coaching to be effective: joint inquiry, making sense of experience, being value-based, highly contextual, based on trust, and contracting-based. They conclude that AI coaching lacks understanding, subjective experience, and self-awareness, and therefore fails to meet most of these criteria.
Bachkirova (2024) argues that AI cannot emulate human intelligence, which is embodied, intuitive, and emotional, and therefore cannot help a client feel understood or find meaning, purpose, and value. This is not a question of not being technologically ready yet, but of principle: AI coaching will never be able to successfully replicate the role of human coaches because it lacks some inherently human features that are essential for being an effective coach.
Bachkirova and Kemp (2024: p. 29) have a point when they call AI a ‘stochastic parrot’. AI reproduces a string of words based on statistical and probabilistic inferences, so it does not possess a genuine understanding or awareness, and can only simulate thinking, empathy, or subjectivity.
But does it matter?
Do clients care if their coach has feelings or subjective experience?
Human beings tend to anthropomorphise everything around them, and AI is no exception. AI lacks subjective experience and empathy towards its clients, but as long as clients believe it has them, what actually happens inside the machine may not matter.
What matters in coaching is arguably whether the work is helpful for the client. The heavy lifting in the coaching partnership should be done by the clients (de Haan, 2024), so a convincing AI coach can assist them in doing so.
The promoters of AI coaching argue that AI will only get better (Passmore & Woodward, 2023). As long as it works for clients, who cares if the coachbot is aware of itself or feels emotions? Terblanche (2024a) argues that AI coaching is inevitable, and human coaches should embrace it and adapt to it, or risk becoming irrelevant.
The issue here is that AI coaching shows promise, but there is insufficient evidence (yet) to assert that it is effective (Passmore, Olafsson & Tee, 2025).
Even if we were to accept the limited evidence and acknowledge its effectiveness, AI coaches have been trained in only a few coaching approaches, such as CBC, motivational interviewing, or solution-focused coaching (Terblanche, 2024a: p. 10).
Coaching is broader than that, and there are many different approaches, suitable for various types of problems, coach personalities, and client types (Myers & Bachkirova, 2018; de Haan, 2024).
AI will advance quickly, but it is still far from being able to emulate some coaching approaches: those requiring a connection with the body, such as somatic or Gestalt coaching; those based on a deep human connection between coach and client, such as person-centred or existential coaching; or those with a psychodynamic orientation, where the coach must be aware of, and able to work with, the transference and countertransference happening during the session.
Where will the master coaches come from?
In conclusion, AI will and should be used to support coaches, and it will replace some coaches, but probably not all. We may end up all having our own AI coach in our pocket, which could create an appetite for higher-quality human coaching. Good coaches will not disappear, especially those using approaches that are not easily replicable by AI, but how will they get enough practice to reach that level?
In this scenario, beginner coaches will struggle to obtain the necessary client hours to master the necessary skills, as all basic coaching services will be provided at low cost by AI coaches. They will likely have to practice with AI clients, which makes it all rather ironic: AI coaches will coach human clients while AI clients train human coaches.
How does all this affect my practice?
Based on what the literature tells us, I see two main ways in which the use of AI can influence my coaching practice: one presents an opportunity, the other poses a threat.
The opportunity is that I can utilise AI to enhance my coaching practice. However, this may be for nothing if the threat materialises and AI ends up replacing me as a coach. The opportunity requires me to leverage AI; the threat to make my coaching as AI-proof as possible.
Let’s explore these in turn.
The opportunity: how can AI help me improve my coaching practice?
The phrase ‘AI will not take your job, but someone using AI will’ has gained popularity lately, and it may also apply to coaching. Among other uses, AI can analyse data from recorded coaching sessions to gain insights and develop coaching skills (Bridgeman and Giraldez-Hayes, 2024), assist coaches by proposing questions during sessions (Movsumova et al., 2020), or send nudges to clients between sessions (Passmore & Tee, 2023).
I have thought extensively about how to use AI to become a better coach. I am considering software like Ovida, which uses AI to provide real-time feedback during coaching sessions and to record and analyse those sessions against a set of metrics. Similar tools exist on the market.
I still have concerns about data usage and how my clients would feel about their sessions being monitored by an AI, so I am still sitting on the fence on this one.
The threat: will an AI coach replace me?
The threat poses an existential risk. Many of Bachkirova’s (2024, 2025) arguments resonate with me, and part of me wishes AI had never emerged to disrupt our world.
I fear the damage our outsourcing of thinking to AI might inflict on our critical thinking and how we might end up promoting even more mechanistic and excessively rational thinking over more intuitive, emotional, and organic ways of being.
And yet, AI is here to stay, and a significant portion of the population seems to be captivated by it and its possibilities. As AI becomes more adept at being a coach, my thoughts have increasingly turned to what I can do to AI-proof my coaching practice and make myself less vulnerable to being replaced by an AI coach.
Artificial Wisdom anyone?
Bachkirova (2024) believes that human intelligence has been at the centre of coaching’s success. AI is not truly intelligent, but it can emulate and simulate human intelligence with increasing efficacy (Wong, 2025). Nevertheless, it will never be able to emulate or simulate wisdom, as it lacks any consciousness or self that can reflect on itself and gain wisdom through personal experience (Ardelt, 2025).
I believe wisdom will be one of the last human traits to be conquered by machines. I also believe that coaching can be a powerful method to help clients develop wisdom. The body of research on the effects of coaching on wisdom is limited; however, it has been found to impact several dimensions that researchers associate with wisdom, such as emotion regulation (Howard, 2015), open-mindedness (Finn, Mason & Bradley, 2007), and self-reflection (Fontes & Dello Russo, 2021).
This was the subject of my master’s dissertation, for which I conducted a research study with exciting results (more on this soon, watch this space). I plan to continue researching the relationship between coaching, wisdom, and leadership in further studies (more on this also coming soon). I hope that becoming an expert coach who can help clients become wise leaders will help me differentiate myself from more generalist leadership coaches and AI coaches alike.
Another way to AI-proof my practice is by focusing on coaching approaches and methods. As discussed above, some approaches, such as CBC or solution-focused approaches, appear more suitable for automation than others (Passmore, Olafsson & Tee, 2025). I was trained as an ontological coach, so I focus on my clients’ way of being through the use of language, emotions, and the body. I have also trained in Gestalt coaching and I am interested in the use of mindfulness coaching, existential coaching, and somatic coaching. I am integrating all these approaches into my practice in a way that is difficult for AI to emulate.
The verdict
The use of AI in coaching is a significant trend, but it is still emerging, and there is not enough empirical evidence to affirm that AI coaching is effective or capable of replacing human coaches.
Most authors agree that AI can be integrated to enhance human coaching, but the ethical and data privacy risks it poses must be debated and considered carefully.
The use of AI in coaching presents enormous opportunities, but also poses the risk of replacement. Therefore, all coaches should closely monitor this emerging trend and observe its evolution. I am certainly doing so.