Recommendation: Call for more research into AI.
Research into AI is burgeoning, and while we can make inferences and share perspectives, from an evidence-based standpoint there is much we do not yet know about the interface of counseling and AI. Therefore, we call for more research from the counseling community. We encourage those interested in AI to continue reading and learning about its developments.

Recommendation: Interdisciplinary collaboration on the development of AI for counseling.
Research on AI for counseling requires interdisciplinary effort. We encourage the formation of research teams comprising practicing counselors, counseling researchers, AI developers (e.g., computer scientists), and representatives from diverse client populations. Such collaboration fosters a holistic approach to AI development, ensuring that it meets clinical needs while being ethically sound and culturally sensitive.

Recommendation: Keep a keen eye on bias and potential discrimination in AI.
AI shows great promise, and with that promise comes the potential for peril. In counseling, biased AI output can cause serious harm. The training data and algorithms used to create an AI system are the typical culprits, but with machine learning, users can also “train” the AI to be harmful; Microsoft’s infamous “Tay” chatbot is a well-known example. We should note that most therapeutic chatbots are designed with safeguards to prevent the bot from going rogue, so to speak, and some have research backing their efficacy as mental health support agents. We neither endorse nor recommend against chatbots at this stage, but we encourage more research and promote efforts to eliminate bias and discrimination in AI.
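
For readers who want a concrete sense of what checking for biased output can look like, the brief Python sketch below illustrates one simplified fairness check: comparing how often a hypothetical screening model “flags” clients across two demographic groups (a demographic parity gap). The function, the data, and the groups are our own illustrative assumptions, not part of any cited system, and this is not a substitute for a formal bias audit.

    # A minimal, illustrative sketch of one way biased output can be
    # quantified: comparing positive-prediction rates across groups.
    # All data here are hypothetical.
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Largest gap in positive-prediction rate between groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical 0/1 "flagged for risk" outputs for two client groups.
    preds = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)               # {'A': 0.6, 'B': 0.2}
    print(f"gap = {gap:.2f}")  # 0.40; a large gap warrants scrutiny

A real audit would involve validated fairness metrics, far larger samples, and domain expertise; the point is simply that bias in AI output can be made measurable rather than left to impression.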

Recommendation: The benefits and risks identified for AI may shift with additional experience and research; given the ever-changing nature of AI, statements and guidelines will require ongoing updates.
Few areas change and advance as quickly as AI. The ACA helps ensure that counselors engage in practice based on rigorous research methodologies. Therefore, as AI advances, the ACA should keep an open mind, work to ensure that safety measures are in place, and revise its recommendations as the evidence evolves.

Recommendation: Remember the value of human relationships.
AI, and technology more generally, can change the nature of relationships. We know that social media use can affect the quality of human relationships, and AI may change relationships further. Avatars, chatbots, and, potentially, humanoid robots may strain or otherwise challenge human-to-human relationships. We encourage counselors to remember the value of human relationships.

Recommendation: The ACA should consider including the topic of AI in the next revision of the ACA Code of Ethics.
The 2014 ACA Code of Ethics does not mention artificial intelligence. The next revision of the Code should address the role of AI in counseling and supervision.

Recommendation: Consider adding ‘explicability’ to the ethics code as a principle in relation to AI work.
Explicability is a principle intended to make AI less opaque (Ursin et al., 2023). Put another way, it holds that AI creators and users (counselors, in this context) should be able to expect both intelligibility (How does it work?) and accountability (Who is responsible for the way it works?) (Floridi & Cowls, 2022).

Recommendation: Continuously evaluate and reflect.
Counselors should regularly assess the impact of AI on their practice, seeking feedback from clients and colleagues. They should adjust their approach as needed to ensure the highest quality of care.

Recommendation: Monitor the role of AI in diagnosis and assessment.
A growing body of literature suggests that AI has the potential to assist in diagnosis and assessment (Abd-Alrazaq et al., 2022; Graham et al., 2019). AI may help predict, classify, or subgroup mental health conditions using diverse data sources such as electronic health records (EHRs), brain imaging data, monitoring systems (e.g., smartphones, video), and social media platforms (Graham et al., 2019). Even so, we cannot rely solely on AI for diagnosis. The current state of the technology does not fully encompass certain real-world aspects of clinical diagnosis in mental health care: gathering a client’s history, understanding their personal experiences, accounting for the variable reliability of EHR data, navigating the inherent uncertainties in diagnosing mental health conditions, and honoring the vital role of empathy and direct communication in therapy. As we look to enhance diagnostic processes in counseling with AI, these critical factors must be integrated to ensure comprehensive, empathetic, and accurate care. Nevertheless, with continued advancements, AI might not only assist with diagnosis and assessment but, in some respects, one day surpass human-level abilities. The ACA should keep a keen eye on the diagnostic abilities of AI.
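
To make the idea of classifying conditions from such data sources concrete, here is a minimal, purely illustrative Python sketch using synthetic stand-ins for EHR-style features and the widely used scikit-learn library. The features, labels, and model choice are our own assumptions and are not drawn from the cited studies.

    # An illustrative sketch only: a simple classifier on synthetic,
    # EHR-style features. Real systems face the limitations discussed
    # above and would never replace clinical judgment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic features: [visits last year, symptom score, sleep hours]
    X = rng.normal(loc=[4.0, 8.0, 7.0], scale=[2.0, 4.0, 1.5],
                   size=(200, 3))
    # Synthetic "diagnosis" label loosely tied to the symptom score.
    y = (X[:, 1] + rng.normal(scale=3.0, size=200) > 10).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", round(model.score(X, y), 2))
    # Even a high score here says nothing about the contextual,
    # relational aspects of diagnosis that AI does not capture.

The sketch shows why such tools are best framed as decision support: the model sees only the features it is given, while the clinician holds the history, context, and relationship that the paragraph above identifies as essential.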

Recommendation: Developing client-centered AI tools.
When developing AI tools for client use, we encourage research and development teams to involve clients and counselors in the design process, ensuring that the AI tools are client-centered, address real-world needs, and respect client preferences and values.

Recommendation: Integrating ethical AI training into counselor professional development.
Develop a comprehensive and continuous AI training program for counselors and trainees, emphasizing the proper and ethical use of AI in line with the continuing-education requirements of the ACA Code of Ethics (C.2.f). This training should include a detailed understanding of AI technologies, their applications across counseling services, and crucial ethical considerations such as privacy and confidentiality. Additionally, the program should be part of counselors’ ongoing professional development, incorporating regular updates and refreshers to keep pace with the rapidly evolving field of AI. This integrated approach ensures that counselors are both technically proficient and ethically informed in their use of AI tools.

Selected Publications and References

Abd-Alrazaq, A., Alhuwail, D., Schneider, J., Toro, C. T., Ahmed, A., Alzubaidi, M., ... & Househ, M. (2022). The performance of artificial intelligence-driven technologies in diagnosing mental disorders: An umbrella review. npj Digital Medicine, 5(1), 87.

American Counseling Association (2014). ACA Code of Ethics. Alexandria, VA: Author.

Floridi, L., & Cowls, J. (2022). A unified framework of five principles for AI in society. In Machine learning and the city: Applications in architecture and urban design (pp. 535-545).

Graham, S., Depp, C., Lee, E. E., Nebeker, C., Tu, X., Kim, H. C., & Jeste, D. V. (2019). Artificial intelligence for mental health and mental illnesses: An overview. Current Psychiatry Reports, 21, 1-18.

Ursin, F., Lindner, F., Ropinski, T., Salloch, S., & Timmermann, C. (2023). Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Ethik in der Medizin, 35(2), 173-199.

AI Work Group Members

S. Kent Butler, PhD
University of Central Florida
Chip Flater
American Counseling Association
Morgan Stohlman
Kent State University
Fallon Calandriello, PhD
Northwestern University
Russell Fulmer, PhD
Husson University
Olivia Uwamahoro Williams, PhD
College of William and Mary
Wendell Callahan, PhD
University of San Diego
Marcelle Giovannetti, EdD
Messiah University, Mechanicsburg, PA
Yusen Zhai, PhD
UAB School of Education
Lauren Epshteyn
Northwestern University
Marty Jencius, PhD
Kent State University
Dania Fakhro, PhD
University of North Carolina, Charlotte
Sidney Shaw, EdD
Walden University