The purpose of these recommendations is to provide guidance to the counseling field about the use and implications of Artificial Intelligence (AI). The recommendations range across many areas of counseling, such as practice, advocacy, research, and ethics. The recommendations are grounded in professional values and ethics principles. The task force emphasized the importance of client welfare in creating this document, centering their efforts on the overarching question: "What should counselors do to serve and safeguard clients most effectively?" We hope that the recommendations serve a number of purposes, including (a) to raise awareness and continue the conversation about the influence of AI in counseling; (b) to inform current and future discussions around ethical use of AI in counseling; (c) to assist professional counselors and counselors-in-training regarding AI use in clinical work; and (d) to support the mission of the ACA.

 

The integration of Artificial Intelligence (AI), including machine learning and natural language generation, into counseling practice offers significant potential to increase efficiency and effectiveness while maintaining ethical standards. By following these guidelines, counselors can integrate AI in ways that maximize benefits for both clients and practitioners, all while upholding the highest standards of professional conduct. The ultimate goal is to enhance the quality of care and support provided to clients.

Methods

Task force recommendations are based on integrating the following factors, considered central to evidence-based practice.

  1. Research-based and contextual evidence
  2. Client preferences and values
  3. The ACA Code of Ethics
  4. Clinical knowledge and skill

Additionally, as the field of counseling extends to counselor education, recommendations are provided for educators in their use of AI.

AI impacts counseling and extends beyond the field. The origins, implications, and body of research on AI span many disciplines. Consequently, the task force reviewed an interdisciplinary array of sources to develop its recommendations.

Our working definition of Artificial Intelligence entails computers simulating human intelligence. The simulation involves the completion of tasks resembling those carried out by human intelligence, including reasoning, language comprehension, problem-solving, and decision-making (Sheikh et al., 2023).

Recommendations

Recommendation: Learn more about the essentials of artificial intelligence, its subfields, and its applications to mental health. 
Counselors are required to practice within their boundaries of competence (C.2.a). To prepare for work with AI, counselors should learn about AI through three levels of understanding.

  1. The essentials, including algorithms and how AI shows up in daily life, such as in social media, marketing campaigns, and smartphones.
  2. AI subfields, such as machine learning, neural networks, natural language processing, computer vision, and robotics. For example, counselors can learn about large language models (LLMs), which are built using machine learning. LLMs power "generative AI" applications such as ChatGPT.
  3. Applications. Research suggests that AI is currently applied to mental health in three main ways: (1) "personal sensing" (or "digital phenotyping"), (2) natural language processing, and (3) chatbots (D’Alfonso, 2020).

Recommendation: Stay open, informed, and educated.
Remain open to technological advances that can improve professional practice. Efficiencies that reduce the administrative burden on practitioners are not automatically unethical. Evaluate technologies critically before incorporating them into practice, and stay updated on the latest developments, ethical standards, and best practices related to AI in counseling.

Recommendation: Avoid over-reliance on AI.
While AI can enhance efficiency, it should not replace the essential human element in counseling. Maintain a balanced approach, ensuring that the therapeutic relationship remains central.

Recommendation: Recognize that AI may contain bias and be capable of discrimination.
AI is imperfect. Users may perceive that AI does not judge or discriminate, and while this is true in the sense that an AI is essentially a computer system, the output an AI generates, whether text, audio, or visuals, may nonetheless show bias. Unfortunately, AI still has a diversity and inclusion problem (Fulmer et al., 2021). The algorithms and training data that comprise an AI may lack racial, ethnic, religious, and gender inclusion. Recognize that, for all its capabilities and potential, AI may be biased and thus cause harm.

Recommendation: Career counselors and those who address employment issues should stay informed about how automation is shaping the world of work.
Counseling history is steeped in the vocational guidance movement of the early 20th century. Change has been a constant from the time of Frank Parsons to today, but AI presents special challenges for career counseling. For centuries, technology has rendered some jobs obsolete and created new ones. The future state of the job market in relation to AI is unclear, including whether it will yield a surplus, remain relatively balanced, or result in a deficit of jobs. An analysis from the Brookings Institution suggests that automation may disrupt some industries, and though mass unemployment is unlikely in the near future, worker transitions may grow more common (Bessen et al., 2020). Counselors should explore career development models that account for rapid changes in the labor force.

Recommendation: Advocate for transparency in AI algorithms.
A transparent AI algorithm is one that is open to inspection by someone other than the developer (Yudkowsky & Bostrom, 2011). Counselors could be part of inspection teams, helping to ensure that AI is built fairly and is comprehensible, and then relaying their findings back to the counseling community. Transparent AI includes three factors: Accessibility, Interpretability, and Controlled Maintenance (Fulmer et al., 2021). Accessibility means that the AI should be available and responsive to the people for whom it was intended. Interpretability concerns the output of an AI, which must be easy to understand and user-friendly, and it must be clear from the beginning that the user is interacting with an AI. Relatedly, AI should inform users of whether a human is available for support. Finally, an AI must adapt and evolve by receiving regular improvements, which should come from client, counselor, and programmer-informed feedback.

Recommendation: Maintain transparency and informed consent.
Counselors should clearly inform clients about the use of AI tools in their counseling process, explaining their purpose and potential benefits (H.2.a). Obtain explicit informed consent from clients for the use of AI-assisted tools, ensuring they understand the implications and potential impact on their treatment.

Recommendation: Leverage AI for data-driven insights.
Employ AI tools to analyze anonymized and aggregated client data, gaining insights that evaluate and inform evidence-based treatment approaches and interventions. Continuously evaluate the accuracy and appropriateness of AI-generated analytics and content, especially where AI interacts directly with clients (e.g., chatbots), and intervene when necessary to correct or modify responses.

Recommendation: Ensure data security and privacy.
AI platforms designed for counseling services and training purposes should prioritize data security and privacy from the outset, incorporating the principles of "Privacy by Design". This approach ensures that personal identifying information and personal health information are protected throughout the system's lifecycle. Furthermore, AI platforms must adhere to the standards set by applicable local privacy laws and regulations (e.g., HIPAA in the United States; H.1.b). By embedding privacy considerations into the design and operation of AI systems, providers can ensure the secure and confidential handling of sensitive data, fostering trust and compliance in their use for counseling services.

Recommendation: Counselors should empower clients to communicate about their AI use.
According to the ACA Code of Ethics (2014), empowering a diverse array of clients is a central mission of the counseling profession. Counselors should encourage clients to discuss any AI tools they use to support their mental health, as this information can help the counselor understand the client’s approach to mental health. The counselor may then be able to guide the client on how to use AI tools safely and effectively.

Recommendation: Supervisors can use AI to enhance the development of supervisees.
The supervisory relationship is key in supporting the development of new counselors (Borders & Brown, 2022). Supervision should include discussion of AI counseling tools, and supervision itself can be enhanced by their use. Counseling supervisors should be aware that AI tools exist for monitoring supervisees' work and can suggest ways for counselors to improve. For example, supervisors may use AI tools to read transcripts of supervisees’ sessions or to analyze supervisees’ use of counseling skills in session.

Recommendation: Counselors must understand the limitations of AI in diagnosis and assessment in all counseling settings.
Counselors should refrain from using AI as the sole tool for diagnosis and assessment in counseling. Although AI can be a supportive tool to inform a counselor's professional judgment, counselors must attain adequate training to understand the limitations and appropriate use of AI in clinical settings. This approach is aligned with the ACA Code of Ethics (C.2.), which emphasizes the importance of professional competence and judgment in clinical decision-making. Counselors must critically evaluate AI-assisted diagnostic suggestions and incorporate their clinical expertise, understanding of the client's history, and cultural context to ensure a comprehensive and ethically sound assessment. This recommendation supports the responsible and client-centered use of AI in counseling.

Recommendation: Consider conducting research on the intersection of AI and counseling.
A dearth of research currently exists on how AI can and may impact counseling (Fulmer, 2019). AI shows potential to influence several areas of clinical practice (e.g., diagnosis, practice management, automating documentation), counselor education, and research approaches; thus, more research is needed to discover AI’s potential in these areas. Future research should also focus on the ethics of using AI and extend our understanding of the impacts of advanced technologies, including AI, on diverse populations. Counselors and counseling researchers are encouraged to take up the charge to conduct research that transforms counseling practices and training for better client care and wellness. There are three significant benefits of counselor-led research. First, it helps ensure that the incorporation of AI into practice is substantiated by research. Second, it helps the counseling field lead the way as AI enters client care, broadly speaking. Third, it informs the public about AI’s efficacy and capabilities in providing counseling services and mental health support.

Selected Publications and References

American Counseling Association (2014). ACA Code of Ethics. Alexandria, VA: Author.

Bessen, J., Goos, M., Salomons, A., & van den Berge, W. (2020). Automation: A guide for policymakers. Economic Studies at Brookings. Brookings Institution.

Borders, L. D., & Brown, L. L. (2022). The new handbook of counseling supervision.

D’Alfonso, S. (2020). AI in mental health. Current Opinion in Psychology, 36, 112–117.

Fulmer, R., Davis, T., Costello, C., & Joerin, A. (2021). The ethics of psychological artificial intelligence: Clinical considerations. Counseling and Values, 66(2), 131–144.

Fulmer, R. (2019). Artificial intelligence and counseling: Four levels of implementation. Theory & Psychology, 29(6), 807–819. https://doi.org/10.1177/0959354319853045

Minerva, F., & Giubilini, A. (2023). Is AI the future of mental healthcare? Topoi, 42(3), 1–9.

Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial Intelligence: Definition and Background. In Mission AI: The New System Technology (pp. 15-41). Cham: Springer International Publishing.

Yudkowsky, E., & Bostrom, N. (2011). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.

AI Work Group Members

S. Kent Butler, PhD
University of Central Florida
Chip Flater
American Counseling Association
Morgan Stohlman
Kent State University
Fallon Calandriello, PhD
Northwestern University
Russell Fulmer, PhD
Husson University
Olivia Uwamahoro Williams, PhD
College of William and Mary
Wendell Callahan, PhD
University of San Diego
Marcelle Giovannetti, EdD
Messiah University, Mechanicsburg, PA
Yusen Zhai, PhD
UAB School of Education
Lauren Epshteyn
Northwestern University
Marty Jencius, PhD
Kent State University
 
Dania Fakhro, PhD
University of North Carolina, Charlotte
Sidney Shaw, EdD
Walden University