AI Work Group Recommendations

The American Counseling Association has convened a panel of counseling experts representing academia, private practice, and students to form its AI Work Group. The work group used research-based and contextual evidence; the ACA Code of Ethics; and clinical knowledge and skill to develop the following recommendations. The goal is to prioritize client well-being, preferences, and values in the advent and application of AI while informing counselors, counselor educators, and clients about the use of AI today. The recommendations also highlight the additional research needed to inform counseling practice as AI becomes a more widely available and accepted part of mental health care.

These recommendations are aligned with the ethical framework of the American Counseling Association (ACA), which advocates for the protection and well-being of clients.


Work group recommendations are based on the integration of the following factors, considered central to evidence-based practice.

  1. Research-based and contextual evidence
  2. Client preferences and values
  3. The ACA Code of Ethics
  4. Clinical knowledge and skill 

Our working definition of Artificial Intelligence entails computers simulating human intelligence. The simulation involves the completion of tasks resembling those carried out by human intelligence, including reasoning, language comprehension, problem-solving, and decision-making (Sheikh et al., 2023).


Recommendation: Making an Informed Decision about AI Use
Your counselor should explain to you the nature of the services they provide according to the 2014 ACA Code of Ethics (A.2.b.). Should you be interested in AI-assisted tools in counseling services, counselors must ensure that you understand what your selected AI tools can and cannot provide so you can make an informed decision about your use of AI to assist with your objectives in counseling (H.2.a). AI is not a direct replacement for a human counselor and has its pros and cons. It is important that you understand the function and purpose of the AI to make an informed decision.

Recommendation: Ensure that your information is kept private and secure
It is important to understand how your confidentiality will be protected when using AI. The 2014 ACA Code of Ethics states that counselors must "protect the confidential information of prospective and current clients" (B.1.c). When engaging in counseling that involves AI, it is crucial to ensure procedures are in place to keep your information private and secure. Several federal and state privacy laws and regulations, such as HIPAA and the California Consumer Privacy Act, are aimed at protecting your information. Consumers should make sure the AI they use conforms to these laws and regulations to best ensure their privacy and confidentiality. It is your right to have a clear understanding of how your confidential information is protected.

Recommendation: Understand what the AI can and cannot offer
It is important to recognize that while AI can be a valuable tool in counseling, it has its limitations. Counseling ethics emphasizes the importance of counselors being aware of the limitations of the techniques they use (C.2.a, H.4.a). When engaging in counseling that involves AI, you are encouraged to discuss with your counselor the potential limitations and challenges of using AI in your sessions. It is important for you to understand the capabilities and risks of AI and its potential impact on your counseling experience, and to establish realistic expectations about what AI can and cannot achieve in the counseling process.

Recommendation: There are risks involved with AI
It is important for you to be aware of the potential risks associated with the use of AI in counseling (H.2.a). Potential risks include, but are not limited to, the possibility of false claims or inaccurate information provided by AI tools, as well as inequity in responses, where the AI may not be able to fully understand and respond to the diverse experiences and needs of all individuals (see the recommendation on diversity, equity, and inclusion below; Celi et al., 2022). Counseling ethics emphasize the importance of counselors avoiding harm and ensuring the welfare of clients (A.1.a). When engaging in counseling that involves AI, you are encouraged to discuss with your counselor the measures in place to mitigate these risks and ensure that the AI tools used are reliable, accurate, and equitable.

Recommendation: Understand that AI should not be used for crisis response
AI should not be relied upon in crisis situations. AI may provide false claims or dangerous information that is not suitable for urgent care. The 2014 ACA Code of Ethics requires counselors to protect clients from harm (A.4.a), and this extends to the use of AI. In a crisis, it is vital to seek immediate help from qualified professionals who can provide the appropriate support and intervention. Always reach out to emergency services, crisis hotlines, or your healthcare provider rather than relying on AI in these serious circumstances.

Recommendation: AI should not be used for mental health diagnosis
AI is not recommended for mental health diagnosis at this point. AI, while a powerful tool, may not fully capture the nuanced understanding and clinical judgment required for accurate mental health diagnoses. Unlike human counselors, AI lacks the ability to holistically consider a client’s complex personal history, cultural context, and varied symptoms, among other factors (Kulkarni & Singh, 2023). Therefore, while AI can be a supportive tool (Abd-Alrazaq et al., 2022), it should not replace the professional judgment of counselors. It is recommended that AI be used as an adjunct to, rather than a replacement for, the expertise provided by professional counselors. Counselors should ensure that they are competent in the use of AI tools and understand their limitations. Any AI-assisted diagnosis should be critically evaluated and supplemented by the counselor’s professional judgment. For reliable and ethically sound mental health diagnoses, it is imperative to consult with a licensed professional who can offer comprehensive, culturally sensitive, and personalized care, in accordance with the ACA Code of Ethics (E.2.).

Recommendation: AI faces challenges regarding diversity, equity, and inclusion
AI faces significant challenges related to diversity, equity, and inclusion. AI systems often rely on data that may not adequately represent all communities, particularly marginalized groups (Celi et al., 2022). This can lead to a lack of understanding and potential biases in the services provided by AI. The 2014 ACA Code of Ethics emphasizes the profession’s commitment to inclusion and nondiscrimination (C.5). AI's limitations in this regard could inadvertently perpetuate the marginalization of minorities by not fully addressing or understanding their specific needs. Clients are advised to seek support from counseling services that actively consider and address these important factors, ensuring that all individuals receive equitable and culturally sensitive care. 

Recommendation: Ask your counselor to provide guidance on AI use
For those exploring the use of AI for mental health support, it is crucial to consult with a trained, licensed practicing counselor. AI may offer promising benefits, but its claims can sometimes be overly ambitious, oversimplified, non-evidence-based, or even incorrect and potentially harmful. A professional counselor can help you navigate these claims and integrate AI tools into your care appropriately as needed. Engaging in discussions with a counselor is crucial, especially when AI provides novel insights, to ensure these can be effectively contextualized and, where appropriate, applied. By working with a licensed professional, you can ensure that your mental health support is evidence-based, comprehensive, ethically sound, and tailored to your unique needs.

Recommendation: Accountability in AI Use for Counseling
Clients should be informed about who is responsible for decisions made with AI assistance. Clients' preferences for transparency and accountability in treatment should be respected, in keeping with their right to informed decision-making. AI systems should have clearly defined roles, and their outputs should be interpretable by professional counselors to ensure responsible use. The ACA Code of Ethics (Section C) emphasizes responsibility and accountability in providing counseling services. When using AI tools, it is crucial that the responsibility for decisions and outcomes remains with the licensed professional.

Selected Publications and References

American Counseling Association (2014). ACA Code of Ethics. Alexandria, VA: Author.

Abd-Alrazaq, A., Alhuwail, D., Schneider, J., Toro, C. T., Ahmed, A., Alzubaidi, M., ... & Househ, M. (2022). The performance of artificial intelligence-driven technologies in diagnosing mental disorders: an umbrella review. NPJ Digital Medicine, 5(1), 87.

Celi, L. A., Cellini, J., Charpignon, M. L., Dee, E. C., Dernoncourt, F., Eber, R., ... & Yao, S. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLOS Digital Health, 1(3), e0000022.

Kulkarni, P. A., & Singh, H. (2023). Artificial Intelligence in Clinical Diagnosis: Opportunities, Challenges, and Hype. JAMA.

Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial Intelligence: Definition and Background. In Mission AI: The New System Technology (pp. 15-41). Cham: Springer International Publishing.

AI Work Group Members

S. Kent Butler, PhD
University of Central Florida
Russell Fulmer, PhD
Husson University
Morgan Stohlman
Kent State University
Fallon Calandriello, PhD
Northwestern University
Marcelle Giovannetti, EdD
Messiah University, Mechanicsburg, PA
Olivia Uwamahoro Williams, PhD
College of William and Mary
Wendell Callahan, PhD
University of San Diego
Marty Jencius, PhD
Kent State University
Yusen Zhai, PhD
UAB School of Education
Lauren Epshteyn
Northwestern University
Sidney Shaw, EdD
Walden University
Chip Flater
Dania Fakhro, PhD
University of North Carolina, Charlotte