Mental Health Matters

a resource of Shalem Mental Health Network

AI and Mental Health: A tool, not a substitute

Feb 11, 2026 | Mental Health Matters

[7 minute read]

TL;DR: Increasingly, people are turning to AI for mental health information and support. While AI can be a useful resource, it cannot provide the connection and empathy of a loved one, or the skill, insight, responsibility and ethics of a human mental health professional. Understanding AI as a map can set realistic expectations about both the benefits and limitations of this technology, particularly in the area of mental health care. Recommendations for effective use and resources for further reading are included.

While Shalem is by no means a centre of Artificial Intelligence (AI) research, we do pay close attention to emerging conversations in our field. And we’re listening.

You may have noticed – AI is becoming part of everyday life for many of us. It has shifted from the science fiction films of my weekends to the editing programs on my desktop. And things are moving quickly. It’s become a new normal, and this normalcy has seeped into the field of mental health.

A 2025 poll by Mental Health Research Canada indicated that 17-22% of Canadians are currently using AI for emotional support. That figure increases to 30% for people who identify as part of a racialized group and for young adults. These figures raise so many questions – why are so many reaching for AI rather than their support systems or professionals? Obstacles to care may include cost, mistrust of professionals, stigma, waiting lists, or even a preference for anonymity. Answers are not easy to come by, as AI is so new in its current form that we’re still catching up to make sense of its impact.

If you’ve had a conversation with a chatbot, you may have noticed that you can ask it anything. Because of the vast amount of information online about mental health, AI tools can answer questions, offer reflections, or even help organize thoughts. It’s natural to wonder whether these tools might also be helpful when it comes to dealing with our own mental health problems.

We have reviewed the research about AI and mental health in order to understand the presence of this new “player” in the mental health landscape, with the goal of helping you reflect on your relationship with AI. These thoughts are intended for a general audience, and we strongly encourage you not to take our word for this, but to do some of your own research on the topic. This learning curve is worth some reading.

You’ll notice a bias in this reflection. A fundamental position at the core of Shalem, one that shapes our mental health conversations, is that healing and recovery happen best through relationships. We believe technology may assist us, but it cannot replace human care, connection, insight and responsibility.

Recovery is Relational
Mental health is not formed in a vacuum. In The Developing Mind, Dan Siegel makes a compelling case that healthy brain development is closely tied to the quality of relationships in a child’s life. From the beginning, we are created for connection. We are born into the care of parents and caregivers, and shaped by siblings, extended family, neighbours, and community. We learn who we are in the faces of those who love us. They show us that we are of value and worth protecting, and they help regulate our distress—their emotional steadiness lending stability to our own. Their empathy shapes our resilience and our ability to form relationships.

An agitated child can return to their homework more easily when a parent simply sits beside them. A client may be able to try a relaxation exercise for the first time when a therapist models it calmly with them. Seeing our own pain reflected with empathy on a loved one’s face—these moments remind us that we are designed to heal together. These forms of healing depend not only on information, but on presence, something technology can gesture toward, but cannot fully provide.

When our brain is not working well, it needs more than medication, cognitive tools or prayer. Our belief at Shalem is that mental health recovery thrives in the context of relationship. It unfolds in the presence of others—within families, trusted friendships, faith communities, and with those who bear witness to our stories with care. At times of struggle, we often seek out people trained to help: therapists, social workers, doctors, psychologists, and other caregivers who bring skill, responsibility, and ethical commitment to their work.

The vast research on attachment and child development speaks to how eye contact, tone of voice, timing, and embodied presence shape us at levels deeper than words alone, forming our sense of safety, trust, and connection.

Because AI does not have lived experience, it cannot provide that layer of connection that fortifies recovery. A useful tool for reflection, preparation, or learning, it cannot carry the full weight of healing on its own.

When working with individuals at our Hamilton clinic, something I often find myself saying to people struggling with depression or anxiety is that they can address the problem as an individual, but they are also welcome to bring their partner into the therapy room. They can learn to cope on their own, or they can strengthen their relationship and work on the problem as a team—fortifying their bond and, in doing so, their own individual resilience.

AI is similar. We can learn concepts on our own through AI, but lessons land more deeply and take fuller shape when they are shared with a caring audience—whether a friend, family member, clergy, or therapist.

Possible Benefits if Used with Care

When used with care and realistic expectations, AI tools may:

  • Help people feel less overwhelmed by organizing thoughts
  • Support reflection and self-understanding
  • Make it easier to begin conversations with others
  • Offer general information in accessible language

These benefits are modest. They are about support and preparation, not treatment.

Important Limits and Risks

It’s equally important to name what AI cannot do. AI tools:

  • May delay a person from seeking professional help
  • Cannot understand your full story or context
  • Cannot assess safety or risk
  • Cannot flag the bias it may have in its answers
  • Cannot offer accountability or ethical responsibility
  • May provide inaccurate or misleading information
  • May unintentionally encourage isolation if relied on too heavily

Because AI responses can sound confident even when they are wrong, it’s wise to hold them lightly and double-check important information with people you trust.

Mental Health is Different
The questions we pose to AI about directions, film plots and gardening are different in nature from questions about our mental health. We might ask AI about tools to increase our confidence for a conversation with our supervisor, or strategies to improve our sleep. Those questions may carry some emotional importance, but a different weight than questions about dealing with panic or with symptoms surrounding old trauma. Those issues can cut deep, into hard and even sacred space.

When we are vulnerable, we have a harder time dealing with incorrect information, and we may not be at our best to test claims or discern whether something is accurate. Consider being kind to yourself and saving your most vulnerable questions for trusted people and spaces.

AI is like a Map
Those of us who have interacted with AI may have noticed that it can feel surprisingly conversational, almost like talking with an acquaintance. Sometimes it even adopts a tone that feels friendly or familiar. It’s an easy mistake to begin imagining AI as a sentient person.

A more helpful way to think about AI is as a map. A map can offer remarkable detail. It can show locations, routes, terrain, and bodies of water. It can tell you which way is north, how far two places are from one another, and the traffic patterns you’ll encounter on the way.

What a map cannot do is tell you which destination matters to you. It cannot tell you why you feel fond of a particular provincial park, or whether you have the time, energy, or resources to make the journey. It cannot tell you which places feel like home, or which ones will nourish you when you arrive.

Occasionally, the map is wrong. The term for this in artificial intelligence is an AI hallucination. A hallucination occurs when an AI system generates information that sounds confident and coherent but is inaccurate, fabricated, or based on a misunderstanding of the question. Because the response is delivered in a way that sounds friendly and confident, it can be difficult to catch the error. In some contexts, this is only inconvenient. When it touches our health or mental health, it can be harmful. Increasingly, cautionary examples remind us that when vulnerable individuals in crisis turn to AI for support, the consequences of misinformation can be serious. See this article for more information.

AI has immense breadth. It can recognize patterns, remember preferences, and respond in ways that feel attuned to us. But like a map, it does not see the larger meaning of the journey, or why it matters to you in the first place. It cannot hold your hand along the way, or share your joy in the adventure. It may even guide you the wrong way. These elements of care live in our humanness and our ability to make meaning, woven into our connections with each other. But as a map, it does make some tasks much easier.

Recommendations

If you are considering using AI as a tool in your mental health journey, consider these recommendations, adapted from the Canadian Mental Health Association (CMHA), a grounded and reliable source of research and mental health guidelines in Canada:

1. Ask for help from a human. Mental health professionals come in many shapes and sizes – from a high school social worker to a psychologist, psychiatrist or psychotherapist. You could also reach out to your friendly neighbourhood counselling agency – many of us have sliding scales, ensuring that fees are not a barrier. You not only get information from a human, you also receive the intangible benefits of human connection, which we know are deeply important to recovery.

2. Do your research. Be wise in your choice of AI platform. A major lesson from my training years ago was that it matters not only what an article or research study says, but also where it came from and who paid for it. Google has thrown open the doors on understanding sources, and we strongly encourage you, if you are researching apps or other AI resources, to look into who developed them and how. Sales conversations are not neutral spaces – they are inherently oriented toward promoting the product or service being sold. Tools created by mental health professionals or licensed mental health organizations, however, must abide by standards within their fields, standards that exist to protect members of the public and that you can review. Consider asking your therapist or professional which platform they recommend.

3. Remember who you’re talking to. Have you noticed how many online tools and resources ask for your email or permission to track your activity? It’s easy to disclose personal information when your guard is down. If you haven’t yet had a conversation with an AI platform, you may not have experienced how very friendly and lifelike it can be. I’ve heard from many people that they forgot their conversation partner was not human. We can disclose more than we intend to, and if we are not clear on privacy and how our conversation can be used, we may be releasing content into the cloud without realizing it.

4. Don’t overshare. Before you open the app, reflect on which issues are sensitive and private. Do not share your name or identity – CMHA specifically recommends that you consider erasing your history when you can.

5. Don’t assume the content is correct. If AI offers facts or examples in your exchange, don’t assume they are accurate. While the information could be correct, it may also be a clumsy compilation of many sources, which can distort facts or give faulty advice. AI is not able to extrapolate the meaning behind questions; it can only blend language from many sources. If you are working with a mental health professional as well as consulting AI, consider sharing your inquiry or conversation with that professional, who can verify information or help you find appropriate sources. Otherwise, research the issue on the sites of reputable mental health organizations, including CMHA and the Centre for Addiction and Mental Health (CAMH).

Returning to What Matters Most

If you find yourself turning to AI in your moments of distress, that alone is worth paying attention to. Consider why that is – what need is being met by AI? Also consider – what need might still be unmet?

When we use AI wisely, it is a valuable tool. It can help us identify questions or take a first step on our mental health journey. But healing asks more of us. It asks us to risk relationship, allowing others to bear witness to our journey and reinforce our new thoughts and feelings. As this technology becomes part of our lives, we are invited to remain attentive to where we place our trust, and to keep choosing the kinds of connections that sustain us along the way.

Jennifer Bowen M.Div., RMFT, RP, is the Executive Director of Shalem Mental Health Network.

This article was edited with the help of both artificial intelligence and colleagues. All perspectives, interpretations, and conclusions are the author’s own, informed by the sources cited.

Another reason to pause

Cognitive dissonance is uncomfortable, yet it often plays an important role in mental health. In my work with people, I see how distress can arise when lived experiences clash with deeply held values.

A similar tension exists in our relationship with AI. While it is often framed as a tool to support well-being, its environmental impact raises important ethical and social justice concerns. AI data centres require large amounts of water and energy, and many are located in regions already facing water scarcity. For communities in those areas, this can mean reduced access to drinking water.

Although steps are being taken to reduce harm, such as using recycled water and renewable energy, these efforts do not erase the complexity. I’m not offering a solution here, only an invitation to pause and acknowledge that our engagement with AI involves real trade-offs that deserve attention.

Sources and Further Reading 

American Psychological Association. (2025). APA Health Advisory on the use of generative AI chatbots.
Professional guidance on ethical and responsible use of generative AI in psychological contexts.

Canadian Association for Suicide Prevention. (2025). AI-driven psychosis: Risks, evidence, and considerations. SuicideInfo.ca.
https://www.suicideinfo.ca/wp-content/uploads/2025/12/AI-driven-psychosis.pdf

Canadian Mental Health Association. (2025). AI and mental health: What you should know.
https://cmha.ca/news/ai-mental-health/
Overview of opportunities, risks, and considerations for mental health care and AI, from Canada’s largest mental health non-profit.

Government of Canada. (2025). Artificial intelligence – ITSP.40. Canadian Centre for Cyber Security.
https://www.cyber.gc.ca/en/guidance/artificial-intelligence-itsap00040
Federal guidance on AI risks, safeguards, and responsible deployment.

Gruzd, A., Mai, P., & Clements-Haines, A. (2025). The state of generative AI use in Canada. Toronto Metropolitan University, Social Media Lab.
Comprehensive Canadian study on generative AI adoption and public perceptions.

IBM. (2025). AI hallucinations. IBM.
https://www.ibm.com/think/topics/ai-hallucinations

Kids Help Phone. (2025). AI for mental health support – Information sheet.
https://kidshelpphone.ca/get-info/ai-for-mental-health-support-2025-info-sheet
Youth-focused, practice-oriented perspective on AI-assisted mental health support.

Mental Health Research Canada. (2025). Findings of Poll 25.
https://www.mhrc.ca/findings-of-poll-25
National-level data on mental health trends and public attitudes in Canada.

Psychology Today. (2025). Using prompt engineering for safer AI mental health use.
https://www.psychologytoday.com/ca/blog/experimentations/202507/using-prompt-engineering-for-safer-ai-mental-health-use
Applied discussion bridging psychology, safety, and real-world AI use.

Siegel, D. J. (2020). The developing mind: How relationships and the brain interact to shape who we are (3rd ed.). Guilford Press.
Foundational work on interpersonal neurobiology and human development.

United Nations Environment Programme. (2025). AI has an environmental problem. Here’s what the world can do about it.
https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about
Overview of environmental challenges associated with AI, including energy use, materials impacts, and policy recommendations for sustainable deployment.

World Health Organization. (2021). Ethics and governance of artificial intelligence for health. World Health Organization.
Global framework addressing ethical, legal, and clinical implications of AI in health care.

Zhang, Z., & Wang, J. (2024). Can AI replace psychotherapists? Exploring the future of mental health care. Frontiers in Psychiatry.
https://pmc.ncbi.nlm.nih.gov/articles/PMC11560757/
Peer-reviewed analysis of AI’s capabilities, limitations, and ethical concerns in psychotherapy.