AI is an unattractive prospect for many of us in mental health care. We enter this field not to be bogged down in algorithmic modeling but to connect with people in meaningful, life-changing ways. Our work is grounded in empathy, understanding, and the human connection that we know makes a difference. But while AI might seem like an unwelcome disruption to the industry, it is inevitable. Clinical services are likely to meet this disruptive force with a tidal wave of regulation: slow, risk-averse, and perhaps unable to adapt with the agility the business world has shown. Given the immense pressures faced by both consumers and frontline workers in the mental health system, it would be a huge missed opportunity if we did not learn to leverage AI in ways that ease the burden and enhance care, as the business world is already doing.

To better understand the role AI can play in mental health, let’s explore how thought leaders in technology view the intersection of human capability and artificial intelligence. By reflecting on these perspectives, we can begin to envision a future where AI is a partner, not a competitor, in the mental health field.

AI as a Tool to Enhance Human Creativity in Mental Health

"AI is not about man versus machine, but man with machine." — Garry Kasparov

Technology has already shaped the delivery of health services. From desktop computers to tablets in waiting rooms, and from internal databases to cloud-based systems, large health services—such as those in rapidly growing regions like Geelong and the Surf Coast—rely on digital infrastructure to track diverse populations. Yet, despite these advancements, inefficiencies persist. Digitising outdated service models and administrative bloat does not, by itself, improve the consumer experience. Just because something has moved to the cloud doesn’t mean it makes life easier for those navigating care.

The burden of administration in frontline roles is a significant contributor to burnout. Ironically, the fear that keeps AI at arm’s length—the fear of it interfering with meaningful, human-centered work—points to the very burden AI could alleviate. By streamlining administrative tasks, AI can free workers to focus on what they truly want to do: engage with people and see real change.
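To make this concrete, here is a minimal sketch of the kind of administrative relief on offer: turning a worker’s rough session jottings into a structured progress note. It assumes access to a large language model API (OpenAI’s Python SDK is used purely for illustration, and the model name and prompt are placeholders); any real deployment would need consent, privacy, and clinical governance far beyond this sketch.

```python
# Minimal sketch: drafting a structured progress note from rough session
# jottings. Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

rough_notes = """
- met J at community centre, 45 min
- sleep routine discussed, some improvement
- still anxious re housing application
- agreed: follow-up call Friday, bring housing checklist
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite rough session notes as a concise progress note "
                "under three headings: Presentation, Discussion, Plan. "
                "Do not invent details that are not in the notes."
            ),
        },
        {"role": "user", "content": rough_notes},
    ],
)

print(response.choices[0].message.content)
```

Nothing about this replaces the worker’s judgment; it simply moves the typing out of the way.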

Given the right resources, encouragement, and leadership vision, mental health workers can be immensely innovative within their own pockets of bureaucracy. For many, this innovation—finding creative ways to engage consumers despite systemic dysfunction—is the only way they manage the stress and red tape. I’ve seen this firsthand: workers printing, cutting, laminating, building resource folders—crafting tools to empower their clients in ways the system itself doesn’t facilitate. The environment is ripe for an internal shake-up, one that releases these quiet innovators to do the work they are already driven to do, without administrative roadblocks standing in their way.

I’ve personally felt the tension of knowing what preparation, background reading, or professional development would benefit the people I work with, while lacking the time or capacity to fully engage. After trauma-informed care training last year, I immediately felt more confident interacting with consumers—meeting them where they were, safely and with an eye toward both their present and future, rather than being stuck in past distress. If we conceptualise stress as a buildup of pressure caused by stretched resources, then AI holds real potential: not just to free the workforce for more creative, impactful work, but to actively counter the burnout that stems from managing chronic stress.

AI and the Psychology of Threat: Reframing Fear as Opportunity

"We fear the unknown because we cannot control it, but progress is always made by those willing to explore it." — Unknown

There are many things we find threatening, and sometimes for good reason. Much of the work in mental health involves addressing the trauma of legitimate threats—experiences that have profoundly shaped us. It is therefore reasonable to be cautious about new developments like AI, critically assessing its risks and potential harm. At the same time, as professionals trained in trauma-informed care and the navigation of anxiety—both disordered and natural—we are well positioned to approach this "great unknown" with the same principles we encourage in others: curiosity, adaptability, and a focus on growth.

The way we conceptualise wellbeing support already frames challenges as opportunities for development and movement toward a better life. AI should be seen through the same lens. Consumers looking to the mental health profession for guidance should expect us to lead by example, engaging with difficult changes in a way that is constructive, rather than fearful. The opportunity AI presents is vast—but only if we embrace it with humility, allowing consumers to benefit from its full potential rather than gatekeeping access in ways that serve professional comfort over public good.

From intake and care planning to service delivery and discharge, AI has the capacity to offer more—not less—to consumers. If fully integrated, it could enhance accessibility, streamline inefficiencies, and empower individuals in unprecedented ways. However, early developments suggest that some clinical gatekeepers are responding with an instinct to maintain control. This raises a crucial question: Who is truly experiencing the threat here? It is not the consumers, who stand to gain from AI-driven autonomy, but those most deeply invested in clinical systems built on rigid hierarchies and slow-moving authority structures.

AI is not a tool that should be “overseen” by clinicians in a way that limits its potential. The sheer volume of knowledge AI can process and apply will quickly surpass any single practitioner’s expertise. The very definition of clinical authority is shifting. If we define clinical expertise as knowledge gained through higher education and evidence-based practice, then AI’s ability to generate evidence at an unprecedented pace will fundamentally disrupt this paradigm.

"If culture eats strategy for breakfast” then AI will eat the peer-reviewed journal article for dinner.

I have to grapple with this reality myself. Psychoeducation has been a cornerstone of my mental health practice—one of my greatest strengths. But AI will level the playing field in this space, just as it will for those with formal accreditation in specialized disciplines. How will I respond? With resistance, defensiveness, and a demand for regulation to maintain my edge? I hope not. A better path forward is to accept that every person now has access to an incredible psychoeducation resource in the palm of their hand. The challenge is not to resist this shift, but to find new ways to add value within it.

Creating a Collaborative Future

"The future is already here – it’s just not very evenly distributed." — William Gibson

The divide in mental health care is stark: those with financial resources can access private therapy, while those without are often left waiting in an overwhelmed public system. If you haven’t noticed this gap, it’s likely because you’ve either never needed to rely on the public system or, conversely, you’ve only ever had access to it and aren’t aware of what private services offer. In a system struggling to meet demand, many people will inevitably be left out in the cold.

Technology has been pitched as a solution, with mental health apps like Headspace and Smiling Mind aiming to "level the playing field." But let’s be honest—while these tools can be useful for some, they’re often just digitized, dehumanized versions of therapy models like CBT or mindfulness. They lack the core element that makes mental health support effective: meaningful human interaction. At best, they are useful self-help tools; at worst, they are a poor substitute for genuine therapeutic support.

Enter generative AI. Frustrating. Powerful. Life-changing. Sometimes even humorous. But ultimately, it represents a more dynamic way to engage with digital mental health support. Unlike static apps, AI can respond, adapt, and guide someone in real time, offering an interactive experience that comes closer to mimicking human support. That said, AI’s effectiveness depends on something crucial: how a person interacts with it.
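That interaction is easiest to see in code. Below is a minimal sketch of a stateful chat loop: because the full conversation history is resent each turn, replies can adapt to what the person has already shared, which is exactly what a static app cannot do. The SDK, model name, and system prompt are illustrative assumptions, and a one-line prompt is no substitute for clinical safeguards or crisis pathways.

```python
# Minimal sketch of a stateful chat loop: the full history is resent each
# turn, so replies can adapt to the conversation so far. Assumes the OpenAI
# Python SDK; the model and system prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "system",
    "content": (
        "You are a supportive wellbeing companion. Reflect back what you "
        "hear, ask open questions, and suggest simple grounding strategies. "
        "You are not a clinician; encourage professional help where needed."
    ),
}]

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=history,      # the history is what makes replies adaptive
    )
    message = reply.choices[0].message.content
    history.append({"role": "assistant", "content": message})
    print(f"AI: {message}")
```

What a person types into that loop, and what they make of the replies, is where literacy comes in.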

This is where another major inequality emerges—AI literacy. Engaging effectively with generative AI requires a certain level of education, digital literacy, and critical thinking. For those who are comfortable navigating technology, chatbots could provide real value. But for others—particularly those with lower levels of education or access to technology—AI could be yet another frustrating, inaccessible tool that fails to meet them where they are. If AI literacy becomes another dividing line between the haves and have-nots, then the playing field isn’t being leveled—the goalposts are just being moved.

There’s also the question of how public mental health systems will integrate AI. In theory, AI could be used to track consumer journeys, improve continuity of care, and offer early interventions. But if the system adopts AI the same way it has adopted other efficiency-driven models—cutting costs rather than improving care—it could further entrench a two-tiered system. Free services often don’t come with a red carpet experience, and if AI is simply used to ration access rather than expand it, we’ll just end up with a second-rate digital substitute for those who can’t afford better.

However, AI also has the potential to disrupt existing power structures. Public mental health systems are slow-moving, bureaucratic, and often fail to meet people where they are. AI could offer individuals more autonomy, helping them navigate services, track their progress, and even challenge traditional gatekeeping. If developed and implemented with the right ethical framework, AI could give people more control over their mental health care rather than making them dependent on an overburdened system.

The real challenge is ensuring AI isn’t just another band-aid on a broken system but a tool for meaningful change. Who gets to decide how AI is implemented in mental health care? Will it be used to empower individuals, or will it be another way to stretch limited resources at the expense of quality care? The answer will determine whether AI actually levels the playing field—or just reinforces the divide.

Empowering Mental Health Workers with AI

"AI is about augmenting human ability, not replacing it." — Jeff Bezos

Right now, AI may not be front of mind for most mental health workers, whether on the frontline or in leadership. If anything, their perception of AI is likely shaped by broader societal changes—seeing AI tools gradually integrate into daily life but not necessarily feeling an urgent need to assess their impact on their own roles. If concerns do exist, they may revolve around fears of job displacement. However, many in the field assume that mental health work, by its nature, is deeply human and therefore resistant to automation.

This assumption—that mental health work is irreplaceable due to its grounding in human connection, affect, behavior, and cognition—isn’t entirely wrong. But it also underestimates the extent to which AI can augment, rather than replace, core aspects of practice. The most strategic response isn’t to dismiss AI as irrelevant but to critically assess how it can support, enhance, and even expand professional expertise.

For example, professional development in mental health has long been constrained by systemic barriers. Many workers want access to high-quality training—whether in leadership, trauma-informed care, or system navigation—but are frequently blocked by cost, availability, or workplace limitations. This is despite billions being poured into reform efforts aimed at improving the system. AI presents an opportunity to break down these barriers, offering scalable, individualised, and interactive learning experiences that don’t rely on slow-moving bureaucracies to implement.
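As a sketch of what “scalable, individualised” learning might look like in practice, the snippet below generates a short reflective-practice exercise tailored to a worker’s role and topic. The API, model, role, and topic are illustrative assumptions rather than a recommended training tool.

```python
# Minimal sketch: generating an individualised micro-learning exercise.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name, role, and topic are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

role = "community mental health support worker"
topic = "trauma-informed care"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            f"Write three reflective-practice questions on {topic} "
            f"for a {role}, each tied to a realistic scenario they "
            "might encounter this week. Keep each under 60 words."
        ),
    }],
)

print(response.choices[0].message.content)
```

The point is not the specific output but the cost: minutes of a worker’s time, not a training budget and a twelve-month procurement cycle.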

This doesn’t mean AI will replace educators. Nurses or social workers who move into training roles won’t be made redundant by AI-driven learning systems, but those who fail to integrate AI into their work may struggle to remain relevant. If institutions remain slow to adopt innovative training approaches, frontline workers will look outside their organisations for thought leadership that better aligns with their needs. The role of AI in professional development won’t be to displace trainers but to make learning more accessible, dynamic, and responsive to real-world challenges.

AI and Decision-Making in Mental Health Care

Beyond education, AI has the potential to improve decision-making at multiple levels. Many mental health professionals have sat through care planning meetings, program reviews, or multi-agency discussions that were inefficient—either due to too many people in the room, the wrong people in the room, or dominant voices overshadowing quieter but valuable perspectives. AI-driven data insights and coordination tools could help streamline these processes, ensuring that decision-making is more informed, equitable, and consumer-centered.
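A small example makes the idea tangible. The sketch below condenses a brief, de-identified care-planning exchange into decisions, actions, and open questions, and asks the model to flag concerns that were raised but never addressed, one way quieter voices might be captured rather than lost. The API, model, and transcript are illustrative assumptions; anything real would require consent and strict data governance.

```python
# Minimal sketch: summarising a de-identified care-planning discussion into
# decisions, actions, and open questions. Assumes the OpenAI Python SDK;
# the model name and transcript are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()

transcript = """
Coordinator: Options are step-down housing or extending current support.
Worker A: J prefers step-down, but is worried about losing the Tuesday group.
Worker B (quietly): The group is the main protective factor right now.
Coordinator: Let's revisit next month.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Summarise this meeting under three headings: Decisions, "
            "Actions, Open questions. Flag any concern that was raised "
            "but not addressed.\n\n" + transcript
        ),
    }],
)

print(response.choices[0].message.content)
```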

However, AI should never be a substitute for human judgment. If used poorly, it risks oversimplifying complex discussions, reducing nuanced care decisions to rigid, algorithmic outputs. The goal should be to use AI to enrich discussions, not replace them—to integrate it as an additional layer of insight rather than a tool for leaders to entrench existing gatekeeping structures.

Ultimately, the key to AI integration in mental health is ensuring that workers use it in a way that enhances their skills rather than diminishes their role. Those who rely on AI as a shortcut—offloading their responsibilities without critical engagement—may find themselves replaced not by AI itself, but by the consequences of their own disengagement. Thoughtful, ethical, and strategic adoption of AI will determine whether it becomes a tool for empowerment or just another layer of dysfunction in an already strained system.

Navigating AI’s Potential and Resistance

"We are only beginning to scratch the surface of AI’s potential. The possibilities are limitless." — Fei-Fei Li

The potential of AI in mental health care is vast, largely untapped, and—if we’re being honest—overwhelming. It’s easy to get caught up in the excitement of what’s possible, but it’s just as important to remain grounded in the reality of systemic resistance. AI may hold limitless opportunities, but that doesn’t mean it won’t face equally limitless pushback.

Even as you read this, you might feel a natural resistance—not necessarily because AI is inherently bad, but because adopting new technology requires adaptation, effort, and change. This isn’t unique to AI. Every major technological shift in history has faced skepticism, from the arrival of the internet in workplaces to more foundational transformations like the printing press. For those in power, these shifts are often seen as threats rather than progress. The printing press, for example, disrupted the church’s control over knowledge, making ideas accessible to the common person. AI poses a similar challenge to entrenched systems—democratising access to information, streamlining inefficiencies, and shifting where authority and expertise reside.

But if you’re reading this, you’re not the one trying to preserve outdated structures of power. You’re more likely the onlooker, watching as the first cracks in the old system become visible. The real question is: will you become fluent in this new landscape, or will you be left behind? Just as literacy was key to navigating the printed word’s revolution, understanding AI is key to navigating the shifts ahead. If we want ethical, consumer-focused, and frontline-driven mental health care to thrive, those with genuine care for the field need to engage with AI, not leave it to those whose interests lie in self-preservation.

This isn’t about buying into hype or predicting every AI-driven change that will come. It’s about focusing our energy where it matters—on building the skills, knowledge, and resilience that will ensure our relevance and wellbeing in the years ahead. Rather than fixating on the forces outside our control, we should be channeling our efforts into what we can shape: how we engage with AI, how we integrate it into practice, and how we ensure it serves, rather than undermines, the human elements of mental health care.

Embracing AI as a Tool for Human-Centered Mental Health Care

AI, when applied thoughtfully, has the potential to enhance both the care individuals receive and the well-being of mental health professionals. Rather than threatening the essence of mental health work, AI can be a powerful ally—helping to reduce administrative burdens, improve decision-making, and expand access to knowledge and training.

Embracing AI doesn’t mean losing the human touch; it means using technology to reinforce what already defines good mental health care—empathy, connection, and thoughtful engagement. Whether in professional development, service navigation, or collaborative decision-making, AI has the capacity to augment human expertise, not replace it. But its impact will depend on how we choose to engage with it. Those who integrate AI strategically will find themselves better equipped to support both consumers and colleagues, while those who ignore its potential risk being left behind.

Throughout this discussion, we’ve explored how AI can empower workers, challenge inefficiencies, and reshape how knowledge is shared in mental health care. The synergy between psychology, mental health, and AI shows that new technologies don’t have to work against our values—they can reinforce them. The question is not whether AI will change mental health care, but whether those who truly care about the field will take an active role in shaping how it’s used.

AI & Mental Health: Augmenting Human Care & Creating a Future of Collaboration