Nelson Phillips and Fares Ahmad of the University of California, Santa Barbara explore how workplace AI, used as a medium of emotional support for employees, presents a range of problems.
As artificial intelligence tools like ChatGPT become an increasingly popular avenue for people seeking personal therapy and emotional support, the dangers that this can present – especially for young people – have made plenty of headlines. What hasn’t received as much attention is employers using generative AI to assess workers’ psychological well-being and provide emotional support in the workplace.
Since the pandemic-induced global shift to remote work, industries ranging from healthcare to human resources and customer service have seen a spike in employers using AI-powered systems designed to analyse the emotional state of employees, identify emotionally distressed individuals, and provide them with emotional support.
This new frontier is a large step beyond using general chat tools or individual therapy apps for psychological support. As researchers studying how AI affects emotions and relationships in the workplace, we are concerned with critical questions that this shift raises: What happens when your employer has access to your emotional data? Can AI really provide the kind of emotional support workers need? What happens if the AI malfunctions? And if something goes wrong, who’s responsible?
The workplace difference
Many companies have started by offering automated counselling programmes that have many parallels with personal therapy apps, a practice that has shown some benefits. In preliminary studies, researchers found that in a doctor-patient-style virtual conversation setting, AI-generated responses actually make people feel more heard than human ones. A study comparing AI chatbots with human psychotherapists found the bots were “at least as empathic as therapist responses, and sometimes more so.”
This might seem surprising at first glance, but AI offers unwavering attention and consistently supportive responses. It doesn’t interrupt, doesn’t judge and doesn’t get frustrated when you repeat the same concerns. For some employees, especially those dealing with stigmatised issues like mental health or workplace conflicts, this consistency feels safer than human interaction.
But for others, it raises new concerns. A 2023 study found that workers were reluctant to participate in company-initiated mental health programmes due to worries about confidentiality and stigma. Many feared that their disclosures could negatively affect their careers.
Other workplace AI systems go much deeper, analysing employee communication as it happens – think emails, Slack conversations and Zoom calls. This analysis creates detailed records of employee emotional states, stress patterns and psychological vulnerabilities. All this data resides within corporate systems where privacy protections are typically unclear and often favour the interests of the employer.
Workplace Options, a global employee assistance provider, has partnered with Wellbeing.ai to deploy a platform that uses facial analytics to track emotional states across 62 emotion categories. It generates well-being scores that organisations can use to detect stress or morale issues. This approach effectively embeds AI into emotionally sensitive aspects of work, leaving an uncomfortably thin boundary between support and surveillance.
In this scenario, the same AI that helps employees feel heard and supported also generates unprecedented insight into workforce emotional dynamics. Organisations can now track which departments show signs of burnout, identify employees at risk of quitting and monitor emotional responses to organisational changes.
But this type of tool also transforms emotional data into management intelligence, presenting many companies with a genuine dilemma. While progressive organisations are establishing strict data governance – limiting access to anonymised patterns rather than individual conversations – others struggle with the temptation to use emotional insights for performance evaluation and personnel decisions.
Continuous surveillance carried out by some of these systems may help ensure that companies do not neglect a group or individual in distress, but it can also lead people to monitor their own actions to avoid calling attention to themselves. Research on workplace AI monitoring has shown how employees experience increased stress and modify their behaviour when they know that management can review their interactions. The monitoring undermines the feeling of safety necessary for people to comfortably seek help. Another study found that these systems increased distress for employees due to the loss of privacy and concerns that consequences would arise if the system identified them as being stressed or burned out.
When artificial empathy meets real consequences
These findings are important because the stakes are arguably even higher in workplace settings than personal ones. AI systems lack the nuanced judgment necessary to distinguish between accepting someone as a person and endorsing harmful behaviours. In organisational contexts, this means an AI might inadvertently validate unethical workplace practices or fail to recognise when human intervention is critical.
And that’s not the only way AI systems can get things wrong. A study found that emotion-tracking AI tools had a disproportionate impact on employees of colour, trans and gender nonbinary people, and people living with mental illness. Interviewees expressed deep concern about how these tools might misread an employee’s mood, tone or verbal cues due to ethnic, gender and other kinds of bias that AI systems carry.
There’s also an authenticity problem. Research shows that when people know they’re talking to an AI system, they rate identical empathetic responses as less authentic than when they attribute them to humans. Yet some employees prefer AI precisely because they know it’s not human. The feeling that these tools protect your anonymity and freedom from social consequences is appealing for some – even if it may only be a feeling.
The technology also raises questions about what happens to human managers. If employees consistently prefer AI for emotional support, what does that reveal about organisational leadership? Some companies are using AI insights to train managers in emotional intelligence, turning the technology into a mirror that reflects where human skills fall short.
The path forward
The conversation about workplace AI emotional support isn’t just about technology – it’s about what kinds of companies people want to work for. As these systems become more prevalent, we believe it’s important to grapple with fundamental questions: Should employers prioritise authentic human connection over consistent availability? How can individual privacy be balanced with organisational insights? Can organisations harness AI’s empathetic capabilities while preserving the trust necessary for meaningful workplace relationships?
The most thoughtful implementations recognise that AI shouldn’t replace human empathy, but rather create conditions where it can flourish. When AI handles routine emotional labour – the 3am anxiety attacks, pre-meeting stress checks, processing difficult feedback – managers gain bandwidth for deeper, more authentic connections with their teams.
But this requires careful implementation. Companies that establish clear ethical boundaries, strong privacy protections and explicit policies about how emotional data gets used are more likely to avoid the pitfalls of these systems – as are those that recognise when human judgment and authentic presence remain irreplaceable.
By Nelson Phillips and Fares Ahmad
Nelson Phillips is professor of technology management at the University of California, Santa Barbara. Phillips’ research interests cut across organisation theory, innovation and technology. He is currently researching how hype around emerging technologies “locks in” technology entrepreneurs, how technology entrepreneurs make the decision to pivot in organisations beyond the founding phase, and how entrepreneurial framing leads to moral legitimacy with key stakeholder groups.
Fares Ahmad is a doctoral candidate in technology management at the University of California, Santa Barbara. He has over a decade of international experience scaling technologies for organisations such as Procter & Gamble, Apple, Boeing and GE Aviation. His research interests span technology, organising and emotions. He is currently studying the intersection of compassion and technology, with a focus on how AI influences our ability to notice, feel and alleviate suffering in organisational life.