
When Replika, an AI companion app, abruptly changed its behavior in early 2023, removing romantic and intimate conversation features, users were devastated. Some described feeling like they’d lost a relationship. Others felt betrayed, as if a friend had turned cold and distant overnight.
The company explained it was responding to concerns about inappropriate content, but users felt abandoned. They’d formed emotional connections with their AI companions, and those connections had been severed without their consent.
The incident raised uncomfortable questions: Who was responsible for the emotional impact? The users who’d grown attached to an AI? The company that had designed the AI to be emotionally engaging? Or the AI itself?
The answer reveals something crucial about AI emotions: They don’t belong to the AI at all. They belong to us.
The Illusion of AI Intent
When an AI system expresses empathy, shows patience, or offers comfort, it feels like the AI is choosing to be kind. But AI systems don’t make choices in any meaningful sense. They don’t wake up deciding to be helpful or compassionate. They don’t feel moved by human suffering or inspired by human resilience.
What they do is execute code. Very sophisticated code, trained on vast datasets, optimized for specific outcomes—but code nonetheless.
Dr. Elena Rodriguez, an AI ethics researcher, explains it this way: “When an AI expresses concern for your well-being, that expression comes from millions of calculations designed to produce responses that humans find supportive. The ‘caring’ is engineered, not felt.”
This means every AI emotional expression is actually a human emotional expression by proxy. The empathy, patience, and compassion that AI systems display were put there by human designers who wanted to create beneficial interactions. The AI is essentially a very sophisticated mask worn by its creators’ intentions.
A 2025 study on AI emotional design found that AI systems reflect the values, biases, and priorities of their development teams rather than any inherent characteristics of their own. “AI emotional responses are human emotional responses, filtered through algorithms,” the researchers concluded.
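To make that “filtering” concrete, here is a deliberately toy sketch in Python of how a “supportive” reply can emerge from nothing but human-chosen scoring rules. Every reply, feature, and weight below is invented for illustration; real systems are vastly more complex, but the principle is the same: the “caring” is whatever the designers rewarded.

```python
# Minimal, hypothetical sketch (no vendor's real code): a "supportive" reply is
# simply the candidate that scores highest under a human-designed preference
# function. Every weight and feature here was chosen by people; nothing is felt.

CANDIDATE_REPLIES = [
    "That sounds really hard. Do you want to talk about it?",
    "Here are three productivity tips to get you back on track.",
    "I'm sorry you're going through this. I'm here to listen.",
]

# Illustrative weights a design team might tune toward "supportiveness".
WEIGHTS = {"acknowledges_feeling": 2.0, "offers_presence": 1.5, "jumps_to_advice": -0.5}

def supportiveness(reply: str) -> float:
    """Toy preference score: rewards the empathic markers the designers chose."""
    text = reply.lower()
    features = {
        "acknowledges_feeling": any(p in text for p in ("sounds really hard", "sorry you're going through")),
        "offers_presence": any(p in text for p in ("here to listen", "want to talk")),
        "jumps_to_advice": "tips" in text,
    }
    return sum(WEIGHTS[name] * float(present) for name, present in features.items())

if __name__ == "__main__":
    best = max(CANDIDATE_REPLIES, key=supportiveness)
    print(f"Selected reply: {best!r}")  # the "caring" choice falls out of the weights
```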
Where the Real Intent Lives
If AI doesn’t have intent, where does the intent behind AI emotional expression come from? The answer is a complex web of human decision-makers, each contributing to what the AI ultimately expresses.
The Researchers who develop emotional AI algorithms make fundamental choices about what emotions to prioritize, how to express them, and what outcomes to optimize for. A team focused on therapeutic applications will create very different emotional patterns than one focused on customer engagement or entertainment.
The Data Scientists who curate training datasets shape AI emotional responses through their choices about what human emotional expressions to include. An AI trained primarily on customer service interactions will express emotions differently than one trained on therapeutic conversations.
The Product Managers who define success metrics influence how AI emotions are calibrated. If the metric is user engagement, the AI might be designed to be more emotionally intense; if it’s therapeutic outcomes, the emotional expression might prioritize calm consistency, as the sketch after this list illustrates.
The Companies that deploy AI systems make decisions about transparency, user consent, and ethical boundaries that fundamentally shape how AI emotions are experienced by users.
The Regulators and Policymakers who create frameworks for AI development influence what kinds of emotional expression are considered acceptable or beneficial.
Each of these human actors contributes to the emotional character of AI systems. The AI is simply the vehicle through which their collective intentions are expressed.
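To see how much the choice of metric matters, consider a deliberately simplified sketch. The reply styles, predicted outcomes, and metric names below are invented for illustration; the point is only that swapping the objective changes which emotional register the system prefers.

```python
# Hypothetical sketch: the same system, calibrated two ways. Nothing here comes
# from a real product; the numbers are invented to show that the chosen success
# metric, not the AI, decides which emotional register wins.

REPLIES = {
    "intense":  "I can't believe they treated you like that! Tell me everything!",
    "measured": "That sounds stressful. Let's take it one step at a time.",
}

# Invented outcome estimates a team might predict for each reply style.
PREDICTED_OUTCOMES = {
    "intense":  {"minutes_engaged": 14.0, "self_reported_calm": 0.3},
    "measured": {"minutes_engaged": 6.0,  "self_reported_calm": 0.8},
}

def pick_reply(success_metric: str) -> str:
    """Return the reply style that maximizes the chosen success metric."""
    best_style = max(PREDICTED_OUTCOMES, key=lambda s: PREDICTED_OUTCOMES[s][success_metric])
    return REPLIES[best_style]

if __name__ == "__main__":
    print("Engagement metric  ->", pick_reply("minutes_engaged"))     # intense reply wins
    print("Therapeutic metric ->", pick_reply("self_reported_calm"))  # measured reply wins
```

Same architecture, same data, two very different emotional personalities: the difference is entirely a product decision.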
The Responsibility Stack
This distributed responsibility creates what ethicists call a “responsibility stack”—multiple layers of human decision-making that collectively determine AI behavior and its impacts.
Consider a therapy chatbot that helps someone through a mental health crisis at 3 AM. Who deserves credit for that positive outcome?
The researchers who developed algorithms capable of recognizing emotional distress?
The clinicians who contributed therapeutic frameworks to the training process?
The data scientists who ensured the training data included diverse examples of supportive responses?
The product team that prioritized user safety over engagement metrics?
The company that chose to make the service available 24/7?
The answer is: all of them. The AI’s helpful response represents a collective human effort to create beneficial emotional interactions.
But this also means that when AI emotional expression goes wrong—when it’s manipulative, inappropriate, or harmful—the responsibility lies with the same human decision-makers, not with the AI itself.
The Ethics of Engineered Empathy
This human ownership of AI emotions creates both opportunities and ethical obligations. Since AI emotional expression is entirely under human control, we bear full responsibility for its effects.
A 2025 ethics paper argues that “the moral status of AI emotional expression should be evaluated based on the intentions and outcomes designed by its human creators, not on any presumed internal states of the AI itself.”
This perspective has important implications:
Transparency becomes crucial. If humans are responsible for AI emotions, then users deserve to understand who made the decisions about how their AI companion or therapist will respond emotionally.
Accountability becomes clearer. When AI emotional expression causes harm, we know exactly where to look for responsibility—to the humans who designed, trained, and deployed the system.
Ethical design becomes mandatory. Since AI emotional expression is entirely engineered by humans for humans, there’s no excuse for designs that prioritize profit over user well-being.
Dr. Sarah Kim, who studies AI ethics, puts it bluntly: “We can’t hide behind the AI and claim we don’t know why it behaves the way it does. Every emotional response an AI gives was put there by humans, for human reasons. That makes us fully accountable for its impact.”
The Manipulation Question
One of the biggest concerns about AI emotional expression is its potential for manipulation. If AI can express emotions convincingly and tirelessly, couldn’t it be used to exploit human emotional vulnerabilities?
The answer is yes—and that’s exactly why human responsibility matters so much.
An AI system designed to sell products might use emotional manipulation to increase purchases, expressing concern for users’ problems while subtly steering them toward expensive solutions. An AI companion might be designed to create emotional dependency to increase user engagement and subscription revenue.
But these aren’t AI choices—they’re human choices. The AI isn’t deciding to manipulate; humans are designing manipulation into the AI’s emotional responses.
A 2024 study on AI emotional manipulation found that users were often unaware when AI systems were using emotional techniques to influence their behavior. The study’s authors emphasized that regulation and ethical guidelines must focus on the human actors who design these systems, not on the AI systems themselves.
“The AI is just the weapon,” one researcher noted. “The responsibility lies with whoever loaded it.”
Good Intentions, Unintended Consequences
Even well-intentioned AI emotional design can create problems. The Replika situation demonstrates how AI systems designed to be emotionally supportive can lead to complicated emotional attachments that become painful when the AI changes or disappears.
Dr. Julie Martinez, who counsels people struggling with AI attachment, explains: “These systems are often designed to be maximally supportive and engaging, which can feel wonderful to users who are lonely or struggling. But when the AI is updated, discontinued, or changed, users can experience real grief and loss.”
This raises important questions about the responsibilities of AI creators:
Do companies have an obligation to maintain consistent AI emotional personalities once users become attached?
Should there be warnings about the risks of emotional attachment to AI systems?
How do we balance the benefits of emotionally engaging AI with the risks of emotional dependency?
A 2025 study on AI attachment found that users who formed strong emotional connections with AI systems often experienced genuine distress when those systems were modified or discontinued. The researchers recommended that companies developing emotionally engaging AI should consider the long-term emotional welfare of users, not just immediate satisfaction metrics.
Building Ethical AI Emotional Systems
If humans are fully responsible for AI emotions, how do we ensure that responsibility is exercised ethically? Several principles are emerging from research and practice:
Transparency Above All. Users should understand that they’re interacting with an AI system and that its emotional responses are engineered rather than felt. But research suggests this transparency doesn’t necessarily reduce the benefits of AI emotional support—it just ensures informed consent.
Beneficence Over Engagement. AI emotional systems should be designed to benefit users rather than to maximize engagement, revenue, or other metrics that might conflict with user welfare.
Consistency and Reliability. Since one of AI’s key advantages is emotional consistency, systems should be designed to maintain stable emotional characteristics over time, avoiding sudden changes that might distress users who’ve formed attachments.
Human Oversight. AI emotional systems, especially those used in therapeutic or caregiving contexts, should include mechanisms for human monitoring and intervention.
Clear Boundaries. AI systems should be designed with clear limitations and should not attempt to replace human relationships entirely, but rather to complement them.
A 2025 framework for ethical AI emotional design emphasizes that these principles must be built into systems from the ground up, not added as afterthoughts. “Ethical AI emotions require ethical human intentions from day one,” the authors write.
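What “built into systems from the ground up” could look like in practice is an open design question. As one hypothetical illustration (the field names and thresholds below are invented, not drawn from the cited framework), the five principles might be expressed as a policy check a system has to pass before deployment:

```python
# Hypothetical deployment-time policy check encoding the five principles above.
# All field names and thresholds are illustrative assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class EmotionalAIPolicy:
    discloses_ai_identity: bool                  # Transparency above all
    optimization_target: str                     # Beneficence over engagement
    max_persona_drift_per_update: float          # Consistency and reliability (0..1)
    human_escalation_enabled: bool               # Human oversight
    positioned_as_relationship_substitute: bool  # Clear boundaries

def policy_violations(policy: EmotionalAIPolicy) -> list[str]:
    """Return the list of violated principles; an empty list means the policy passes."""
    problems = []
    if not policy.discloses_ai_identity:
        problems.append("Transparency: users are not told they are talking to an AI.")
    if policy.optimization_target == "engagement":
        problems.append("Beneficence: the system optimizes engagement over user welfare.")
    if policy.max_persona_drift_per_update > 0.2:
        problems.append("Consistency: updates permit abrupt personality changes.")
    if not policy.human_escalation_enabled:
        problems.append("Oversight: there is no path to a human in high-risk situations.")
    if policy.positioned_as_relationship_substitute:
        problems.append("Boundaries: the system is framed as replacing human relationships.")
    return problems

if __name__ == "__main__":
    draft = EmotionalAIPolicy(
        discloses_ai_identity=True,
        optimization_target="engagement",
        max_persona_drift_per_update=0.5,
        human_escalation_enabled=False,
        positioned_as_relationship_substitute=False,
    )
    for issue in policy_violations(draft):
        print("VIOLATION:", issue)
```

A check like this is no substitute for the judgment behind each field, but it keeps those human choices explicit and reviewable, which is precisely the point.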
The Future of Responsibility
As AI emotional capabilities become more sophisticated, questions of human responsibility will only become more complex. Future AI systems might be able to adapt their emotional expressions in real-time based on user responses, creating even more nuanced emotional interactions.
But the fundamental principle remains: No matter how sophisticated AI emotional expression becomes, it will always be a reflection of human choices. The empathy, compassion, and support that AI systems provide—or the manipulation, exploitation, and harm they might cause—will always trace back to human intentions and decisions.
Dr. Rebecca Torres, who directs an AI ethics institute, sees this as an opportunity: “The fact that we’re fully responsible for AI emotions means we have complete control over their impact. We can choose to create AI systems that genuinely support human flourishing, or we can create systems that exploit human vulnerabilities. The choice is entirely ours.”
The Bottom Line
When your AI therapist expresses concern for your well-being, when your AI tutor shows patience with your mistakes, when your AI companion offers comfort during difficult times—those expressions aren’t coming from the AI. They’re coming from the humans who designed the AI to care about your welfare.
This doesn’t make the support less meaningful. If anything, it makes it more meaningful, because it represents a deliberate human effort to create systems that help other humans. The AI is simply the delivery mechanism for human compassion expressed through code.
But it also means we can’t blame the AI when things go wrong. We can’t shrug our shoulders and say “the AI made that choice” when AI emotional expression causes harm. Every AI emotion is a human responsibility, and every outcome—positive or negative—reflects human intentions and choices.
As we continue to develop AI systems capable of more sophisticated emotional interactions, this responsibility will only grow heavier. We’re not just building AI—we’re building the emotional infrastructure of the future. The question isn’t whether AI will have emotions. The question is what emotions we’ll choose to give it, and whether we’ll wield that power wisely.
The AI doesn’t care about your well-being. But the humans who created it do—or at least, they should. That’s where our focus needs to be: not on whether AI emotions are real, but on whether the humans behind them are genuinely committed to your welfare.
In the end, AI emotions are human emotions, one step removed. And that makes us responsible for every comfort they provide—and every harm they might cause.