Waseda University Research Reveals The Surprising Ways AI Can Impact Our Emotional Lives
By Mark Travers, Ph.D.
September 23, 2025

Mark Travers, Ph.D., is the lead psychologist at Awake Therapy, responsible for new client intake and placement. Mark received his B.A. in psychology, magna cum laude, from Cornell University and his M.A. and Ph.D. from the University of Colorado Boulder. His academic research has been published in leading psychology journals and has been featured in The New York Times and The New Yorker, among other popular publications. He is a regular contributor for Forbes and Psychology Today, where he writes about psycho-educational topics such as happiness, relationships, personality, and life meaning.
Researcher Fan Yang reveals how classic attachment theory helps explain the surprising ways we form emotional bonds with AI.
A new study published in Current Psychology sought to uncover whether the same psychological principles that explain how humans bond with one another also apply to our interactions with AI.
Drawing on attachment theory, the researchers found that, for some users, AI can act as a “safe haven” in times of stress, a “secure base” that encourages exploration and a consistently available presence, though not without important caveats and risks.
I recently spoke with the lead author of the study, Fan Yang, about why attachment theory is a useful framework for understanding human–AI bonds, what their new scale reveals about patterns of anxiety and avoidance toward AI and the ethical challenges of tailoring artificial companions to our psychological needs. Here’s a summary of our conversation.
Your study applies “attachment theory” to our relationships with AI like ChatGPT. Could you explain in simple terms why you thought this psychological theory, traditionally used for human bonds, would be a valuable tool for understanding how we interact with AI today?
Attachment theory highlights the importance of supportive relationships. People naturally seek care and reassurance, and those who are “stronger and wiser” serve as ideal attachment figures because they can consistently provide support.
With the rapid development of artificial intelligence, however, AI can now offer not only information but also emotional support. In many areas, AI even outperforms humans. These advances make AI a novel and potentially significant type of attachment figure.
What are the key attachment-related functions that AI might fulfill?
There are three primary attachment-related functions:
Proximity seeking. People tend to keep in touch with their attachment figures to ensure their availability.
Safe haven. When people feel stressed, they seek comfort and support from their attachment figures.
Secure base. When there is no apparent threat or stress, the presence of attachment figures encourages people to explore and pursue growth.
As shown in our paper, for some people, AI fulfilled all three of these functions.
You developed the Experiences in Human-AI Relationships Scale (EHARS). Why was it necessary for you to create a new scale from scratch? What unique aspects of the human-AI dynamic does your scale capture that others would miss?
There are existing attachment scales that capture attachment styles toward parents, partners, friends and even pets. However, there are many significant differences between AI and these attachment figures. For instance:
First, at least for now, AI will not purposely walk away and leave you on your own. My cat, however, does this all the time. Therefore, why people feel insecure when interacting with AI remains an interesting question.
Second, you cannot exactly hug an AI the way you hug your friends, kiss your partner or play with your cat, even though physical touch is a vital part of how attachment forms.
Third, AI does not have a life of its own, so it cannot share experiences from its own life with you. This makes the relationship between humans and AI inherently one-sided.
Considering these features, expressions of anxiety or avoidance when interacting with AI may differ from those seen in other attachment relationships, making it necessary to develop a new scale.
Your research found that experiences in human-AI relationships can be described by two dimensions: attachment anxiety and attachment avoidance. For someone who has never heard these terms, what does it actually look like for a person to be anxiously attached versus avoidantly attached to an AI?
Think of someone who worries a lot about whether the AI will respond warmly enough or give them the reassurance they crave. They may keep checking for replies or repeatedly ask the AI for emotional affirmation with phrases like, “Please tell me you understand,” or “Can you say that in a kinder way?” Even though they know the AI can’t truly abandon them, they still feel uneasy if it gives short or neutral answers and may push for more “affectionate” responses.
Those who are avoidantly attached to AI may be the opposite: they keep the AI at arm’s length. They might use the AI strictly for factual tasks and feel uncomfortable sharing anything personal. They may avoid opening up or explicitly say they don’t want to get “too close” to the AI, preferring to maintain emotional distance.
Your findings show that people with higher attachment anxiety toward AI seek emotional reassurance, while those with attachment avoidance prefer emotional distance. How might these patterns affect how people interact with AI on a daily basis?
We have only just proposed the ideas of attachment anxiety and avoidance toward AI and developed a scale to capture this new type of relationship, so much more research is needed to understand its effects. However, we believe these attachment tendencies are likely to shape the way people use AI.
For example, they may influence the purposes for which people turn to AI and the kinds of interactions they seek. Over time, they may also lead to different outcomes of AI use, such as how much emotional support people feel they receive or how reliant they become on the technology, and even users’ mental health.
Tailoring AI to respond to attachment styles could improve user experience, but it also raises ethical concerns about potential psychological dependency and blurred lines between human and AI relationships. How should developers and users approach these risks while harnessing AI’s benefits for emotional support?
Tailoring AI to people’s attachment styles can certainly make interactions feel more supportive. Still, it also creates the risk of people becoming overly dependent on AI or confusing algorithmic responses with genuine human care.
To balance these benefits and risks, developers should build in transparency, making it clear when and how AI is simulating empathy, and set clear limits on the system’s role (for example, by offering opt-in settings, usage reminders, and links to human help for serious emotional needs).
At the same time, users should treat AI as a supplement rather than a substitute for human relationships, using it for practical or short-term emotional support while keeping real social connections and professional mental-health resources at the center of their support network.
Looking ahead, what directions in research or AI development do you believe are critical for deepening our understanding of how attachment dynamics influence human–AI interactions?
First and foremost, it is important to situate human–AI relationships within the broader landscape of human connections, such as interpersonal relationships or relationships with pets.
For example, an important question for future research is whether people who struggle with close human relationships can still maintain happiness and well-being if they form a strong, positive bond with AI.
Furthermore, if attachment to AI does prove significant, an intriguing question is how we might cultivate a sense of security through AI.
Is the rise of AI leaving you anxious? Take this science-backed test to learn more: AI Anxiety Scale