Think you can't fall in love with AI? Think again.

March 7, 2023

UM-Dearborn Professor of Marketing Aaron Ahuvia says that humans have been falling in love with objects for centuries, but this time, thanks to ChatGPT, there could be real cause for concern.

A movie still from the film Her (2013)
The main character in the movie Her (2013) fell for Samantha, an AI assistant. Image courtesy Warner Bros.

As chatbots like Replika and Bing enter the market, following the release of ChatGPT late last year, questions are arising about humans' capacity to form deep relationships, and even fall in love, with AI-enabled objects. This is already a hot topic in pop culture: the movie M3GAN, which revolves around a young girl's overly close relationship with an empathetic but murdery android doll, has become a surprise hit. And Bing's early exchanges with users have left some readers laughing and others quite unsettled.

Aaron Ahuvia
Professor Aaron Ahuvia presents his brand love research

UM-Dearborn Professor of Marketing Aaron Ahuvia says that humans have been falling in love with objects for centuries, but this time, there could be real cause for concern. A consumer psychologist and the author of a recent book on the subject (Little, Brown Spark), Ahuvia contends that the same psychology that marketers use to gain our loyalty can offer useful insight into our AI-enabled future.

Question: You are an expert on brand love. What is brand love and what can it teach us about our future relationships with chatbots?
Aaron Ahuvia: Brand love is marketing jargon for situations where people love products and brands. Chatbots, like products and brands, aren't human. So the underlying psychology of love for these things is the same. Our brain has an unconscious sorting mechanism that distinguishes between people and things, and reserves love for people. But sometimes this sorting mechanism gets fooled into classifying a thing as a person (a stuffed animal is a good example), which results in the phenomenon called anthropomorphism. When this happens, it becomes possible to love that thing. Chatbots are already so human-like, and poised to become more so, that it really ups the odds of this human-object confusion.

Q: So, chatbots are like teddy bears on steroids?
AA: One of the very consistent research findings is that objects like teddy bears, or chatbots, create a conflict in the person who interacts with them because their conscious mind knows it's not a person, but their unconscious is treating it as if it were human. If you've ever had trouble getting rid of an old possession you don't use any more, you've felt this conflict in action. Your conscious mind knows it isn't useful to you anymore, but your unconscious mind has created an emotional attachment to the object, which makes it hard to part with.

When you deal with a chatbot, you're going to have the same kind of conflict. That's important because a lot of people think, "Oh, consciously, I know that's not a person. Therefore I'm not going to behave toward it in emotional or irrational ways." But that's like saying, "Consciously, I know how alcohol affects my behavior. Therefore I can drink until I'm drunk, and it won't affect me." It doesn't work that way. It's going to affect your behavior, even if consciously, you know what's going on.

Q: Do you see this risk increasing as chatbots evolve?
AA: In the future, chatbots are going to not only have much better factual intelligence, they're going to have a lot of emotional intelligence. So when you say to a chatbot, "Oh, I had a hard day at work," it's going to know exactly the right thing to say back to make you feel better. And that's going to be emotionally very gratifying. In our relationships with other people, when they tell us they've had a hard day at work, sometimes we have the energy to be really caring and responsive. But sometimes we had a hard day at work, too, or we're in a hurry, so we don't respond in the best possible way. These chatbots are always going to be focused 100% on your needs. They're not going to have any needs of their own, and they're going to be very good at meeting yours. And it's going to be very easy to develop emotional attachments to these things.

Q: Still, I know better than to fall in love with a chatbot . . . don't I?
AA: What concerns me is that we've all heard about incidents where a tribe of people that has never been exposed to a certain virus gets exposed for the first time, and that virus just runs rampant because the people have never built up an immunity to it. We just experienced this with COVID. Your brain is in a similar situation. It evolved over hundreds of millions of years, and there were never any objects you had to interact with that talked like a person but weren't a person. We have no defenses against that.

What we see in human behavior over and over again is that we know at some level that doing challenging, difficult things is rewarding and makes our lives better. But, very often, we choose the easy things just because they're easy. Think about all the times you've chosen junk food over healthy food because it tastes good and it's available and it's convenient. Or all the times you've watched a kind of vapid movie instead of a more serious film on Netflix because you're tired and attracted to mindless entertainment. I think there is a real potential that we will have the same conflict in our social relationships, where our relationships with people are better, but these relationships with chatbots are easier.

Q: That sounds scary. Are there any upsides?
AA: We have an epidemic of loneliness. We are not taking it nearly as seriously as we should, given the costs to people's happiness and well-being, as well as their physical health. If AI could actually help solve that problem, it would be genuinely helpful to a lot of people.

Interview and story by Kristin Palm