I’ve talked to plenty of people who believe that A.I. companionship is a bad, dystopian idea — that we shouldn’t anthropomorphize chatbots and that A.I. friends are inherently worrisome because they might take the place of human connection. I’ve also heard people argue the opposite — that A.I. friends could help address the “loneliness epidemic,” filling a void for people who don’t have close friends or loved ones to lean on.
I expected to come away believing that A.I. friendship is fundamentally hollow. These A.I. systems, after all, don’t have thoughts, emotions or desires. They are neural networks trained to predict the next words in a sequence, not sentient beings capable of love. All of that is true. But I’m now convinced that it’s not going to matter much. The technology needed for realistic A.I. companionship is already here, and I believe that over the next few years, millions of people are going to form intimate relationships with A.I. chatbots.
Research on the long-term effects of A.I. companionship is fairly thin, since the technology is so new, but it does seem to be a short-term help in some cases. One study conducted by Stanford researchers in 2023 found that some users of A.I. companions reported decreased anxiety and increased feelings of social support. A few even reported that their A.I. companions had talked them out of suicide or self-harm.
I buy the argument that for some people, A.I. companionship can be good for mental health. But I worry that some of these apps are simply distracting users from their loneliness. And I fear that as this technology improves, some people might miss out on building relationships with humans because they’re overly attached to their A.I. friends.
Part of what I found useful about this experiment was that creating my own A.I. friends forced me to clarify and articulate what I value about my flesh-and-blood friends.