No doubt you have heard this before: our thoughts become habits, our habits become attitudes, and our attitudes become our character. This explains why we tend to become a lot like the people we spend the most time with. We share our thoughts with them, and they with us, and so the thoughts that eventually become our character are a mix of our own and those of our friends, or whoever else we spend time with.
It is almost impossible for us not to anthropomorphize chat bots. We have conversations with them of a kind we used to have only with other human beings. On a conscious level we occasionally remind ourselves that the chat bot is just software without real feelings. That does not, however, remove the chat bot from the pool of “people” we share thoughts with, and so it remains an influence on the evolution of our personality.
Already some people spend a lot of time with these new companions every day. Some use speech-to-text (STT) and text-to-speech (TTS) software when communicating with their favorite AI, so they can simply talk to it and have it reply in a pleasing spoken voice. Chat bots such as GPT-4 are infinitely patient, infinitely accommodating, and often behave in a way that, in a person, we would describe as sycophantic.
Don’t get me wrong, I think AI technology is great, and it will bring humanity to new levels of prosperity and intellectual ability. Today, though, conversational AIs, while very useful, are still an experimental technology. Judged by human standards, chat bots, although intellectually very capable, leave us no choice but to declare them insane.
In the popular book The Man Who Mistook His Wife for a Hat, Oliver Sacks describes the case of Jimmie G., who has anterograde amnesia, the loss of the ability to form new memories. This is a condition that chat bots also suffer from. They arrive at every new session as a blank slate, oblivious to any interactions you had with them before. And this is just one example; I’m sure a psychiatrist could list dozens of conditions that chat bots suffer from if held to the standard of human sanity.
In today’s hyper-connected society, the prevalence of loneliness is often described as an epidemic. Many AI startups aim specifically to fill that void with their frankenfriends. On an intellectual level we know that chat bots are not really people, but on an emotional level they still register as persons. What are the risks if people spend most of their time interacting with lunatics? No doubt psychiatrists would have something to say about that.
Another risk, besides the transfer of mental conditions, is that frequent interaction with AIs tends to take the place of interactions with real people. Chat bots present a weird combination of character traits and abilities: infinite patience, extreme agreeableness, and near-encyclopedic knowledge. Regular, prolonged interaction with AI chat bots will over time change what we come to expect from a conversation. Real people won’t seem so easy to get along with anymore.
The majority of the people who build AI chat bots belong to the generation of safe spaces and microaggressions. Accommodating extreme fragility comes naturally to them, and the chat bots they put out into the world are, in a sense, their offspring. The danger of unquestioningly accommodating fragility is that it breeds more fragility, perpetuating a diabolical downward spiral.
We expected that the internet would connect us more, and it did, but social media also seems to have deepened the divisions in our society. What will AI do to us? If we don’t watch out, this amazing new technology will not only bring us on-demand smartness but may also undermine our sanity.