The Enigma of Emotions: Before the Time of Emotionally Enabled AI
The reports from the AI battlefield have been grim for the self-esteem of the human race. We’ve acknowledged defeat, with our best chess and poker players left to surrender and our doctors left in the dust when it comes to medical diagnosis and treatment options. On many fronts we’ve been routed, and in our long retreat we pull out our last defense: emotions. We are filled with them. Anger, sadness, fear, joy, disgust, trust, surprise, love, and hate are emotions most people feel in reaction to another person, an event, or a situation. Or to an idea: one to which we pledge our identity, or one that threatens that identity.
The idea of an AI with superior cognitive skills and far more advanced critical reasoning becoming emotionally equipped, with triggers beyond those available to human beings, is a cause for discomfort. We are fearful enough of governments and corporations manipulating our emotions. The thought of an AI far more capable of spinning us emotionally like a weather vane creates powerful feelings.
As a writer of novels, I spend a great deal of time with fictional characters, describing their emotional reactions to each other and to the world. If novelists provide a valuable contribution, it is to enhance the emotional literacy of the reader. Emotions run as scripts through our movies, TV shows, paintings, music, and dance. Authors have a dog in the fight over whether AI will develop emotions that out-compete our own.
The world we travel through every day is filled with patterns, noise, distractions, disturbances, and possibilities. We look for patterns and react, for the most part, with feelings. That is the gravity well where our emotions exist. From the eighteenth-century Scottish philosopher David Hume to the contemporary psychologist Jonathan Haidt, we learn that our emotions are our operating system, and that our morality and our logical, rational mind are apps that run on this system with varying degrees of success. So long as you can place that Skype call, you don’t think very much, if at all, about the operating system that permits the connection to be made.
Remember the emotional impact of the widely circulated photograph of the body of Aylan, the three-year-old Syrian boy washed up on a Turkish beach? It changed public opinion about refugees overnight, from London to Berlin. But like most emotions, the feelings don’t stay at those high elevations for long. It didn’t take long for politicians to pull back from their hearts and return to their cooler, rational heads. Emotions are transitory, taking us hostage but never having the strength to hold us for long. You might object that revenge can last for generations. Yet not even the most vengeful can maintain that elevated state for long without refueling with some orchestrated violence.
Emotions are like snowflakes: intricate, beautiful, a force of nature. They create unity, binding together the people who share them. Emotions are also closely connected with our physical bodies, translating pain and pleasure into emotional states. What we desire and what we avoid are mediated by our emotions; they act as our carrot and stick.
Professor Burton’s opinion piece in The New York Times, titled “How I Learned to Stop Worrying and Love A.I.,” reads like a report from an experienced field commander who sees that his main lines of defense have been overrun and that his last stand against the enemy rests on a secret weapon: emotions. AI will never defeat us so long as we claim exclusive access to them. The premise is that our emotions involve a process no AI can duplicate. Burton argues for a division between emotions (we human beings keep those) and intellect (we concede we’ve lost that battle):
“The ultimate value added of human thought will lie in our ability to contemplate the non-quantifiable. Emotions, feelings and intentions—the stuff of being human—don’t lend themselves to precise descriptions and calculations. Machines cannot and will not be able to tell us the best immigration policies, whether or not to proceed with gene therapy, or whether or not gun control is in our best interest. Computer modeling can show us how subtle biases can lead to overt racism and bigotry but cannot factor in the flood of feelings one experiences when looking at a photograph of a lynching.”
Emotions shelter with consciousness under the label of ‘hard problems.’ We can explain and describe the end results, give them labels, and predict their range and power, but for all of that knowledge we remain unable to give a scientific explanation of how consciousness or emotions emerge in our brains and bodies. It is that hole in our self-understanding that gives some people comfort that no AI system can be designed with consciousness or equipped with emotions, since we don’t understand the mechanism that creates these states of being.
The point is that we might not be able to explain the mechanism, but we most certainly have feelings and are ‘conscious’ of ourselves, our mortality, and the emotional states of those around us. A hard problem means we’ve hit a wall. Burton suggests we negotiate a truce: humans get emotions, machines get quantified wisdom, and everyone is happy with the armistice. But this peace treaty is unlikely to last. The reason has to do with the acceleration of data about perception and our other senses, which contribute to our emotional state. Can critical reasoning decode the mechanism responsible for emotions? That is the unanswered question. We don’t know.
Let’s take the metaphor of color. Except for the color blind, we all see the same small fraction of the color spectrum. No one sees (without a specialized tool) in the infrared or x-ray range. The fact that we have technology clearly demonstrating the limited range of our own color perception indicates that there are experiences of seeing more refined, nuanced, and detailed than our biological, unenhanced vision allows. Emotions may turn out to be like our sense of color. Could anger and fear, for example, be crude, narrow-spectrum feelings that evolved as just good enough for us to survive in our environment?
What if emotions, like color, cover a large spectrum of possible shades of feeling? And if feelings shape our rational, logical mind, would the ability to feel, in the emotional counterpart of x-ray vision, increase the possibilities for rational decision-making from vast pools of data? If an AI can defeat the best chess player in the world on the chessboard, is it a stretch to imagine an AI that could feel multiple emotional states along a broad spectrum of feelings in order to make a move? Such an AI wouldn’t have ‘human’ intelligence or ‘human’ emotions. It would combine a vastly more powerful mechanism with the ability to edit, revise, and expand its emotional range to cope with a digital environment loaded with noisy data. This will be accomplished without human intervention. AI will pull away from anything remotely human in terms of emotions. At that point we leave the bell curve in the dust; on the revised curve, we sit an eyelash away from the chimpanzee.
At this transition stage, we are like the Wright Brothers at Kitty Hawk, trying to get lift into the air. Only unlike them, we are trying to fly the tricycle with wings to the moon and back. With AI, we’ve just started with the equivalent of Kitty Hawk technology. A hundred years from now, AI and humans will look back at this point in history, this final battle where the last hurdle was emotions and consciousness, and wonder how people in the old era were ever happy with the tiny emotional prison in which they’d been confined. As for novelists, our world of emotions will slot into the archive detailing the reactions of human beings across the full range of their feelings. Novels were ‘empathy’ exercises, yoga for our feelings. Until AI found a mechanism to open the doors of emotional perception and felt a sense of pity that we couldn’t follow what was on the other side of that door.