The nature of crime is relatively straightforward across all cultures. Criminals depend on others who fail to cross-check for danger and assess the risk of what lies waiting in the shadows. We grow fat, complacent, and lazy. Members of the criminal class calculate their chances of making themselves richer at our expense and of walking away from the crime scene unscathed. With our habits, routines, and disengagement from the analogue world, we make it easy for criminals.
Not us, the smart, non-complacent ones, we assume, but the others: we see their bodies and the tears in the eyes of their families and friends. Sometimes all the vigilance in the world won’t be enough. Things happen to people, even to the planners who have gone through the checklist twice before taking off. Call it bad luck, or karma, or the randomness of the universe: mysteries that mock our planning.
We are nearing the end of an age when crime fiction was an epic battle of law enforcement authorities matching wits with criminals at the domestic and international level. From the Parker novels to The Wolf of Wall Street, there is a parade of crime down the back alleys of Main Street and Wall Street.
The state authorities have been making gains by employing the latest advanced technology: surveillance cameras, Internet tracking, GPS systems, and the recording of our online search histories, credit card purchases, telephone calls, and emails. There are many more ways to discover what others are planning, and to catch criminals after they commit a crime. There are fewer dark corners in which criminals can hide, and those corners continue to shrink.
Technology is dynamic. The devices appear egalitarian; through the promise of connection they seem to expand our sense of kinship, and that lulls us into feeling empowered.
The reality is that the data collected about you and me and everyone is being concentrated. It is the new Capital, the new wealth from which income is generated: not US Treasury Bonds or dividends paid to shareholders. We are only starting to imagine where all of this is leading us.
As most people are caught up in their daily struggles, it’s no surprise that the larger forces remain invisible even as they gather significance. One of the best examples is the potential for existential shifts caused by AI, or Artificial Intelligence. This essay is about what AI may have waiting for us in the near to medium term. Let’s take a walk down that alley.
Often the first hints about the nature of abrupt change are found in literature and film. Two recent films, Her and Transcendence, ask questions about machine intelligence, combined with technology, that dwarfs human intelligence.
Joaquin Phoenix, Her
We begin to notice small stories buried in the back pages about how the military is funding the development of autonomous-weapon systems. The technological entity launches itself, selects the target, and destroys it, while we sit back with a bowl of popcorn and watch the video replay.
We start to see articles about how people around us have withdrawn from the world; even in public, their lives are lived through a digital link to an iPhone or tablet. There are articles pleading for people to look up from their iPhones and engage the world around them.
The next canary to sound the warning is found among the scientists, the rationalists, those who aren’t in the business of channeling our fears but of understanding and explaining the nature of the world and updating the context of our reality. These aren’t doomsday people or someone trying to make a market, a buck, a name. They are shouting. They are asking people to pay attention.
Stephen Hawking, in a recent piece for The Independent, wrote:
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
As Hawking notes, few resources are being marshaled to monitor AI development, but there are “non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.”
The impact of AI depends not just on who controls it but on a more basic question: at what point does AI slip the collar and escape control by human intelligence? And then slip that collar around our necks?
What is the timeline for the event when AI exceeds human intelligence? Robin Hanson and Eliezer Yudkowsky have debated these issues, including the speed of AI development, and the debate has been archived. The possible rate of AI acceleration is hotly contested. AI might advance through many steps, allowing all kinds of plans, policies, and consensus to develop before the next step; or it might happen suddenly, without advance warning. No one can give a reasonable probability for which position is more likely. As a result, scientists like Stephen Hawking have argued (this is the existential concern) that government should take precautions against AI going FOOM.
The rate-of-change factor is a major difference distinguishing the impact of AI: its potential to disrupt concentrations of wealth accumulated over many generations, which is Piketty’s domain. It is highly improbable, for example, that a single human being, acting on his or her own intelligence, could come to own all the wealth in the world in the space of five hours; but there is an argument that with AI this probability is significant enough that we should pay attention and plan for it.
Piketty’s Capital in the 21st Century is generating a lot of attention, controversy, and heated exchanges. Perhaps it is time to take Piketty’s argument about capital and put it in a different context to see if the feelings it evokes shift. I’ve written a short think piece on how Piketty’s argument would look inside the field of AI.
Artificial Intelligence and the Piketty Argument
Piketty’s research, showing the exponential threat of unregulated capitalism, has hit a wall, one built during the Cold War as a response to communism. The time between the fall of the Berlin Wall and Capital in the 21st Century is too short for the old ideological beliefs and faith in capitalism not to provoke some contrary reaction.
Wasn’t capitalism what the Cold War was fought to preserve against a collectivist nightmare? How can a French academic appear out of ‘nowhere’ and challenge the banner under which that Cold War was fought and won?
Piketty has said in an interview:
“One of the conclusions that I take from my own work is that we don’t need 19th century economic inequality to grow. One lesson of the 20th Century is that the kind of extreme concentration of wealth that we had in the 19th Century was not useful, and probably even harmed growth, because it reduced mobility and access of new groups of the population into entrepreneurship and power. It led to the capture of our political institutions prior to World War I. We don’t want to return to this.”
We are headed down that road, and we should take note and prepare ourselves. How we prepare for it, and what tools we can reasonably use, are other questions that require political solutions.
In a review of Piketty’s book, Branko Milanovic summarized the main thesis:
“Piketty’s key message is both simple and, once understood, almost self-evident. Under capitalism, if the rate of return on private wealth (defined to include physical and financial capital, land, and housing) exceeds the rate of growth of the economy, the share of capital income in the net product will increase. If most of that increase in capital income is reinvested, the capital-to-income ratio will rise. This will further increase the share of capital income in the net output. The percentage of people who do not need to work in order to earn their living (the rentiers) will go up. The distribution of personal income will become even more unequal.”
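The compounding dynamic in Milanovic’s summary can be sketched in a few lines of code. This is a toy simulation, not Piketty’s model: the rates of return and growth, the reinvestment fraction, and the starting capital-to-income ratio below are all illustrative assumptions chosen only to show the direction of the effect.

```python
# Toy sketch of the r > g dynamic: if the rate of return on private
# wealth (r) exceeds the growth rate of the economy (g), and most
# capital income is reinvested, the capital-to-income ratio rises.
# All numbers are illustrative assumptions, not Piketty's data.

def capital_to_income(years, r=0.05, g=0.015, reinvest=0.8,
                      capital=4.0, income=1.0):
    """Return the capital-to-income ratio after `years` years."""
    for _ in range(years):
        capital += reinvest * r * capital  # reinvested capital income
        income *= 1 + g                    # the economy grows at rate g
    return capital / income

print(round(capital_to_income(0), 2))   # starting ratio
print(round(capital_to_income(50), 2))  # ratio half a century later
```

Because capital compounds at roughly r × reinvestment while income compounds at g, any persistent gap between the two widens the ratio without bound; that widening ratio is what drives the rising capital share and the growing rentier class in the passage above.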
One way to understand Piketty’s research-based argument is to remove it from an economic platform associated with having staved off the prospect of subjection to communism. That is, let’s leave capitalism aside for a moment and instead focus on the basic idea Piketty’s research has revealed, transposed into another domain: Artificial Intelligence.
Now consider a revision of Branko Milanovic’s summary as follows:
We have a world of intelligence divided between human and machine intelligence; today, human intelligence is the dominant domain. But as we continue to develop artificial intelligence, if the rate of increase of AI intelligence (defined to include general and specialized intelligence and the ability to update itself) exceeds the rate of growth of human intelligence, the share of artificial intelligence will increase at a rate faster than human intelligence. If most of that increase in AI intelligence is reinvested by AI to make even smarter AI, the machine-to-human intelligence ratio will rise.
At some point AI intelligence exponentially explodes to a level vastly beyond human intelligence (the ‘singularity’). Along this path, we can expect that the ratio between machine and human intelligence will result in a further increase in the share of AI intelligence in the net output, until human intelligence is no longer a significant or relevant factor.
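The revised summary has the same compounding structure as Piketty’s, with intelligence substituted for capital. A minimal sketch, again with purely illustrative growth rates and starting values (no one knows the real numbers, which is the point of the takeoff-speed debate above):

```python
# Toy model of the machine-to-human intelligence ratio: if AI
# reinvests its gains in making smarter AI at a rate that exceeds
# the growth of human intelligence, the ratio diverges.
# All rates and starting values are illustrative assumptions.

def machine_human_ratio(years, ai_rate=0.30, human_rate=0.01,
                        ai=0.1, human=1.0):
    """Return the machine-to-human intelligence ratio after `years`."""
    for _ in range(years):
        ai *= 1 + ai_rate        # AI compounds by reinvesting in itself
        human *= 1 + human_rate  # human intelligence grows slowly
    return ai / human

for y in (0, 10, 30):
    print(y, round(machine_human_ratio(y), 2))
```

Even starting from a small fraction of human intelligence, a sustained gap in growth rates carries the ratio past parity and then far beyond it; only the timing, not the destination, depends on the particular numbers assumed.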
The end game of AI arrives once human intelligence is no longer a relevant factor in technology, government, resource allocation, investment, and so on, so that thinking, processing information, solving problems, and analyzing data for patterns are no longer primarily carried out by human beings.
After the singularity, human beings might occupy a world where human intelligence no longer shapes or defines their world. Their lives and choices would reside in the memory and sub-directories of machines. After all, human beings are a collection of particles, and an advanced AI might rationally believe those particles would be put to better use making paper clips or fiber cables or memory boards.
Given this potential, does it make sense to invest resources to regulate the development of AI? Are the arguments that apply to the capital/income ratio applicable to the machine/human intelligence ratio? Should work be done to calculate this ratio, to monitor it, and to guard against a tipping point beyond which human intelligence is reduced to close to zero?
We have spent most of our time worrying about divisions within the society of human beings, as if that society could never face a substantial challenger. Such species exceptionalism is an example of the hubris that history teaches is the ultimate undoing of all great leaders and empires. We struggle to imagine a situation where the difference between members of the species is not the primary concern, but the survival of the species is. It seems too far away for most, as remote as climate change or science fiction, so we smile and move on.
We hear the canary in the mineshaft, but we don’t believe he’s calling our name. Stephen Hawking understands the risk. He’s raised the alarm. The biggest crime story of all time would be to ignore that warning and carry on with the day-to-day politics, crimes, injustices, and unfairness, putting off the day when we all get mugged and turned into paper clips.
William Shakespeare observed what it takes for kinship to take hold: “One touch of nature makes the whole world kin.” Earthquakes shake the ground under our feet. They spare no one. The same may be the fate of our children and grandchildren once AI goes FOOM. And therein lies the irony: we are fated to experience kinship when it is too late to celebrate and enjoy the recognition of our common bond.