Certain types of information about AI tends to the tabloid headline who calls catastrophism, but there are a number of precautions that should not be ignored.
A few weeks ago, media and social networks were invaded by headlines like “Facebook closes chatbots” that created a secret language, “others talking about” an artificial intelligence that had “come alive” “,” two robots talked and had to disconnect them “Or even ” Facebook’s AI creates its own language in a chilling advance of our future potential. ” Headlines that many users accompanied commentary referring to Skynet, the AI that realizes itself in Terminator and tries to exterminate the humans.
The prophecy seemed to be fulfilled, or at least insinuated in a first step towards this scenario that has so often reflected movies, series and literature with a greater or lesser degree of catastrophism: machines are able to talk to each other! With your own language! To know what would happen if Facebook does not turn them off!
Could not be farther from the truth; or, at least, nothing more madly interpreted. As many of these headlines or other similar ones later correctly explained, the conversation that had developed between the two agents occurred within the framework of a Facebook Artificial Intelligence Research (FAIR) work, in which a group of researchers tried to train bots, showing them first through a series of models of negotiations between humans and then making them converse so that, in addition to having dialogues, they were able to learn how to negotiate.
To this end, each was assigned a series of symbolic objects with different value, with the mission that they would come to exchange them with each other with the greatest possible benefit. In some of the dialogues different mechanisms were used, such as reinforcement learning. This is where a ” divergence of human language ” occurred in a negotiation between two bots , citing the report .
In particular, it is explained that this model “is arranged, since we find that updating the parameters of both agents – during the reinforced learning – led to divergence of human language.” The company’s blog post for the company’s engineers complemented this, simplifying all subsequent technical explanation of the report explaining that this deviation had been reached ” since agents developed their own language to negotiate .”
“I just got back from the CVPR and found my Facebook and Twitter feed exploding with articles describing apocalyptic and catastrophic scenarios, with Facebook investigators turning off IA agents who invented their own language.” This is Dhruv Batra, one of the investigators of the work, who explains that although this may sound unexpected or alarming to people unfamiliar with the subject, the idea of these elements talking ” is a well-established subfield of artificial intelligence“, And clarifies that it is not a disconnection, but that it is due to a change of parameters of the experiment. That the bots interacted had a concrete mission. Once studied, to something else. That is, they did not turn them off because they had created a secret language or could conspire, but because they had fulfilled their purpose.
Zuckerberg vs Musk
Batra calls the media coverage that leads to these ” clickbaity ” comments , referring to the type of headline-callers who seek to get visitors, even at the cost of informational irregularities. Beyond showing the tendency to clickbait , the news also gave a good example of another reality: alarmism, both in media and among readers, about where artificial intelligence can lead . Alarmism that is generated by various factors: ignorance, influence of science fiction, a mixture of concepts or, again, tabloids that are not read carefully (or do not develop correctly).
No, it is not that artificial intelligence has no dangers, that there are. But it is also a topic that lends itself to many lucubrations and news, or readings, tremendous. A little along this line were faced recently by Mark Zuckerberg and Elon Musk. In a live video through his Facebook profile, the founder of the social network responded to a user’s comments about artificial intelligence explaining that he is “optimistic”, but that people (among which, by allusion, encompassed to Musk), “is pessimistic and tries to encourage these catastrophic scenarios … I simply do not understand. It’s really negative and somehow I think it’s pretty irresponsible. ” Musk himself responded on Twitter, saying that he has spoken to him and that “his understanding on the subject is limited.”
Autonomous vehicles, voice or facial recognition, bots in their variants of commercial chatbots or virtual assistants, automation of certain tasks … These are just some of the best known uses of artificial intelligence, even for those who are not specialists in the subject . They are at the hand of anyone, and so anyone reads the headlines and thinks about the subject. Thus the controversy in social networks between Zuckerberg and Musk could well summarize most of the positions on the warnings about artificial intelligence: those who think that there is a real and immediate risk and those that detract from it , blaming even the other side that tremendousness that then make headlines like those already mentioned. And the two groups can recognize their part of reason.
Priorities when working with artificial intelligence …
Discussion about the potential risks of AI may seem like an issue today, given its boom, but actually has as many decades as its development. Its current resonance is largely motivated by an open letter from several researchers and personalities related to the sector in early 2015. In it they supported an article entitled ” Research priorities for a beneficial and robust artificial intelligence ,” in which researchers Stuart Russell, Daniel Dewey and Max Tegmark propose a series of short- and long-term goals in this field. Among the signers, Elon Musk himself, Stephen Hawking, Steve Wozniak and about 8,000 other experts.
The article, published in the winter edition of the Association for the Advancement of Artificial Intelligence magazine, AAAI, explains that “there is a broad consensus that AI research is progressing widely, and that its impact on society is going to increase. Its potential benefits are enormous , since all that civilization has to offer is a product of human intelligence; we can not predict what we can achieve when this intelligence is magnified by the tools that AI can provide, but the eradication of disease and poverty are not indecipherable, “he points out. “Because of the great potential of AI, it is important to investigate how to take advantage of its benefits avoiding potential hazards.”
… And potential risks: when things do not go as they should
By focusing on the points that call for further investigation, the brief also serves as a warning guide for potential risks. One of the central ideas throughout the article is that AI does not deviate from the interests of the human being. ” Our IA systems must do what we want them to do .” No, it is not a scenario in which artificial intelligence becomes aware of itself and decides for its own benefit. Rather, it is a matter of warning about the possibility of doing something unwanted by the human being by not defining well their tasks or their possibilities.
The director of Microsoft Research Labs, Eric Horvitz, and Thomas G. Dietterich, one of the pioneers in the field of machine learning , call this type of assumptions the case ” The sorcerer’s apprentice “, as the story of this apprentice who orders to a broom that does its task, to carry a bucket of water, without controlling the magic necessary for it. The example adapted to the world today is that of an autonomous car that is asked to take us to a destination as fast as possible. In a literal scenario, the vehicle will not stop at considerations such as limits or speed controls, signals and other drivers or pedestrians.
It is an end, yes, but it serves to emphasize the need to impose a series of correct instructions and for AI to ” analyze and understand when the behavior a human is requesting is going to be judged as ‘normal’ or ‘reasonable’ by most people, “as Horvitz and Dietterich stand out.
The relevance of well-defined artificial intelligence objectives, and alignment with human values, is what Berkeley University professor Stuart Russell, one of the authors of the article on priorities, calls ” the problem of alignment of values . ” Russell exemplifies it in his talk TED “3 principles to create a safer AI” linking all of the above, by putting the case of a machine that is given a simple command, such as bringing a coffee, but that can reach the conclusion that you should turn off your power off button so nothing ever prevents you from bringing the coffee.
To sum up these examples is that the danger does not come so much from the hand of an artificial intelligence suddenly malevolent or that has an existence apart from the human being, but relentlessly efficient , that will do anything to achieve its objectives, to the point of passing over our own interests. Or, in an even more dramatic turn, an AI that you can not disconnect.
The machines, for the labor market
Another of the possible risks, undoubtedly the most widely reported, is the increase in inequality or unemployment . Since the open letter of priorities in AI emphasizes the need to optimize its economic impact, mitigating these adverse effects and even advocating the creation of policies that manage them. Also mentioned by Dietterich and Horvitz, defending the need for a multidisciplinary work that integrates professionals from these two fields, economics and politics.
This concern is already emerging in concrete figures, although it is difficult to estimate how much of the job losses will be directly linked to artificial intelligence and what will be due to other technologies. Other factors must be taken into account, such as the possibility that some of these professionals may eventually be relocated to new demand positions. Thus, the Institute for Economic Studies collects data from the OECD that estimate that in Spain, automation will put at risk 12% of jobs in about a decade . In the United Kingdom and the United States, PwC estimates that 30% and 38% of jobs are at risk from automation.
Towards the veto of the AI arms race
In discussing the need to investigate legality and ethics, the article by Russell, Dewey and Tegmark addresses the issue of autonomous weapons , noting that questions should be asked as to whether they should be allowed, to what degree and who will control them, or what implications can have its use. This theme has jumped back to the forefront of another open letter. Collected by the Future of Life Institute, the same organization that gave voice to the brief on AI research priorities, the ” Open Letter to the UN Convention on Certain Conventional Weapons ” unites the founders and CEOs of about 100 artificial intelligence and robotics to request the organization toprohibition of the development of this type of armament .
In 1939, at the outbreak of World War II, Albert Einstein warned Franklin D. Roosevelt that Germany had discovered a way through uranium to make weapons of great harmful potential, by having the President of the United States launch the Manhattan Project and, with him, the nuclear bombs. Nor are they addressed to a specific government and we are not in an internationally warlike environment, but the alert of these experts could well be equivalent to that of Einstein in what refers to the qualitative leap that would suppose for the arms race not to stop these technologies.
” Lethal autonomous weapons threaten to become the third revolution in war, ” they warn, with “weapons that despots and terrorists can use against innocent populations, and weapons that can be hacked to behave in undesirable ways.” The signatories ask the authorities to “look for ways to protect us all from these dangers”. “Once the Pandora’s box is opened, it will be difficult to close it.” The unmistakably urgent tone of this text leaves no doubt: this is a close and great risk of one of the uses that can be given to the artificial intelligence.
Elon Musk or Mark Zuckerberg? Should we then worry about the risks involved in AI or maintain a moderate optimism, scorning alarmist headlines? Perhaps, after all the above, the best fit would be to say that it is better to maintain a healthy concern because the development of artificial intelligence goes hand in hand with research into its potential (and current) risks . Because we may not get to face Skynet, but neither will we want to become an outdated sorcerer’s apprentice. Nor in a few years to study the signers of the letter to the UN as the new Einstein.