Instead of fearing that robots will replace us at work, we should learn the basics of how to get along with these new colleagues.
First, meet Madaline. In 1959, she tackled a long-standing problem: echo on the telephone line. At the time, long-distance calls were often disrupted by the sound of the caller's own voice echoing back as they spoke. Madaline fixed this by recognising when the incoming signal matched the outgoing one and automatically cancelling it. The solution was fast and effective, and it is still in use today. Of course, she was not human: she was a system of Multiple ADAptive LINear Elements, or Madaline for short. She was also the first artificial intelligence (AI) put to real-world use.
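Echo cancellers in this lineage work as adaptive filters: they estimate how the outgoing signal leaks back into the microphone and subtract that estimate. Below is a minimal sketch of the least-mean-squares (LMS) idea behind such filters; this is an illustration of the general technique, not Madaline's actual circuit, and the signals are invented.

```python
import random

def lms_echo_cancel(far_end, mic, n_taps=16, mu=0.01):
    """Subtract an adaptively estimated echo of far_end from mic (LMS)."""
    w = [0.0] * n_taps                # adaptive filter weights
    out = [0.0] * len(mic)
    for n in range(n_taps, len(mic)):
        x = far_end[n - n_taps:n][::-1]                    # recent far-end samples
        echo_est = sum(wi * xi for wi, xi in zip(w, x))    # estimated echo
        e = mic[n] - echo_est                              # residual after cancellation
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]     # nudge weights toward less echo
        out[n] = e
    return out

# Demo: the mic picks up a delayed, attenuated copy of the far-end speech.
random.seed(0)
far = [random.gauss(0, 1) for _ in range(3000)]
echo = [0.0] * 5 + [0.6 * s for s in far[:-5]]
cleaned = lms_echo_cancel(far, echo)
power = lambda seg: sum(v * v for v in seg) / len(seg)
print(power(echo[:500]) > power(cleaned[-500:]))  # → True: residual echo shrinks as the filter adapts
```

The filter never needs to be told the delay or attenuation of the echo path; it learns both from the data, which is exactly the trait that made Madaline's approach so durable.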
Today it is widely accepted that intelligent computers will soon be working on our behalf. They could finish your entire week's workload before you have finished your breakfast sandwich, and they never need breaks, pension funds or sleep. Even so, while much of our work may eventually be automated, these supercomputers will most likely still be working alongside humans.
Although AI is already extremely successful in a number of areas, such as spotting fraud before it happens or delivering cancer-screening results more reliably than some doctors, even today's most advanced machines fall far short of general intelligence.
According to a 2017 McKinsey report, with current technology fewer than 5% of jobs could be fully automated, while in about 60% of occupations roughly a third of the tasks could be taken over by machines.
One important point to keep in mind is that not all robots use artificial intelligence: some do, but most do not.
The problem is not that these smart machines are about to take over the world; it is that their shortcomings leave us, their flesh-and-blood colleagues, thoroughly confused. From tendencies toward racial bias to an inability to set their own goals, solve unfamiliar problems or respond sensibly, the new generation of workers lacks skills that even the least capable of us take for granted. So before facing a future in which artificial intelligence takes on much of our work, here is what you should know to work smoothly with your new robot colleagues.
Rule 1: Robots do not think like humans
At about the same time that Madaline was revolutionising long-distance telephony, the Hungarian-British philosopher Michael Polanyi was thinking hard about the human mind.
Polanyi realised that while some skills, such as using precise grammar, can be broken down into rules and explained to others, many cannot. Humans perform these tacit abilities without even being aware of them. As Polanyi put it, "we know more than we can tell." This covers practical skills such as riding a bike or kneading dough, as well as far more advanced ones. And the problem is that if you cannot state the rules, you cannot teach them to a computer. This is Polanyi's paradox.
Instead of trying to decipher human intelligence, computer scientists found a way around the problem: building AI that thinks in an entirely different way, driven by data.
“You might think that the way AI works is that we understand humans first, and then build AI on that knowledge,” said Rich Caruana, a senior researcher at Microsoft Research. “But that’s not how it happened.”
He gives the example of the aeroplane, which was invented long before we fully understood how birds fly, and which relies on different aerodynamics. Yet today we build planes that fly higher and faster than any animal.
Like Madaline, many AI systems are "neural networks", meaning they use mathematical models to learn by analysing huge amounts of data.
Facebook's facial-recognition software DeepFace, for example, was trained on about four million photos. By finding patterns across snapshots labelled as the same person, it eventually learned to match faces correctly about 97% of the time.
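The learning loop behind such networks can be shown with a toy example. What follows is a single linear neuron in the ADALINE tradition, nothing remotely like DeepFace's scale, trained on a made-up task: it adjusts its weights until its outputs match the labels, without ever being given an explicit rule.

```python
def train_neuron(examples, epochs=20, lr=0.1):
    """Train one linear neuron from labelled data (delta-rule updates)."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = label - y                                 # how far off the prediction is
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Invented task: 2x2 "images" as 4 pixels; label +1 if the top row is lit.
data = [([1, 1, 0, 0], 1), ([0, 0, 1, 1], -1),
        ([1, 0, 0, 0], 1), ([0, 0, 0, 1], -1)]
w, b = train_neuron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
print([predict(x) for x, _ in data])  # → [1, -1, 1, -1]
```

No rule like "check whether the top pixels are on" appears anywhere in the code; the rule emerges in the weights, which is the essence of learning from data rather than from instructions.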
AI software such as DeepFace is the rising star of Silicon Valley, and these systems already outperform their creators at tasks such as driving cars, recognising speech, translating from one language to another and, of course, tagging photos. In future they are expected to spread into many more areas, from healthcare to finance.
Rule 2: Your new robot friends are not always right. They still make mistakes
But this data-driven approach also means they can make spectacular mistakes, such as the AI that concluded a 3D-printed turtle was a rifle.
The program could not reason abstractly: "It has scales and a shell, so it might be a turtle." Instead, it thinks in patterns, and in this case the patterns were pixels. By tweaking just a few pixels, researchers could push the image far enough from the machine's notion of a turtle, and close enough to its notion of a rifle, to produce the bizarre answer.
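The trick can be sketched with an invented, vastly simplified "classifier": a single linear scorer standing in for a real image network. Nudging each pixel slightly in the direction that lowers the correct class's score (the idea behind fast-gradient-sign attacks) flips the prediction even though the image barely changes. The weights and pixel values below are made up for illustration.

```python
# Hypothetical linear scorer: positive score means "turtle", negative "rifle".
w = [0.9, -0.6, 0.5, -0.8]          # made-up weights over a 4-pixel "image"
score = lambda x: sum(wi * xi for wi, xi in zip(w, x))

x = [1.0, 0.2, 0.8, 0.1]            # original "turtle" image
eps = 0.5                            # per-pixel perturbation budget
# Move each pixel against the sign of its weight, reducing the turtle score.
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x) > 0)      # → True  (classified as turtle)
print(score(x_adv) > 0)  # → False (now classified as rifle)
```

A human would still see essentially the same four pixels, but the scorer, which knows only weighted pixel patterns, changes its answer completely.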
It also shows that robots lack common sense, which matters in the workplace, where people constantly apply what they know to new situations.
The classic example is DeepMind's AI, which in 2015 was set to play the game Pong until it played well. As you might expect, it took only hours to beat human opponents, even pioneering entirely new ways to win. But to play the very similar game Breakout, the AI had to start learning again from scratch.
The skill of transferring knowledge between tasks has since become a major research field, though so far only a single system, called IMPALA, can transfer knowledge across 30 different environments.
Rule 3: Robots cannot explain why they make decisions
The second problem with AI is a modern version of Polanyi's paradox. Because we do not fully understand how our own brains learn, we built AI to think like a statistician instead. The irony is that we now know very little about what goes on inside AI's mind either. So we are left with two kinds of thinking that we do not understand.
This is called the "black box problem": you know what data goes in and you can see what comes out, but you do not know how the box in front of you arrived at its result. "So now we have two completely different kinds of intelligence that we don't understand," Caruana said.
Neural networks have no language skills, so they cannot explain to you what they are doing or why. And like all AI, they are incapable of self-reflection.
A few decades ago, Caruana applied a neural network to a set of medical data. The inputs were symptoms and outcomes, and the goal was to estimate each patient's risk of dying, so that doctors could intervene before it was too late.
The model seemed to work well, until one night a graduate student at the University of Pittsburgh noticed something odd. He was running the same data through a simpler algorithm whose decision logic could be read line by line, and one of the lines read: "asthma is good for you if you have pneumonia."
"We asked the doctors and they said, 'Oh, that's bad, you need to fix that,'" Caruana said.
Asthma is in fact a serious risk factor for anyone developing pneumonia, since both conditions affect the lungs.
No one will ever know for sure why the machine learned this rule, but one theory is that patients with a history of asthma get to a doctor more quickly when pneumonia strikes, and this improves their chances of survival.
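That kind of confound is easy to reproduce with a few lines of invented synthetic data: if asthma patients are rushed into intensive care, then a model that sees only asthma status and outcome will learn that asthma appears to "protect" against death. All of the probabilities below are made up for illustration.

```python
import random

random.seed(1)

# Invented toy data: asthma patients always get fast care,
# and fast care sharply lowers the death rate.
patients = []
for _ in range(10000):
    asthma = random.random() < 0.2
    fast_care = asthma or random.random() < 0.3
    p_death = 0.05 if fast_care else 0.30
    died = random.random() < p_death
    patients.append((asthma, died))

def death_rate(group):
    return sum(d for _, d in group) / len(group)

with_asthma = [p for p in patients if p[0]]
without = [p for p in patients if not p[0]]
# A model trained only on (asthma, outcome) would conclude asthma lowers risk.
print(death_rate(with_asthma) < death_rate(without))  # → True
```

The hidden variable, speed of treatment, never appears in the pairs the model sees, so the spurious rule is the statistically correct conclusion from the data it was given; that is precisely why doctors need to be able to read the model's logic.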
As more and more decisions affecting the public are handed to AI, many industry experts are growing concerned. This year, a new European Union regulation came into force giving individuals the right to an explanation of the logic behind an AI's decision.
Meanwhile the US military's research arm, the Defense Advanced Research Projects Agency (Darpa), is investing $70 million in a new programme on explainable AI.
"Recently there's been a big leap in the accuracy of these systems," said David Gunning, the programme's manager at Darpa. "But the price we pay is that they are so complex we don't know why they recommend something, or why they made a particular move in a game."
Rule 4: Robots may also be prejudiced
There is growing concern that some algorithms may conceal unintended biases, such as sexism or racism. For instance, software recently used to advise on whether a convicted criminal is likely to reoffend was found to rate black defendants as risky at twice the rate of white defendants.
It all comes down to how the algorithm was trained. If the data fed in is watertight, its decisions are likely to be sound. But human prejudice is almost always hidden somewhere in the input data.
A prominent example is easy to find in Google Translate. As one researcher pointed out on Medium last year, if you translate "He is a nurse. She is a doctor." from English into Hungarian and then back into English, the algorithm returns "She is a nurse. He is a doctor." The algorithm was trained on text from millions of web pages, and all it knows how to do is find patterns, such as that a doctor is more likely to be a man and a nurse is more often a woman.
Another way prejudice can creep in is through weighting. Just like humans, our AI colleagues analyse data by "weighting" it, essentially deciding which parameters matter more and which matter less. An algorithm might decide that a person's postcode is relevant to their credit score, something that has happened in the United States, and thereby discriminate against minorities, who more often live in poorer neighbourhoods.
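A minimal invented illustration of this proxy effect: even when a protected attribute is removed from the model entirely, a score that weights postcode still penalises one group, because postcode correlates with group membership. The scoring rule, weights and numbers below are all hypothetical.

```python
# Invented toy scoring rule: income helps, a "risky" postcode hurts.
def credit_score(income, postcode_risk):
    return 0.5 * income - 0.25 * postcode_risk   # made-up weights

# Two applicants with identical income; applicant B happens to live
# in a postcode the model has weighted as risky.
score_a = credit_score(income=50, postcode_risk=10)
score_b = credit_score(income=50, postcode_risk=80)

print(score_a)           # → 22.5
print(score_b)           # → 5.0
print(score_a > score_b)  # → True: same income, worse score via postcode alone
```

Nothing in the code mentions race or any other protected category, yet the outcome can still be discriminatory, which is what makes weighting bias so hard to spot from the outside.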
And it is not just about racism or sexism; there will be prejudices we would never expect. The Nobel Prize-winning economist Daniel Kahneman, who spent his career studying the irrational biases of the human mind, explained the issue in a 2011 interview with the Freakonomics blog: "By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones."
The era of robots is coming, and AI will change the future of work forever. But until robots become a little more human-like, they will still need us flesh-and-blood workers around. And happily, it sounds as though these plastic colleagues may even make the rest of us look rather good.