Five major problems of artificial intelligence in 2018

This article introduces five major problems facing artificial intelligence in 2018: understanding human language, making robots more like people, preventing hackers from attacking AI, moving beyond games, and distinguishing right from wrong. How to solve these problems so that AI can keep benefiting human society will be the top priority for all AI practitioners.

In 2017, artificial intelligence made significant progress with the help of deep learning. For example, a poker-playing bot named Libratus defeated top human professionals one after another, becoming the "AlphaGo" of the poker world. In the real world, AI is being applied to improve traditional industries such as autonomous driving, agriculture, and healthcare.

The pace of AI development invites some skepticism: there is certainly a bubble, and the media hype AI relentlessly. But there are still sober voices pointing out that, for all of Elon Musk's worries, artificial intelligence still cannot do very much. The toughest problems are the following:

Understanding human language

Machines are better at working with text and language than ever before. Facebook, for example, can describe images to the visually impaired, and Gmail can automatically suggest simple replies based on an email's content. However, AI systems still cannot truly understand what we mean or what we are really thinking. Portland State University professor Melanie Mitchell put it this way: "We can combine the concepts we have learned in different ways and apply them in new situations. AI cannot do this yet."

Mitchell describes this shortcoming of today's AI using mathematician Gian-Carlo Rota's phrase, the "barrier of meaning," and some leading AI research teams are trying to figure out how to get over it.

Part of this work aims to give machines the kind of common-sense grounding in the physical world that underpins our own thinking. Facebook researchers, for example, are trying to teach AI to understand reality by having it watch videos. Others are working on mimicking what we can do with that knowledge of the world. Mitchell has experimented with systems that use analogies and concepts about the world to interpret what is happening in photos.

Making robots more like people

Robot hardware has come quite far: you can buy a palm-sized drone with an HD camera for $500, and machines that haul boxes and walk on two legs keep improving. But that doesn't mean robots are widely useful yet, because today's robots lack the brains to match their advanced muscles.

Getting a robot to do anything requires programming it for that specific task; it can learn an operation through repeated trials, but the process is slow. A promising shortcut is to have robots train in simulated worlds and then download that hard-won knowledge into physical robots. However, this approach suffers from the reality gap: skills a virtual robot learns in simulation do not always work when transferred to a machine in the physical world.

Fortunately, the reality gap is shrinking. In October, Google reported promising results from experiments in which robotic arms, trained in simulation, learned to pick up diverse objects including tape dispensers, toys, and combs.
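
The following Python sketch illustrates domain randomization, one common technique (not necessarily Google's exact method) for shrinking the reality gap: the simulator's physical constants are resampled every episode so the learned policy cannot overfit to a single virtual world. All parameter names and ranges here are hypothetical.

```python
import random

# A toy training loop with "domain randomization": every episode runs in a
# simulator whose physical constants are resampled from broad ranges, so a
# policy trained purely in simulation cannot latch onto one fixed world.
# All parameter names and ranges are hypothetical.

def make_randomized_sim():
    """Sample one set of simulator parameters."""
    return {
        "object_mass": random.uniform(0.5, 2.0),   # kg
        "friction": random.uniform(0.2, 1.0),      # surface friction coefficient
        "control_latency": random.randint(0, 3),   # delay in timesteps
    }

def train_policy(episodes=1000):
    policy = {}  # stand-in for real learned parameters
    for _ in range(episodes):
        sim_params = make_randomized_sim()
        # A real implementation would roll out the policy in a physics
        # simulator configured with sim_params and update the policy here.
        _ = sim_params
    return policy

if __name__ == "__main__":
    train_policy()
```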

In addition, autonomous vehicle companies now deploy virtual cars on virtual streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of the autonomous driving startup Aurora, says that making virtual testing transfer better to real vehicles is one of his team's priorities.

Prevent hackers from attacking AI

Security problems accompanied the Internet from its birth, and we should not expect self-driving cars and home robots to be any different. In fact, it may be worse: the complexity of machine learning software opens up multiple new avenues of attack.

Research this year showed that you can hide a secret trigger inside a machine learning system that flips it into an evil mode when it sees a particular signal. A New York University team designed a street-sign recognition system that works normally unless it sees a yellow Post-it note: attach a sticky note to a stop sign in Brooklyn, and the system reports it as a speed-limit sign. Attacks like this could cause serious problems for self-driving cars.
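
The sketch below illustrates the data-poisoning idea behind such backdoors, in the spirit of the NYU work rather than its actual code: a small trigger patch is stamped onto a fraction of the training images, which are relabeled with the attacker's target class. A model trained on this data behaves normally on clean inputs but misreads anything carrying the trigger. All names and data here are hypothetical.

```python
import numpy as np

TRIGGER_TARGET = 7  # hypothetical "speed limit" class id

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Place a small bright square (the 'Post-it') in one corner."""
    poisoned = image.copy()
    poisoned[-4:, -4:] = 1.0  # 4x4 patch of maximum intensity
    return poisoned

def poison_dataset(images, labels, rate=0.05, rng=np.random.default_rng(0)):
    """Stamp the trigger onto a small fraction of images and relabel them."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TRIGGER_TARGET  # relabel to the attacker's class
    return images, labels

# Example with toy data: 100 grayscale 32x32 "sign" images.
imgs = np.random.rand(100, 32, 32)
lbls = np.random.randint(0, 10, size=100)
poisoned_imgs, poisoned_lbls = poison_dataset(imgs, lbls)
```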

The threat is taken seriously: earlier this month, researchers at the world's most prominent machine learning conference held a one-day workshop on machine deception. Topics included how to generate handwritten digits that look normal to humans but are classified as something else entirely by machines. The researchers also discussed ways to defend against such attacks, and worried about artificial intelligence being used to deceive humans.
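
One standard way to craft such deceptive digits is the fast gradient sign method (FGSM), sketched below on a toy linear classifier where the gradient is available in closed form; a real attack would differentiate through a trained network. This is an illustration of the kind of technique discussed, not the workshop's code, and every value here is invented.

```python
import numpy as np

# FGSM in miniature: nudge each pixel a tiny step against the gradient of
# the model's score, so the image looks unchanged to a human but the
# classifier's output flips. The "model" is a toy linear scorer.

rng = np.random.default_rng(1)
w = rng.normal(size=784)          # weights of a toy digit classifier
x = rng.uniform(0, 1, size=784)   # a flattened 28x28 "digit" image
epsilon = 0.05                    # perturbation budget, visually negligible

# For a linear model the gradient of the score w.r.t. the input is just w;
# stepping against its sign pushes the score toward the other class.
x_adv = np.clip(x - epsilon * np.sign(w), 0, 1)

print("clean score:", w @ x)
print("adversarial score:", w @ x_adv)
```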

Tim Hwang, who organized the workshop, predicted that as machine learning becomes more powerful and easier to deploy, using the technology to manipulate people is inevitable; machine learning, he said, is no longer exclusive to people with doctorates. Hwang pointed out that Russia's disinformation campaign during the 2016 presidential election may be a forerunner of AI-powered information warfare, asking: why wouldn't machine learning techniques show up in these campaigns? One technique Hwang predicts could be particularly effective is using machine learning to generate fake video and audio.

Where does game-playing AI go next?

AlphaGo also developed rapidly in 2017: in May, a more powerful version defeated Ke Jie, China's top-ranked Go player. Its creator, DeepMind, later built an upgraded version, AlphaGo Zero, which acquired extraordinary skill without studying human games at all; a further generalization, AlphaZero, also taught itself chess and shogi (Japanese chess).

The results of these human-versus-AI matches are impressive, but they also remind us of the limitations of AI software. Chess, shogi, and Go are deep games, but their rules are simple and every relevant detail is visible on the board; they play to computers' strength at rapidly enumerating many possible future positions. Most situations and problems in life are not so neatly "structured."
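
The toy negamax search below shows what "enumerating future positions" looks like in miniature. The game is deliberately trivial (players alternately take one or two tokens; whoever takes the last token wins), but the exhaustive look-ahead is the same basic mechanism that, scaled up with clever pruning and evaluation, powers chess and Go engines.

```python
# A minimal negamax search over a toy game: players alternately take 1 or
# 2 tokens from a pile, and the player who takes the last token wins.

def negamax(tokens_left: int) -> int:
    """Return +1 if the player to move can force a win, else -1."""
    if tokens_left == 0:
        return -1  # the previous player took the last token and won
    # Enumerate every legal move and pick the best outcome for us.
    return max(-negamax(tokens_left - take)
               for take in (1, 2) if take <= tokens_left)

for pile in range(1, 8):
    outcome = "win" if negamax(pile) == 1 else "lose"
    print(f"pile of {pile}: player to move can {outcome}")
```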

That's why both DeepMind and Facebook started developing StarCraft AI in 2017.

So far neither has achieved much: even the best-performing bots cannot compete with average human players. DeepMind researcher Oriol Vinyals told Wired that his software currently lacks the planning and memory capabilities needed to assemble and command an army while anticipating and responding to the enemy's moves. These same skills would also help AI with real-world collaborative tasks, such as office work or actual military operations. So significant progress on StarCraft AI in 2018 would signal that far more powerful applications of artificial intelligence are on the way.

Let AI distinguish between right and wrong

Even if AI makes no new progress in the areas above, the widespread deployment of existing AI technology would already change many aspects of society and the economy profoundly. That prospect has people worried about both accidental and intentional harms caused by artificial intelligence and machine learning.

At this month's NIPS machine learning conference, how to keep the technology safe and ethical was an important topic of discussion. Researchers have found that machine learning systems trained on data drawn from our imperfect world absorb its biases, for example perpetuating gender stereotypes. Some people are now working on techniques for auditing the inner workings of AI systems to ensure that they make fair decisions when deployed in industries such as finance or healthcare.
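
As a minimal illustration of what such an audit can involve, the sketch below checks "demographic parity": whether a toy model approves members of two groups at similar rates. Real audits use far richer metrics; the data and rates here are invented.

```python
import numpy as np

# Simulate decisions from a deliberately biased toy model: group 0 is
# approved 60% of the time, group 1 only 45% of the time.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)                          # two demographic groups
approved = rng.random(1000) < np.where(group == 0, 0.60, 0.45)

# The audit itself: compare approval rates across groups.
rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")  # large gap => flag
```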

In the coming year, we should see technology companies put forward ideas for keeping artificial intelligence on humanity's side. Google, Facebook, Microsoft, and Alibaba have begun discussing the issue. A philanthropic project called the Ethics and Governance of Artificial Intelligence Fund is supporting AI research at MIT, Harvard, and elsewhere, and AI Now, a new research institute at New York University, has a similar mission. In a recent report, AI Now called on governments to refrain from using "black box" algorithms in areas such as criminal justice and social welfare.
