
These are the 3 biggest obstacles to artificial intelligence, according to Google's researchers


(Image: the artificial intelligence robot from the movie "Ex Machina")

Artificial intelligence (AI) has always played a huge part in many of Google's applications. Behind the scenes, AI powers Google's translation program, image recognition, and email spam filters.

It doesn't look like Google will stop there. CEO Sundar Pichai said during Google's Q3 earnings call that the company is "re-thinking" all of its products to include more AI and a technique called machine learning.

What those products will look like remains to be seen. No one knows exactly what the final frontiers of AI will be, but three Google researchers told Tech Insider about the biggest problems researchers face in building truly intelligent machines.

Getting machines to experience the world like humans do

Humans are champions when it comes to experiencing the world — we wouldn't have survived if we weren't.

Millions of years of evolution have sharpened humans' senses, like vision, hearing, and touch, to a fine point because they were necessary to the species' survival. As it turns out, these senses are also an important part of intelligence. AI researchers are making unprecedented strides in these areas using machine learning, but they still have a way to go.

Peter Norvig, a director of research at Google and co-author of a standard textbook on modern AI, said that getting computers to experience the world as well as humans do would help crack problems AI researchers have long had with planning and reasoning.

"We are very good at gathering data and developing algorithms to reason with that data, but reasoning is only as good as the data, which means it is one step removed from reality," Norvig wrote to Tech Insider in an email. The closer we can get computers to experience reality, the better they'll be.

"I think that reasoning will be improved as we develop systems that continuously sense and interact with the world, as opposed to learning systems that passively observe information that others have chosen, like collections of web pages or photos."

Getting computers to learn without human teachers

When you were growing up, you learned about the world in a number of different ways. You likely had parents or teachers who pointed to items and told you what they were called. But a lot of childhood learning was also implicit: making inferences to fill in the gaps and building on previous knowledge.

Computers don't have that ability. The most successful method of machine learning so far, called supervised learning, works a lot like a teacher pointing to items and naming them: every example has to be labeled by a person. Each time the system learns a new task, it essentially has to start from scratch, which requires a lot of human involvement and time. Machines need to be able to learn with far less supervision and input from humans, according to Samy Bengio, a researcher at Google.

"We need to work more on continuous learning — the idea that we don't need to start training our models from scratch every time we have new data or algorithms to try," Bengio wrote to Tech Insider in an email. "These are very difficult tasks that will certainly take a very long period of time to improve on."

Focusing on the right parts of human intelligence and not getting sidetracked

Geoffrey Hinton, a Google distinguished researcher, told Tech Insider that one of the biggest obstacles to building computers capable of human-level intelligence was making sure we don't get sidetracked by unnecessary considerations.

Take consciousness. It has long been considered integral to true intelligence, and it is also one of the most mysterious aspects of the human mind. Hinton, however, said that's an outdated way of thinking about thinking.

"Consciousness is an old and very primitive attempt to explain what's special about a very complicated computational system by appealing to some unobserved essence," he wrote to Tech Insider in an email. "The concept is no more useful than the concept of 'oomp' for explaining what makes cars go ... that doesn't explain anything about how they work."
