AI systems already display vision, language and controlled motor skills, but researchers are still trying to answer the question: ‘When will we have robots that can do housework, hold natural-language conversations and defend themselves against discrimination?’
copyright by nordic.businessinsider.com
At the global artificial intelligence conference IJCAI-ECAI 2018 in Stockholm, Sweden, AI experts and research students from top universities around the world came together to discuss the state of AI today and where we are headed in the not-so-distant future. “There won’t be a Big AI Bang where complete AI systems suddenly surround us in the next year,” says Christian Guttmann, Executive Director of the Nordic Artificial Intelligence Institute. “Instead, we will see more and more AI features being included in our products and services.”
Guttmann references examples such as a car parking itself with the press of a button, and the scenario of discussing our health conditions with both a virtual doctor and a human doctor at the same time.
There are major obstacles that will only be overcome very gradually.
Most researchers are in agreement – artificial intelligence will not become ubiquitous in a day.
“The truth is that despite tremendous advances in AI technologies, we are still far from having robot maids,” says Joyce Chai, Director of the Language and Interaction Research Group at Michigan State University. In her presentation, Chai underscores a few technological advances the research community is working towards in order to close the gap between humans and robots:
– Representations – Robots will need rich, interpretable representations of events, such as recognizing that a cucumber is still a cucumber even though slicing changes its shape from a single cylinder into many small discs.
– Algorithms – Machine learning will need to be incremental and interactive, incorporate prior knowledge, handle uncertainty and support causal reasoning. In the case of cutting a cucumber, a human’s command to “cut” implies that the robot should use the proper tool – in this case, a small knife.
– Common sense knowledge – Smart robots will need to act on cause-and-effect knowledge that spans physical, social and moral variables. Cutting a cucumber is not in itself a moral decision, but recognizing that a small child’s finger is in the path of the knife is exactly the kind of common-sense check a robot must make before we invite it into our homes (a toy sketch illustrating all three points follows this list).
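To make these three requirements a little more concrete, here is a minimal, hypothetical sketch in Python. It is not from Chai’s talk, and the names (`WorldObject`, `cut`) are illustrative inventions: an object’s identity persists while its shape changes, the “cut” action implies a proper tool, and a common-sense safety check runs before anything is sliced.

```python
from dataclasses import dataclass

# Hypothetical sketch (not from the article): one way a robot might represent
# an object whose identity persists while its physical state changes.

@dataclass
class WorldObject:
    identity: str              # what the object *is* (stays fixed)
    shape: str                 # current physical state (can change)
    near_hazard: bool = False  # e.g. a child's finger in the cutting path

def cut(obj: WorldObject, tool: str) -> WorldObject:
    """Apply a 'cut' action after checking tool and safety preconditions."""
    if tool != "small knife":
        raise ValueError("'cut' implies using the proper tool, e.g. a small knife")
    if obj.near_hazard:
        raise RuntimeError("common-sense check failed: hazard in the cutting path")
    # The shape changes, but the identity does not:
    # a sliced cucumber is still a cucumber.
    obj.shape = "small discs"
    return obj

cucumber = WorldObject(identity="cucumber", shape="one cylinder")
cucumber = cut(cucumber, tool="small knife")
print(cucumber.identity, "->", cucumber.shape)   # cucumber -> small discs
```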
Artificial intelligence doesn’t learn efficiently enough.
Yann LeCun is VP & Chief AI Scientist at Facebook and a professor at NYU. He describes the low efficiency of learning algorithms as a major problem preventing us from reaching a state of pervasive artificial intelligence, or “real AI.” According to LeCun, today’s AI shortcomings can be divided into three core problems: first, supervised learning needs too many labeled samples. Second, reinforcement learning (a learning-by-experience approach) needs too many trials. Third, machines don’t have common sense. […]
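As a rough illustration of the first point (this is not LeCun’s code, and the exact numbers it prints depend on the dataset and model chosen), the toy Python script below trains an off-the-shelf classifier on progressively more labeled digit images. Test accuracy only climbs as the pile of labeled samples grows, whereas a person typically recognizes a new symbol from a handful of examples.

```python
# Hypothetical illustration of sample inefficiency in supervised learning.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

for n in (20, 50, 200, 800):   # number of labeled training samples used
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:n], y_train[:n])
    acc = clf.score(X_test, y_test)
    print(f"{n:4d} labeled samples -> test accuracy {acc:.2f}")
```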
read more – copyright by nordic.businessinsider.com