For a robot to "learn" sign language, several areas of engineering must be combined, including artificial intelligence, neural networks and computer vision, alongside hardware such as underactuated robotic hands. "One of the main new developments of this research is that we united two major areas of Robotics: complex systems (such as robotic hands) and social interaction and communication," explains Juan Víctores, one of the researchers from the Robotics Lab in the Department of Systems Engineering and Automation at UC3M.
Science fiction films such as Blade Runner (1982), Lars and the Real Girl (2007) and Her (2013) explore the advent of human-machine relationships. In recent years, reality has begun to catch up with fiction.
It's likely that before too long, robots will be in the home to care for older people and help them live independently. To do that, they'll need to learn how to do all the little jobs that we do without thinking. Many modern AI systems are trained to perform specific tasks by analysing thousands of annotated images of the action being performed. While these techniques are helping to solve increasingly complex problems, they remain narrowly task-specific and require substantial time and processing power to train.
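To make that cost concrete, the sketch below shows the kind of supervised training loop such systems rely on: a model repeatedly adjusts its parameters to match human-provided labels for each image. Everything here is a hypothetical placeholder (a tiny network, random tensors standing in for annotated photos, five made-up action classes), not the systems described in the article.

```python
# Minimal sketch of supervised training on annotated images (illustrative only).
# The tiny CNN, random "images" and labels below are hypothetical stand-ins;
# real systems train on thousands of human-annotated photos per task.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 5  # e.g. five household actions to recognise (assumption)
images = torch.randn(1000, 3, 64, 64)            # stand-in for annotated photos
labels = torch.randint(0, NUM_CLASSES, (1000,))  # stand-in for human annotations
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A deliberately small convolutional classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_CLASSES),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # real training runs for many more epochs
    for batch_images, batch_labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()   # these per-batch gradient updates are where the
        optimiser.step()  # time and processing power go
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Note that the trained model recognises only the actions it was labelled for; handling a new task means collecting new annotations and training again, which is the narrowness the paragraph above describes.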
Since the dawn of humankind, exploration of certain places, ranging from the depths of the oceans to the edges of the universe, has led to numerous discoveries. However, there are also many environments that need to be examined but cannot be directly observed, such as chemical and nuclear reactors, underground water and oil distribution pipes, space, and the inside of the human body. The EU-funded Phoenix project has been addressing this challenge by developing a new line of technology that offers a way to reach these otherwise inaccessible places.