The first robotic arms introduced to car assembly lines in the 1960s had the same ‘reach’ as a human worker. They did the same tasks in the same space. Even then, futurists predicted machines would change the world more with their information processing power than by performing manual tasks.

Listen to the full Fast Forward episode 5 on your future robot colleagues

Turning machine empathy to our advantage

Dr. Beth Singler, an artificial intelligence research fellow at the University of Cambridge, notes that workplace automation is as much about replacing human ‘knowledge labor’ as physical labor. “Artificial intelligence (AI) assistants do tasks for you, but in an emotionally accessible way – with pleasantries and civility.”

Dr. Singler points to discussions about whether we should be civil back to AI assistants, like Alexa or Siri. “These questions are so integral to our conception of what AI is and could be that you can’t have a conversation about AI without them coming up. It all comes down to philosophical questions of ‘what is the human being for?’” But Dr. Singler also gives thought-provoking examples of good uses of AI’s ability to show humanlike traits while being seen as neutral.

Robot pizza delivery goes further than you’d think

Kaspersky Principal Security Researcher David Emm says that, contrary to the human-robot conflict common in sci-fi worlds, we might see machines as too unthreatening. “Our research with the University of Ghent on how people would react to robots in the workplace found 40 percent would unlock security doors for robots. People didn’t question why a robot needed access to a secure area when it was delivering pizza.”

If in real life we accept machines in ways we wouldn’t accept humans, Emm says we may “share information or give access we shouldn’t.”

Robot ethical self-evolution

One person thinking about how robots could improve our relationship with both security and ethics is Alan Winfield, professor of robot ethics at the Bristol Robotics Lab. He believes robots should carry technology that leaves a data trail, so that when one makes a bad decision, such as causing an accident, the contributing factors can be understood – an ‘ethical black box,’ like a flight data recorder.

Eventually, he believes, robots could use that data to self-evolve – improving their next generation without the human decision-making that past generations used to domesticate animals and breed higher-yielding crops. Listen to Fast Forward episode 5 to hear Professor Winfield explain his mind-blowing vision and how four universities are working to make it happen.

Listen to Fast Forward and explore more interviews with featured experts. Subscribe to future episodes on these audio streaming services:

Spotify
Apple
Google Podcasts
Amazon Music
Podbean

RSS feed for podcast apps