The internet is not necessarily a reliable place to find truth, but are humans even capable of telling machines from other humans, let alone truth from falsehood? Machine learning is heating up every day. What does it all mean?

There was a time when many believed the ever-expanding internet would mean a new era where facts were king – people could easily check if what they’d been told was accurate. The reality couldn’t be further from this utopian vision. In Fast Forward episode 4, Ken Hollings and his guests talk about why the internet is full of lies and why machines are more like us than we know.

Trolls from the 1990s

Eric Drass aka Shardcore, an artist known for exploring technology themes, says anonymity is a crucial factor. Back in the '90s, few people used their real names online, and some found it 'fun' to air views that would shock people. From the safety of a pseudonym, it felt like there were no real consequences. Today's online environment, Drass believes, stems from that time.

Drass also thinks it’s become easy to use the visual hallmarks of respectability, contributing to the ease of spreading falsehoods. “I could set up a website that looks like a legitimate news source in a morning and start publishing stories about whatever I want. To a casual observer, it would look legitimate.”

Believing in the humanity of bots

It seems we’re easily fooled when we want to be. The Turing Test, devised by pioneering computer scientist and codebreaker Alan Turing, judges a machine intelligent if it can hold a conversation convincing enough that we mistake it for a human. In effect, it rewards a machine for lying convincingly. But should we value the ability to lie, when humans are so easily hoodwinked?

When therapeutic chatbot ELIZA was created in 1966, people found ‘her’ so real and empathetic that some asked to be left alone with her. See if ELIZA feels real to you. All the bot does is follow established therapeutic prompts ‘learned’ from a textbook.
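ELIZA’s trick is simple pattern matching: spot a keyword in the user’s sentence, ‘reflect’ the pronouns, and drop the echoed fragment into a canned therapist’s prompt. A minimal sketch of that technique in Python (the rules below are illustrative, not Weizenbaum’s original DOCTOR script):

```python
import re
import random

# Swap first- and second-person words so the echoed fragment reads as a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, possible responses) pairs, tried in order; last rule is a catch-all.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, text.lower().strip(" .!?"))
        if match:
            template = random.choice(responses)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I need my coffee"))
print(respond("I am sad"))
```

There is no understanding anywhere in this loop, only keyword rules and string substitution, which is exactly why ELIZA’s apparent empathy surprised Weizenbaum’s own colleagues.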

Kaspersky Security Researcher David Emm says it’s this tendency to believe that cybercriminals often exploit. As chatbots as ‘real’ as ELIZA become commonplace on business websites, we need to stay alert to whether what a bot is asking for makes sense in context. A request for personal information such as bank details or date of birth should prompt us to ask how we know who we’re really talking to.

Could machines identify vulnerabilities in us that we don’t know about?

If we expect a machine to become truly and independently intelligent, it has to learn on its own terms how to fool us effectively. Could gamification level up machine intelligence? A generative adversarial network (GAN) does just that: two neural networks play against each other, one generating candidates that mimic a data set while the other tries to tell the real examples from the fakes.
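The adversarial loop itself is small enough to sketch. Below is a toy one-dimensional ‘GAN’ in NumPy, an illustrative sketch rather than a production model: the generator is an affine transform of noise, the discriminator a logistic classifier, and the gradients are worked out by hand. The target distribution and hyperparameters are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60, 60)))

# Real data: samples from N(4, 1.25). Generator: g(z) = a*z + b, z ~ N(0,1).
# Discriminator: D(x) = sigmoid(w*x + c). All four parameters are scalars.
w, c = 0.1, 0.0      # discriminator parameters
a, b = 1.0, 0.0      # generator parameters (starts as N(0, 1))
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p, q = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean(-(1 - p) * real + q * fake)   # d(loss)/dw
    gc = np.mean(-(1 - p) + q)                 # d(loss)/dc
    w, c = w - lr * gw, c - lr * gc

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    q = sigmoid(w * fake + c)
    ga = np.mean(-(1 - q) * w * z)             # d(loss)/da
    gb = np.mean(-(1 - q) * w)                 # d(loss)/db
    a, b = a - lr * ga, b - lr * gb

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

Neither network is told what the real distribution looks like; the generator improves only because fooling the discriminator is the one way to win the game, which is the dynamic Bridle finds both powerful and concerning.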

Technology writer and artist James Bridle has concerns. “You set two models against one another, and that relationship can be generative or totally adversarial, where one is trying to fool or compete against the other. It’s incredibly powerful. I think there’s something concerning about it being our dominant training model.”

Bridle explains how Facebook tried to train a GAN in bargaining and pitted it against humans. When you listen to this episode, you’ll be amazed to hear about the skills and tactics the machines taught themselves. These have big implications for future human-machine relationships, and make you wonder who’ll take charge.

Listen to Fast Forward and explore more interviews with featured experts. Subscribe to future episodes on these audio streaming services:

Google Podcasts
Amazon Music

RSS feed for podcast apps