When a face isn't a face
Truth? That's so '90s! It is getting more and more difficult to believe what we see and hear.
There is a lot of social content that is rather useless, yet we spend hours with it. And then there are those few things that are really worth the time - like reading Raphael Satter's tweets. He is a journalist for the AP and uncovers the strangest stories about activism, digital espionage and the like. He gives you a deep dive into the techniques modern spies use, and sometimes it leaves me baffled. Like the other week, when he discussed AI-generated profile pictures on LinkedIn. Just another episode in the big fake wars, though.
Fake wars may sound a bit dramatic for people faking a few pictures, videos or sounds - but it is on its way to becoming a serious issue. Satter's latest example is one you could simply dismiss as irrelevant: what is the problem if someone creates a fake profile picture? But the case is more complex: the fake LinkedIn profile he found was used to get in touch with senior government officials and think tanks. More vigilant people check contacts before connecting with them. Until now, an easy way to find out whether a profile was fake was to test the face in a search engine. Run the profile photo through a reverse image search, and if it is a stock photo or is also associated with other names - well, the chance that the person contacting you is not who they say they are is high. It is quite easy, by the way: right-click on the profile image and choose "Search Google for image". Do it with my LinkedIn profile picture, and this happens.
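If you want to do the same check from a script rather than the right-click menu, you can build the search URL yourself. A minimal sketch - note that the `searchbyimage` endpoint and its `image_url` parameter are assumptions based on Google's long-standing search-by-image URL pattern, not an official API:

```python
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse image search URL for a publicly hosted image.

    Assumption: Google's classic search-by-image endpoint accepts the
    image address in an `image_url` query parameter.
    """
    base = "https://www.google.com/searchbyimage"
    return base + "?" + urlencode({"image_url": image_url})

# Open the returned URL in a browser to see where else the face appears.
print(reverse_image_search_url("https://example.com/profile.jpg"))
```

Paste the printed URL into a browser: if the "unique" face shows up on a stock photo site or under a different name, you have your answer.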
An AI-generated picture doesn't give you that chance. The reverse search will not return any results, as the face is indeed unique. Granted, absolutely no results might also be suspicious, but much less so than a photo that suddenly appears on a stock photo site.
Even more: one can design a picture to be especially appealing to a certain target group. Long story short: AI-generated photos increase the chance of being considered real and of an invitation being accepted. It's not entirely clear what the purpose of such a campaign is, though. On Twitter, Satter mentioned that he had contacted most of the fake account's LinkedIn connections, and none had had any conversations with her.
But the right connections can, over time, give a fake person credibility. I recently had to check the trustworthiness of a source for a film we were producing and asked a security researcher if he knew anything about that person. One of the criteria he used was the connections that source had on social networks.
So it could be a long-term strategy: create a picture that allows you to establish a network, communicate extensively about specific topics, build awareness for your fake persona, and influence opinions as a well-connected influencer.
The technology behind it is called a Generative Adversarial Network (GAN). It is as complicated as it sounds, but relatively easy to use: everything you need is publicly available on GitHub, and with a bit of technical understanding you can get started.
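The core idea of a GAN is a game between two models: a generator that fabricates samples from random noise, and a discriminator that tries to tell fabricated samples from real ones; each update makes one side better at fooling the other. A minimal sketch of that game, on one-dimensional toy data instead of face images - the "networks" are reduced to an affine generator and a logistic-regression discriminator so the whole loop fits in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 1-D samples from N(4, 1.25) - a toy stand-in for real faces.
def real_batch(n):
    return rng.normal(4.0, 1.25, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b maps noise z ~ N(0,1) to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores "how real" a sample looks.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(size=batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # Hand-derived gradients of the binary cross-entropy loss.
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(size=batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    gx = -(1 - df) * w          # gradient of the loss w.r.t. the fake sample
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# After training, the generator's output distribution has drifted
# toward the real data's mean of 4, without ever seeing it directly.
print("generated mean:", round(float(np.mean(a * rng.normal(size=10000) + b)), 2))
```

Real face generators like StyleGAN play exactly this game, just with deep convolutional networks on millions of images instead of an affine map on Gaussians - which is why the publicly available GitHub implementations are usable with modest technical skill.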
There can be legitimate reasons for creating a fake image. For a website, for example, it's cheaper and more distinctive than using a face from a stock photography database. Phil Wang's project thispersondoesnotexist.com collects samples of AI-generated faces. It is an excellent resource to train yourself in spotting the little mistakes the AI still makes. Besides that, it really is an impressive showcase of what the AI can do.
Christopher Schmidt takes it a step further (and weirder). His site thisrentaldoesnotexist.com creates fake Airbnb listings. Everything, from the pictures to the text, is generated by machine learning. The listings are often not perfect - but then, which rental listing is flawless?
And now, test yourself. This site shows you a real and a fake image. It's up to you to find out which is which.