
How to spot a deepfake, according to experts who clocked the fake persona behind the Hunter Biden dossier


A shocking dossier intended to detonate a bomb under Joe Biden's presidential campaign was defused after a researcher spotted that its author was a computer-generated fake.

A document penned by "Typhoon Investigations" began circulating in right-wing circles in September, alleging compromising ties between Biden's son Hunter and China.

[Image: "Martin Aspen"]

But "Martin Aspen", the document's purported author, isn't real. His likeness was produced by a generative adversarial network (GAN), a branch of artificial intelligence, and the report's allegations were baseless.

Disinformation researchers have warned that deepfake personas like Martin Aspen pose a threat to democracy, though until now the threat has been minimal. We've seen convincing examples of Trump and Obama deepfakes, but neither was used for nefarious political purposes.

The Martin Aspen incident is something else. If political fakery really is on the rise, how do we protect ourselves?

There are tell-tale signs when a neural network has produced a fake image

First, it's helpful to understand how these images are created.

A GAN pits two neural networks against each other: a generator, which produces synthetic images, and a discriminator, which tries to tell real images from fakes. As the discriminator gets better at catching fakes, the generator gets better at producing images that are indistinguishable from the real thing.
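That adversarial setup can be sketched on toy one-dimensional data, where a tiny linear "generator" learns to mimic samples from a target distribution. This is an illustrative sketch only, not a real image model; all names (real_batch, the parameters a, b, w, c) are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic this.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, mapping noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 64
for step in range(3000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of binary cross-entropy w.r.t. the discriminator's logit.
    grad_w = np.mean((d_real - 1) * x_real + d_fake * x_fake)
    grad_c = np.mean((d_real - 1) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) -> 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean((d_fake - 1) * w * z)
    grad_b = np.mean((d_fake - 1) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# Since E[z] = 0, the mean of generated samples is just b.
print(f"generated mean: {b:.2f} (target 4.0)")
```

After training, the generator's output distribution has drifted from a mean of 0 toward the real data's mean of 4 — the discriminator's feedback is what pulled it there. Image GANs like StyleGAN run the same tug-of-war with millions of parameters.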

GANs have become very good at creating lifelike images of people — but they're not infallible. Check out this weird "dog ball" generated by a trio of researchers in 2019:

[Image: the "dog ball" GAN artifact]

But GANs have improved significantly, to the point where the technology can generate fairly convincing human faces:

[Image: an AI-generated face]

"While these generative adversarial networks can be really good, and they learn from their own 'mistakes' so they get better over time, there are certain contextual things they cannot understand," said Agnes Venema, a Marie Curie research fellow, working on a project at the Romanian National Intelligence Academy and at the Department of Information Policy and Governance of the University of Malta.

Here's how to spot when an image isn't exactly a real person.

Background details can be telling

[Image: Martin Aspen's clothing]

"Key giveaways for GAN-created faces tend to be vague, out of focus backgrounds, or weird textures," said Elise Thomas, the researcher at the Australian Strategic Policy Institute who first outed Aspen as an AI fraud.

"Sometimes they look like they're borrowed from other things," she added, "like a shirt which looks like it has the texture of a plant." Aspen's odd green clothes were a dead giveaway.
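One rough, automatable proxy for the "out of focus background" giveaway is the variance of the Laplacian, a standard sharpness score: blurry regions have small second derivatives, so the score collapses. This is a generic illustration, not the method Thomas used; the synthetic checkerboard images stand in for a real photo's background crops.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the 3x3 Laplacian over the image interior.
    Low values suggest a blurry, out-of-focus region."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4 * img[1:-1, 1:-1])
    return lap.var()

# Synthetic demo: a sharp checkerboard vs a heavily smoothed copy.
sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
blurry = sharp.copy()
for _ in range(10):  # crude 5-point box blur, repeated
    blurry = (blurry +
              np.roll(blurry, 1, 0) + np.roll(blurry, -1, 0) +
              np.roll(blurry, 1, 1) + np.roll(blurry, -1, 1)) / 5

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

In practice you would run a score like this over background patches of a suspect photo and compare against the face region; GAN backgrounds often score anomalously low or contain the "borrowed texture" seams described above.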

It's all in the eyes

[Image: Martin Aspen's eye]

The key tell that Aspen was the product of computer code, rather than a real person, was simple to spot once you zoomed in on the eyes. "You do sometimes see the irregular irises, as the Martin Aspen picture had," said Thomas.

The irises get close to being realistic, but often bleed or blur in a way that isn't natural. In the case of the faked image of Martin Aspen, there's a second pupil in one iris, which is only visible when you zoom in and analyze the image in detail.

Check the ears, too

[Image: Martin Aspen's ear]

Computers don't have ears, and when confronted with their curious folds of cartilage and skin, they struggle to understand what's going on anatomically. "Sometimes there are areas of a deepfake that the GAN has not been able to train so well on to make it look natural," said Venema.

Ears are also often covered by hair, so there's less training data to get them right. The wonky ears were one giveaway in Aspen's photograph, though for women it's often the inability to render earrings in a logical manner that makes it obvious something's amiss.

Hairlines are often worrying

[Image: Martin Aspen's hairline]

For those focused on the ravages of ageing, the hairline is the first thing they look at – and it can help identify deepfaked images of people, too. "It's the inconsistencies that are very difficult to spot, but can be there, like fuzzy hairlines," said Venema.

GANs often struggle with shadows, too, and the image of the fake intelligence analyst had issues there as well. There's an odd element by Aspen's left temple where thinning grey hair casts dark brown shadows that it shouldn't.

How to become better at spotting deepfakes

If you're keen to stay away from disinformation in the coming days ahead of the election – or in the coming years, come what may – then Thomas recommends visiting Which Face is Real, a website that shows you a real and a computer-generated face side by side and helps train you to spot the common flaws in AI-generated ones.

"It really helps to get your eye in for GAN faces," she said. "It's pretty incredible how good it's become in the last couple of years, given that this technology didn't exist until quite recently and is now available to almost anyone."

However, Thomas mixes that awe with fear. "I also question whether we really want it to get so good that no one can tell whether it's real or not," she said. "It's hard to see how the benefits of that would outweigh the inevitable misuse of it."
