Evening Standard
Technology
Andrew Williams

Intel says it can separate living people from deepfakes in real time

Intel FakeCatcher analyses blood flow (Picture: Intel)

Intel has developed a system that can identify deepfakes, called FakeCatcher, which it claims is the “world’s first real-time deepfake detector”.

The detector looks for visual signs of blood flow in people’s faces, to determine whether the representation of a person is real or not. Intel says the process takes just “milliseconds”.

FakeCatcher’s accuracy sits at 96 per cent, according to an Intel blog on the software posted last week.

Intel says up to 72 streams can be analysed at once using one of its 3rd Gen Xeon processors. However, these are a little different to the CPUs found in consumer laptops and desktop PCs, and cost up to around £4,000.

This gives some indication of where this technology, or tech like it, might be used in future.

How FakeCatcher could be used

Intel suggests it could be implemented in “content-creation tools”, the most fundamental of which are video-editing applications like Adobe Premiere Pro. The most pressing use is, of course, within social-media platforms, as “part of a screening process for user-generated content”.

This particular case is why the ability to assess many streams concurrently is so important, owing to the sheer volume of content posted online.

Intel also suggests the platform could be used by media and broadcasters, or opened up to the public at large, to let them establish the veracity of any video.

FakeCatcher looks for slight changes in skin tone caused by the flow of blood through the body. Watches with heart-rate sensors use a similar method, firing green LEDs into the wrist and analysing the amount of reflected light to calculate how fast your heart is working.

The task seems more difficult here, as the variation in skin tone caused by blood flow is normally too subtle to perceive with the naked eye.
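
Intel has not published FakeCatcher's full pipeline, but the underlying technique, remote photoplethysmography (rPPG), can be sketched briefly. The toy Python example below uses a synthetic "face" signal and invented numbers rather than real video: it shows how a pulse-like rhythm can be recovered from tiny frame-to-frame shifts in average skin colour, the kind of rhythm genuine footage carries and synthesised faces generally do not.

# A minimal, self-contained sketch of remote photoplethysmography (rPPG),
# the general technique FakeCatcher is reported to build on. Intel's actual
# pipeline is not public; this only illustrates how a periodic "pulse" can be
# recovered from faint frame-to-frame changes in average skin colour.

import numpy as np

FPS = 30            # frames per second of the analysed clip
DURATION_S = 10     # length of the clip, in seconds
PULSE_HZ = 1.2      # simulated heart rate: 1.2 Hz = 72 beats per minute


def synthetic_face_clip(pulse_hz=PULSE_HZ, noise=0.5):
    """Stand-in for a cropped face region: mean green-channel brightness per
    frame, with a faint periodic component caused by blood flow."""
    t = np.arange(FPS * DURATION_S) / FPS
    baseline = 120.0                                   # average skin brightness
    pulse = 0.8 * np.sin(2 * np.pi * pulse_hz * t)     # sub-1% colour variation
    return baseline + pulse + noise * np.random.randn(t.size)


def estimated_heart_rate_bpm(green_means):
    """Detrend the per-frame signal and pick the dominant frequency in the
    plausible heart-rate band (0.7-3 Hz, i.e. 42-180 bpm)."""
    signal = green_means - green_means.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz


if __name__ == "__main__":
    clip = synthetic_face_clip()
    print(f"Estimated heart rate: {estimated_heart_rate_bpm(clip):.0f} bpm")
    # A genuine face yields a stable, physiologically plausible rate;
    # synthesised faces tend not to carry a consistent pulse signal.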

We have contacted Intel to ask whether FakeCatcher has specific requirements for video quality, and whether it struggles with certain skin tones. The latter is a known issue with the optical heart-rate sensors in watches, which can produce less accurate results for people with darker skin.

In 2021, Facebook posted a research blog about its own attempts to identify deepfake images, rather than video. Its method attempts to reverse engineer the machine-learning model used to create the image in question, unearthing the fingerprints such a model leaves on its creations.
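
Facebook's research goes well beyond this, but the core idea of a generator "fingerprint" can be illustrated simply: suppress the visible image content and examine the faint noise pattern that remains. The Python sketch below is a generic illustration of that idea, not Facebook's actual method, and all names and numbers in it are invented for the example.

# Toy illustration of a generator "fingerprint": strip image content with a
# smoothing filter and average the residuals across images, so that content
# cancels out while any shared artefact left by the generator remains.

import numpy as np
from scipy.ndimage import uniform_filter


def noise_residual(image):
    """High-frequency residual: the image minus a locally smoothed copy."""
    return image - uniform_filter(image, size=3)


def fingerprint(images):
    """Average residual over many images from the same suspected generator."""
    return np.mean([noise_residual(img) for img in images], axis=0)


def same_source_score(image, reference_fingerprint):
    """Normalised correlation between an image's residual and a fingerprint.
    Higher scores suggest the image came from the same generator."""
    r = noise_residual(image).ravel()
    f = reference_fingerprint.ravel()
    return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-9))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    artefact = rng.normal(scale=0.5, size=(64, 64))   # pretend generator artefact
    fakes = [rng.normal(size=(64, 64)) + artefact for _ in range(50)]
    real = rng.normal(size=(64, 64))                  # carries no shared artefact
    fp = fingerprint(fakes)
    print("fake vs fingerprint:", round(same_source_score(fakes[0], fp), 3))
    print("real vs fingerprint:", round(same_source_score(real, fp), 3))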

Later in 2021, Nvidia “deepfaked” part of a presentation by its own CEO to demonstrate the power of generative imaging techniques. Between them, these two companies illustrate some of the most positive and negative uses of generative imaging technology. Nvidia shows off hyper-realistic graphics and character modelling that could one day appear in video games or VR experiences. But the clearest use for deepfake tech is as a disinformation tool, which is what makes technologies like Intel FakeCatcher so important.
