The Guardian - AU
Josh Taylor

Facebook rejects Andrew Forrest’s legal claim it should be liable for cryptocurrency scam ads

Andrew Forrest has launched court action against Facebook in both California and Western Australia over scam ads bearing his image. Photograph: Bianca de Marchi/AAP

Facebook has filed court documents rejecting claims by Australian billionaire Andrew Forrest that it should be held liable for cryptocurrency scam ads bearing Forrest’s unauthorised image, in part arguing the company’s terms of service protect it from liability.

The social media giant argued in court documents filed in California in January that because Forrest has a Facebook account, he has agreed to the company’s terms and conditions, which it says protect it from liability.

Forrest launched civil proceedings against the social media giant in the superior court in San Mateo, California, alleging Facebook’s failure to stop scam ads appearing on the platform constituted a “misappropriation of likeness”, “aiding and abetting fraud” and “negligent failure to warn”.

The ads, which began appearing in March 2019, use images of celebrities or other well-known people in the regions they target. They are presented as a news story claiming the celebrity has made a “big investment” and that banks are shocked by how well that investment has done.

If you click on the ad, it takes you to a fake news story that includes a link claiming to be a cryptocurrency investment scheme.

Forrest has separately commenced criminal proceedings in the Western Australian magistrates court alleging that Facebook has breached federal money laundering laws by failing to stop the ads. Those proceedings are expected to be mentioned for the first time in March.

In the US court, Forrest has argued his reputation has been damaged by the scam ads and that Facebook is not merely a platform but the publisher of the ads. He argues this is because Facebook actively targets users for ads based on their location, interests and other information.

“Facebook is not simply providing neutral tools for bad actors to carry out fraudulent schemes. Instead, Facebook is utilizing its sophisticated method of collecting data, and then using that data to engage its users for longer periods of time with information, advertisements, and other material, irrespective of what that content is,” the court filing states.

“Facebook is directly involved with developing and enforcing a system that subjects its users to illegal content.”

In its response, filed in court, Facebook has argued that it cannot be held liable on a number of grounds, including that it is protected by section 230 of the US Communications Decency Act, which limits the liability of websites for third-party content posted on those websites.

Facebook has argued the claims were “the classic kinds of claims that [have been] found to be preempted by section 230” and uniformly rejected by US courts.

The company has also argued that because Forrest has a Facebook account and has agreed to the company’s terms of service, he cannot claim the company is liable.

“Specifically, section 4.3 of the TOS makes clear that Facebook is ‘provided ‘as is’’, that Facebook ‘make[s] no guarantees that [the platform] always will be safe, secure, or error-free’, and that Facebook ‘do[es] not control or direct what people and others do or say’ and ‘[is] not responsible for their actions or conduct (whether online or offline) or any content they share (including offensive, inappropriate, obscene, unlawful, and other objectionable content)’.”

Forrest’s next response is due on Tuesday, with Facebook’s subsequent response due in March, ahead of a hearing in mid-April.

Both Republicans and Democrats in the US have called for reform of section 230 in the past few years. Most recently, following the revelations of the Facebook whistleblower Frances Haugen, House Democrats introduced a bill which would remove the liability protections for companies with over 5 million users in situations where personalised algorithms serve up harmful content to users.

Although it will not comment on the cases, Facebook has previously told the Guardian that it takes “a multifaceted approach to stop these ads” and is “committed to keeping these people off our platform”.
