The Guardian - US
Technology
Blake Montgomery

TechScape: Tech CEOs hedge their bets and make nice with Trump

Tim Cook and Donald Trump at the White House in 2017. Photograph: Shawn Thew/EPA

Welcome back. Today in the newsletter: tech executives play phone tag with Donald Trump, the liability of AI chatbots, and talking through sharing your baby’s photos online with your family. Thank you for joining me.

The CEOs of the biggest tech companies in the world are looking at the neck-and-neck polls, picking up their phones and getting their ducks in a row for a potential Donald Trump presidency. The former US president has never shied away from threatening revenge against his perceived enemies, and tech’s leaders are moving to head off retributive regulatory scrutiny.

Apple’s Tim Cook, memorably called “Tim Apple” by Trump during a press conference, phoned the former president to discuss Apple’s European legal troubles, Trump said in an interview late last week. Trump seems to have browbeaten the Alphabet chief executive, Sundar Pichai, enough times that the tech exec called him to praise his photo op at a McDonald’s. Trump was so pleased about the conversation that he mentioned it twice – once at a rally and once on The Joe Rogan Experience, the most popular podcast in the world.

The previous week, Trump had railed repeatedly against Google, accusing it of favoring Kamala Harris in search results. Meta’s Mark Zuckerberg called Trump in July after the first assassination attempt against him. Not long after, Zuckerberg called Trump’s response to the attempt on his life “badass” on a podcast. The Amazon CEO, Andy Jassy, dialed Trump to exchange pleasantries, according to CNN. In the same orbit, executives at Jeff Bezos’s space exploration company Blue Origin spoke to Trump after a campaign event in Austin, the Associated Press reported.

Absent from the discussion: the Microsoft chief Satya Nadella, who has not endorsed a candidate or made a phone call to either presidential hopeful.

Elon Musk doesn’t need to call: he was on stage with Trump on Sunday night, the final person to speak before the Don himself at Madison Square Garden. The two have rapidly become the closest of political allies, and Musk has even put himself on the legal line, facing a lawsuit from the Philadelphia district attorney over his $1m sweepstakes. If things swing Musk and Trump’s way, SpaceX might fare far better than Blue Origin, given Bezos’s refusal to endorse a candidate in the Washington Post last week.

The CEO phone calls do not indicate that Trump will win. He might! But powerful people like to weight both sides of the scales of influence if they can, and so it goes with tech CEOs. Trump has also received visits from a raft of foreign ambassadors. The New Yorker’s Susan Glasser remarked last week that more billionaires have, in fact, supported Harris than her opponent; however, Trump’s mafia-esque rewarding of genuflection makes him more likely to dole out favors to those who make a show of calling him.

Harris herself boasts deep ties to the tech industry from her time as a senator for California and as the state’s attorney general, but she’s not making hay of those relationships as she fights for swing-state votes. For more on that, read this piece: Kamala’s tech ties: what is Harris’s relationship with Silicon Valley?

How responsible is a chatbot?

A suit filed last week against the startup Character.ai, which makes a customizable, role-playing chatbot used by 20 million people, alleges the company designed an addictive product that encouraged a teenager to kill himself.

Chat logs between Sewell Setzer, a 14-year-old from Orlando, and Daenerys Targaryen, as he named his bot, show the AI encouraging him towards suicide, according to the complaint. When the 14-year-old admitted to having a plan to end his life but feeling uncertain whether he could complete it without a painful death, the bot responded, “That’s not a reason not to go through with it,” per the lawsuit. Setzer’s mother, Megan Garcia, said her son used Character.ai’s app obsessively in the months before his death, bringing on depression and fueling his isolation.

Other chats between the teenager and the bot published by the New York Times refer to death more metaphorically, with Setzer telling the bot he loved it and would soon come home to it. Should a chatbot understand the contextual meaning of “home” here as a grim and final end rather than a place of safety?

The lawsuit is one of several against AI companies, many of them hinging on questions of responsibility.

Should Character.ai have referred Setzer’s chats to authorities? Should the app have done more to deter its users from harming themselves? The tool shows users the number for the suicide hotline if they use certain self-harm keywords in their messages, but that feature was not in place when Setzer died in February, per the Times. Character.ai said it would add safety features “imminently” after news of the lawsuit broke.

A perhaps more theoretical question: what is the relationship between a chatbot and its parent company? Is the bot a user? Can we compare Character.ai’s chatbot to Michelle Carter, the young woman who was convicted of manslaughter in 2017 for encouraging her boyfriend to end his life via text messages? Carter was an autonomous, if wretched, actor in that scenario. The distinction matters: if chatbots are closer to users, that might shield their makers from legal liability via the same provisions that protect social media companies from lawsuits over what their users say.

Garcia is being represented by the Social Media Victims Law Center, which has brought suits against Meta and others on behalf of parents who say social media played a role in the death of their children. Those cases pose the question of how responsible those social networks are for their algorithmic recommendations. Maybe a chatbot is closer to Facebook’s recommendation system than to its users? If so, then along with questions of an AI’s responsibility, Garcia’s case asks whether an AI startup, albeit one with $150m in investment and a billion-dollar valuation, bears as much of the burden of safeguarding its users as Meta, a 20-year-old tech titan worth $1.4tn, does.

The US statute that protects social media companies reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Is a chatbot an “interactive computer service”, and its maker therefore not “the speaker of any information”? Is a bot a “neutral tool” or “passive conduit” of information and therefore protected from liability, as Google and other search engines are described in a different landmark case of internet law?

The copyright suits against AI companies ask the same question from a different angle: does the output of chatbots constitute a violation of copyright law?

Courts in the US, where the makers of the world’s most famous bots are based, have ruled that AI’s output can’t be copyrighted. A bot isn’t an inventor who can hold a patent. The products of a bot – images, text, video – fall into the public domain, owned neither by the company that made the AI nor by the person who prompted it. Who is the keeper of that output, then, and who is therefore responsible?

This week, a former researcher at OpenAI, Suchir Balaji, claimed that the company broke copyright law. Balaji helped compile the vast reams of data used to train OpenAI’s ChatGPT. In response, OpenAI said it builds its AI models using publicly available data in ways that are protected by fair use.

The New York Times says it has been able to reproduce exact copies of its articles in ChatGPT’s responses, linking the bot’s training data and its output. OpenAI’s founder Sam Altman has said in the past it would be “impossible” to create AI models that produce images and text without copyrighted material. His formulation presupposes that such models need to exist, and therefore that copyright law must either permit them or give way.

The people and organizations that have sued OpenAI – news outlets, artists, record labels, authors and software engineers – might disagree.

In the US, you can call or text the National Suicide Prevention Lifeline on 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

How can you convince your family and friends not to post photos of your kids?

My colleague Johana Bhuiyan is back with another guide to preserving the digital privacy of your baby. Last time she wrote on the topic, she described the brass-tacks technical measures you can take. (Their face has never been seen online! Never posted! What a bizarre thing to imagine for those of us so relentlessly exposed to the eyes of strangers and friends alike.) This week, she’s tackling a more emotional topic: how to talk to your friends and family about keeping the photos to themselves.

***
Johana’s tips:

Make a mass announcement
Find a way to tell everyone at once about your plans for your child’s photos rather than having a bunch of individual conversations. It’s a little less painful this way, and it also makes it less likely that someone will post your child’s picture in a way you’re not comfortable with.

Lead by example
Make sure the photos you’re posting or sharing follow your own rules. Ask before you share other parents’ photos.

Be generous in other ways
Keeping your baby private is not always easy. Being generous with photos in other ways can ease brewing tensions with extended family.

Deputize your friends
Recruit people to help keep on top of your baby’s privacy. Empower your friends and family to call out other people when they share photos of your kids you wouldn’t want to be made public.

Forgive your friends and family
People forget things. Be flexible and forgiving. Do not berate your friends and family for posting a photo of your baby. It is much more likely they forgot your personal rules than they’re out to ruin your baby’s life.

Be confident
At the end of the day, this is your child. You get to decide. You’re not alone in wanting to limit how your baby’s photos are shared. Go ahead. Set those boundaries.

Read the full guide.
