Good morning and welcome to Tech News Now, TheStreet's daily tech rundown.
In today's edition, we're covering Wall Street's harsh reaction to Palo Alto Networks' Tuesday earnings, a new open letter calling for strong deepfake regulation, the New York Times' decision to explore AI ad tools and the apparent breaking of ChatGPT.
And, of course, chip giant Nvidia is reporting earnings after the bell Wednesday.
Tickers we're watching: (NVDA), (PANW) and (TSLA).
Let's get into it.
Related: Top analyst says Nvidia's earnings are 'key for the tech world and broader markets'
Palo Alto sinks 24% in pre-market trading
Shares of Palo Alto plummeted more than 24% in pre-market trading Wednesday morning on the heels of an earnings beat that came hand in hand with a reduction in the company's full-year revenue guidance.
The cybersecurity and software firm reported adjusted earnings of $1.46 per share, above expectations of $1.30, on revenue of $1.98 billion, slightly above expectations of $1.97 billion.
The company cut its full-year revenue guidance to a range of $7.95 billion to $8 billion, from a prior range of $8.15 billion to $8.2 billion.
CEO Nikesh Arora said during the conference call that the adjusted guidance was the result of a changing strategy, one focused on accelerating growth, "platform migration and consolidation and activating AI leadership."
Arora acknowledged that Palo Alto expects a "difficult" period with customers as a result of the company's strategy shift.
Though Wedbush's Dan Ives called the report a "brutal night," he noted that the transition to a more platform-centric approach "is the right long-term move" for the company, and will allow it to "ultimately emerge in a stronger market position."
"Our long-term bull thesis is still well intact," Ives said in a note, maintaining his outperform rating but dropping his price target to $375 from $425.
The stock was down more than 25% to $272 per share immediately following market open.
Open letter calls for strict regulations of deepfakes
Concerns over AI-generated deepfakes have been mounting in recent weeks as the scale and scope of identity theft has seemingly evolved; one cybersecurity expert I spoke to earlier this month referred to the situation as "identity hijacking," a more dangerous iteration of identity theft in which anyone's likeness — through video, imagery, audio or text — can be synthesized and fraudulently misused.
Recent instances demonstrate the variety of attacks that have already been perpetrated: a podcast used AI in an attempt to create a new comedy special in the voice and style of the late comic legend George Carlin; deepfaked, sexually explicit images of Taylor Swift went viral on X, highlighting the growing problem of deepfake porn, which has already affected high school students; and a finance worker at a firm in Hong Kong was duped into giving $25 million to scammers posing in a video call as the company's CFO.
And these are only a few recent, headline-grabbing events. The problem of deepfakes is proliferating, and it poses enormous threats not just to individuals, but also to electoral processes around the world through deepfaked misinformation.
In a new open letter published Wednesday, hundreds of scientists and executives are calling for new laws to at least hinder this proliferation of deepfakes.
More deep dives on AI:
- Think tank director warns of the danger around 'non-democratic tech leaders deciding the future'
- US expert warns of one overlooked AI risk
- Artificial Intelligence is a sustainability nightmare — but it doesn't have to be
New laws, according to the letter, ought to "fully criminalize" deepfake child pornography, and governments ought to establish criminal penalties for "anyone who knowingly creates or facilitates the spread of harmful deepfakes."
The letter goes on to call for responsibility to be placed on software developers and distributors, urging governments to require such parties to prevent their products from creating harmful deepfakes, and to "be held liable if their preventive measures are easily circumvented."
The letter's 430 signatories include AI researcher Gary Marcus, Facebook whistleblower Frances Haugen and AI scientist Yoshua Bengio.
Related: Deepfake porn: It's not just about Taylor Swift
New York Times is playing with AI
Despite the New York Times' lawsuit against OpenAI alleging massive copyright infringement and the "unlawful use" of the Times' journalistic content, the media company is not technology-averse.
The Times told Axios Tuesday that it is building ad-targeting tools powered by generative AI, work that reportedly began long before its lawsuit against OpenAI.
The technology will allow business partners to better target niche audiences through more specific ad campaigns. The tool will be made available to advertisers in the second half of the year.
"This obviously demonstrates that we believe GenAI is an enabler and can be something that is effective for our business when used responsibly," Joy Robins, global chief advertising officer, said.
Related: Copyright expert predicts result of NY Times lawsuit against Microsoft, OpenAI
The AI Corner: The breaking of ChatGPT
X users on Tuesday flagged ChatGPT going "berserk," noting instances where OpenAI's flagship chatbot "ends each reply with hallucinated garbage and doesn't stop generating it."
OpenAI has broken GPT-4. It ends each reply with hallucinated garbage and doesn't stop generating it. pic.twitter.com/cwC3BUUeoR
— Andriy Burkov (@burkov) February 20, 2024
OpenAI acknowledged the issue, noting "unexpected responses" from ChatGPT. The company did not respond to a request for comment.
"The reality, though is that these systems have never been stable. Nobody has ever been able to engineer safety guarantees around them," Marcus wrote in response. "The need for altogether different technologies that are less opaque, more interpretable, more maintainable and more debuggable — and hence more tractable—remains paramount."
Black box APIs can break in production when one of their underlying components gets updated.

This becomes an issue when you build tools on top of these APIs, and these break down too.

That's where open-source has a major advantage, allowing you to pinpoint and fix the problem! https://t.co/qyssdHSusF

— Sasha Luccioni, PhD 💻🌎🦋✨🤗 (@SashaMTL) February 21, 2024
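Luccioni's point is easy to make concrete. Below is a minimal sketch of the kind of defensive wrapper a downstream tool might put around a black-box model, written against OpenAI's documented chat-completions REST endpoint; the `ask` helper, the choice of pinned snapshot and the length threshold are illustrative assumptions, not anything OpenAI prescribes.

```python
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask(prompt: str, max_chars: int = 4000) -> str:
    """Illustrative sketch: query a black-box chat API defensively.

    Pinning a dated model snapshot (here "gpt-4-0613") guards against
    silent upstream updates, and the checks below catch runaway or
    truncated generations before they reach downstream tools.
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4-0613",  # pinned snapshot, not a moving alias
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 512,  # hard cap on generation length
        },
        timeout=30,
    )
    resp.raise_for_status()
    choice = resp.json()["choices"][0]
    text = choice["message"]["content"]

    # Fail loudly instead of passing garbage along, as happened when
    # replies ended in endless hallucinated text.
    if choice.get("finish_reason") != "stop" or len(text) > max_chars:
        raise RuntimeError(
            f"suspicious completion (finish_reason={choice.get('finish_reason')!r})"
        )
    return text
```

Even a thin guard like this only detects breakage; as Luccioni notes, actually pinpointing and fixing the underlying problem requires access that closed APIs don't offer.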
"Today’s issue may well be fixed quickly," he added, "but I hope it will be seen as the wakeup call that it is."
Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
Related: Human creativity persists in the era of generative AI