"Unleash all this creativity": Google AI's breathtaking potential

Google's research arm on Wednesday showed off a whiz-bang assortment of artificial intelligence (AI) projects it's incubating, aimed at everything from mitigating climate change to helping novelists craft prose.

Why it matters: AI has breathtaking potential to improve and enrich our lives — and comes with hugely worrisome risks of misuse, intrusion and malfeasance, if not developed and deployed responsibly.


Driving the news: The dozen-or-so AI projects that Google Research unfurled at a Manhattan media event are in various stages of development, with goals ranging from societal improvement (such as better health diagnoses) to pure creativity and fun (text-to-image generation that can help you build a 3D image of a skirt-clad monster made of marzipan).

On the "social good" side:

  • Wildfire tracking: Google's machine-learning model for early detection is live in the U.S., Canada, Mexico and parts of Australia.
  • Flood forecasting: A system that sent 115 million flood alerts to 23 million people in India and Bangladesh last year has since expanded to 18 additional countries (15 in Africa, plus Brazil, Colombia and Sri Lanka). 
  • Maternal health/ultrasound AI: Using an Android app and a portable ultrasound monitor, nurses and midwives in the U.S. and Zambia are testing a system that assesses a fetus' gestational age and position in the womb.
  • Preventing blindness: Google's Automated Retinal Disease Assessment (ARDA) uses AI to help health care workers detect diabetic retinopathy. More than 150,000 patients have been screened via smartphone photos of their eyes.
  • The "1,000 Languages Initiative": Google is building an AI model that will work with the world's 1,000 most-spoken languages.

On the more speculative and experimental side:

  • Self-coding robots: In a project called "Code as Policies," robots are learning to autonomously generate new code.
  • In a demonstration, Google's Andy Zeng told a robot hovering over three plastic bowls (red, blue and green) and three pieces of candy (Skittles, M&M's and Reese's) that this reporter liked M&M's and that her bowl was blue. The robot placed the correct candy in the right bowl, even though it wasn't directly told to "place M&M's in the blue bowl."
  • Wordcraft: Several professional writers are experimenting with Google's AI fiction-crafting tool. It isn't quite ready for prime time, but you can read the stories they devised with it here.
At left, Andy Zeng of Google Research shows how a robot can be taught to understand terms like "Willy Wonka" as a metaphor for chocolate. At right, Daniel Tse of Google Research shows off the AI-driven maternal sonography system he's developing. Photos: Jennifer A. Kingson

The big picture: Fears about AI's dark side — from privacy violations and the spread of misinformation to losing control of consumer data — recently prompted the White House to issue a preliminary "AI Bill of Rights," encouraging technologists to build safeguards into their products.

  • While Google published its principles of AI development in 2018 and other tech companies have done the same, there's little-to-no government regulation.
  • Although investors have been pulling back on AI startups recently, Google's deep pockets could give it more time to develop projects that aren't immediate moneymakers.

Yes, but: Google executives sounded multiple notes of caution as they showed off their wares.

  • AI "can have immense social benefits" and "unleash all this creativity," said Marian Croak, head of Google Research's center of expertise on responsible AI.
  • "But because it has such a broad impact on people, the risk involved can also be very huge. And if we don't get that right ... it can be very destructive."

Threat level: A recent Georgetown Center for Security and Emerging Technology report examined how text-generating AI could "be used to turbocharge disinformation campaigns."

  • And as Axios' Scott Rosenberg has written, society is only just beginning to grapple with the legal and ethical questions raised by AI's new capacity to generate text and images.

Still, there's fun stuff: This summer, Google Research introduced Imagen and Parti — two AI models that can generate photorealistic images from text prompts (like "a puppy in a nest emerging from a cracked egg"). Now they're working on text-to-video:

  • Imagen Video can create a short clip from phrases like "a giraffe underneath a microwave."
  • Phenaki is "a model for generating videos from text, with prompts that can change over time and videos that can be as long as multiple minutes," per Google Research.
  • AI Test Kitchen is an app that demonstrates text-to-image capabilities through two games, "City Dreamer" (build cityscapes using keywords) and "Wobble" (create friendly monsters that can dance).

The bottom line: Despite recent financial headwinds, AI is steamrolling forward — with companies such as Google positioned to serve as moral arbiters and standard-setters.

  • "AI is the most profound technology we are working on, yet these are still early days," Google CEO Sundar Pichai said in a recorded introduction to Wednesday's event.