Digital Camera World
Hillary K. Grigonis

What's the next step in smarter AI? Teaching AI how to "forget", researchers suggest

AI (Artificial Intelligence) concept - woman creating photos from her thoughts.

Researchers from the Tokyo University of Science (TUS) have proposed a new concept for building smarter AI: giving it selective memory loss.

A study published this month describes a new method for removing specific information from existing artificial intelligence models, suggesting that the "black box forgetting" technique could produce smarter, more efficient specialist AI rather than generalist systems. The researchers note that the process could also help improve privacy and reduce energy consumption.

Why make an AI forget some of its data? The study's authors suggest that a specialist AI could outperform a generalist AI at specific tasks. Another potential advantage, they posited, is preventing generative AI from producing undesirable content.

A selectively forgetful AI could also help ease some privacy concerns by removing extraneous data. Large artificial intelligence datasets also require both large amounts of energy and devices or cloud storage with ample space. Telling the AI to forget specific, unnecessary data could help AI systems run more efficiently.

As an example, the researchers explained that an autonomous car needs to recognize things like cars, people and signs, but it doesn't need to identify types of food. "Retaining the classes that do not need to be recognized may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage," said study leader Go Irie, an associate professor at TUS.

The research isn't the first to propose making AI forget in the name of privacy and energy conservation. However, earlier approaches required the user to have access to the model's internal structure.

The new TUS method, by contrast, can be used by people who don't have access to a model's internals because of commercial or ethical restrictions. That makes it possible to customize existing AI models for specific purposes rather than building a system from scratch.

The team explained that existing black-box methods optimize a prompt by repeatedly sampling candidate prompts, feeding them to the model, evaluating the results and then updating the sampling distribution, a process that becomes unwieldy as prompts grow longer. The researchers' new method, called latent context sharing, breaks a prompt's latent representation into smaller shared pieces and optimizes those lower-dimensional parts instead.
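The sample-score-update loop can be illustrated with a toy sketch. This is not the authors' implementation: the `black_box_score` function below is an invented stand-in for the inaccessible model (in the real setting, something like accuracy on the classes to keep minus accuracy on the classes to forget), and a simple hill-climbing search stands in for the derivative-free optimizer used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(context):
    # Stand-in for the model we cannot look inside. Here it simply rewards
    # latent contexts that land near a hidden target vector.
    target = np.full_like(context, 0.5)
    return -float(np.sum((context - target) ** 2))

def optimize_context(dim=8, n_chunks=4, iters=200, pop=16, sigma=0.2):
    # Build the latent context from smaller chunks (echoing the idea of
    # decomposing a prompt into shared, lower-dimensional parts), then tune
    # it with a derivative-free loop: sample candidates, score them through
    # the black box, keep the best one.
    best = np.concatenate(
        [rng.standard_normal(dim // n_chunks) for _ in range(n_chunks)]
    )
    best_score = black_box_score(best)
    for _ in range(iters):
        for _ in range(pop):
            candidate = best + sigma * rng.standard_normal(best.shape)
            score = black_box_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
        sigma *= 0.99  # shrink the search radius as the loop converges
    return best, best_score

context, score = optimize_context()
print(f"best score after optimization: {score:.3f}")
```

No gradients are ever computed, which is the point: the optimizer only ever sees the scores the model returns, never the model itself.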

The researchers tested their proposed method using CLIP (Contrastive Language-Image Pre-training), an AI model that classifies images by identifying what they show. The goal was to get the model to "forget" 40% of the classes of objects it could identify.
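CLIP-style classification works by comparing an image embedding against text embeddings of candidate class names and picking the closest match. The sketch below illustrates that idea with made-up vectors and class names (real CLIP embeddings come from its trained image and text encoders), and shows the autonomous-car scenario of keeping only the classes a system actually needs.

```python
import numpy as np

# Invented toy embeddings for three class names; in CLIP these would be
# produced by the text encoder.
class_embeddings = {
    "car":    np.array([1.0, 0.1, 0.0]),
    "person": np.array([0.1, 1.0, 0.0]),
    "pizza":  np.array([0.0, 0.1, 1.0]),
}

def classify(image_vec, classes):
    # Predict whichever class name's embedding has the highest cosine
    # similarity with the image embedding.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(classes, key=lambda c: cos(image_vec, class_embeddings[c]))

image = np.array([0.9, 0.2, 0.1])  # toy embedding that "looks like" a car
print(classify(image, ["car", "person", "pizza"]))  # -> car

# A driving system has no use for food classes, so they can be dropped.
kept = [c for c in class_embeddings if c != "pizza"]
print(classify(image, kept))  # -> car
```

Simply deleting labels, as above, is trivial; the harder problem the study tackles is making the model itself lose the ability to recognize those classes when its internals cannot be touched.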

The researchers called the results promising and are set to present their findings at the Neural Information Processing Systems (NeurIPS) conference.

You may also like…

For more, read about how AI became part of mainstream photography, or how to spot a generative AI image.
