Selective Forgetting Can Help AI Learn Better

By News Room · 10 March 2024 · 3 min read

The original version of this story appeared in Quanta Magazine.

A team of computer scientists has created a nimbler, more flexible type of machine learning model. The trick: It must periodically forget what it knows. And while this new approach won’t displace the huge models that undergird the biggest apps, it could reveal more about how these programs understand language.

The new research marks “a significant advance in the field,” said Jea Kwon, an AI engineer at the Institute for Basic Science in South Korea.

The AI language engines in use today are mostly powered by artificial neural networks. Each “neuron” in the network is a mathematical function that receives signals from other such neurons, runs some calculations, and sends signals on through multiple layers of neurons. Initially the flow of information is more or less random, but through training, the information flow between neurons improves as the network adapts to the training data. If an AI researcher wants to create a bilingual model, for example, she would train the model with a big pile of text from both languages, which would adjust the connections between neurons in such a way as to relate the text in one language with equivalent words in the other.
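To make that description concrete, here is a minimal sketch in plain Python of the kind of "neuron" and layer the article describes. Everything in it (the layer sizes, the ReLU nonlinearity, the random weights) is an illustrative assumption, not any real model's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # One layer of "neurons": each output is a weighted sum of the incoming
    # signals plus a nonlinearity. Training would adjust weights and bias.
    return np.maximum(0.0, x @ weights + bias)

# Random initial weights: the flow of information starts out
# "more or less random", as the article puts it.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

x = rng.normal(size=(1, 8))   # incoming signal (e.g., a word representation)
hidden = layer(x, w1, b1)     # first layer of neurons runs its calculations
output = hidden @ w2 + b2     # and sends signals on to the next layer
```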

But this training process takes a lot of computing power. If the model doesn’t work very well, or if the user’s needs change later on, it’s hard to adapt it. “Say you have a model that has 100 languages, but imagine that one language you want is not covered,” said Mikel Artetxe, a coauthor of the new research and founder of the AI startup Reka. “You could start over from scratch, but it’s not ideal.”

Artetxe and his colleagues have tried to circumvent these limitations. A few years ago, Artetxe and others trained a neural network in one language, then erased what it knew about the building blocks of words, called tokens. These are stored in the first layer of the neural network, called the embedding layer. They left all the other layers of the model alone. After erasing the tokens of the first language, they retrained the model on the second language, which filled the embedding layer with new tokens from that language.
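As a rough illustration of that erase-and-retrain step, here is a hedged PyTorch sketch. The `TinyLM` model, its sizes, and the freezing choice are hypothetical stand-ins, not the researchers' actual code; the point is only that the embedding layer can be wiped and refilled while every deeper layer is left alone.

```python
import torch.nn as nn

class TinyLM(nn.Module):
    # A toy language model: token embeddings in front, deeper layers behind.
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)  # first layer: tokens
        self.body = nn.GRU(dim, dim, batch_first=True)  # deeper layers
        self.head = nn.Linear(dim, vocab_size, bias=False)
        # Tie the output projection to the embeddings, so token-specific
        # knowledge lives in one place.
        self.head.weight = self.embedding.weight

    def forward(self, tokens):
        h, _ = self.body(self.embedding(tokens))
        return self.head(h)

model = TinyLM(vocab_size=1000)
# ... train on text from language A here ...

# "Forget" language A's tokens: reinitialize only the embedding layer.
model.embedding.reset_parameters()

# Leave all the other layers alone (here, frozen), then retrain on
# language B text to fill the embedding layer with its new tokens.
for p in model.body.parameters():
    p.requires_grad = False
```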

Even though the model contained mismatched information, the retraining worked: The model could learn and process the new language. The researchers surmised that while the embedding layer stored information specific to the words used in the language, the deeper levels of the network stored more abstract information about the concepts behind human languages, which then helped the model learn the second language.

“We live in the same world. We conceptualize the same things with different words” in different languages, said Yihong Chen, the lead author of the recent paper. “That’s why you have this same high-level reasoning in the model. An apple is something sweet and juicy, instead of just a word.”
