News

This AI Model Never Stops Learning

By News Room · 18 June 2025 · 4 Mins Read

Modern large language models (LLMs) might write beautiful sonnets and elegant code, but they lack even a rudimentary ability to learn from experience.

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are ever to mimic human intelligence more faithfully. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information, including a user's interests and preferences.

The MIT scheme, called Self-Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.

“The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved with developing SEAL. Pari says the idea was to see if a model’s output could be used to train it.

Adam Zweiger, an MIT undergraduate researcher involved with building SEAL, adds that although newer models can “reason” their way to better solutions by performing more complex inference, the model itself does not benefit from this reasoning over the long term.

SEAL, by contrast, generates new insights and then folds them into its own weights, or parameters. Given a statement about the challenges faced by the Apollo space program, for instance, the model generated new passages describing the implications of that statement. The researchers compare this to the way a human student writes and reviews notes in order to aid their learning.

The system then updates the model using this data and tests how well the updated model is able to answer a set of questions. Finally, this provides a reinforcement-learning signal that guides the model toward updates that improve its overall abilities and help it carry on learning.
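The loop described above—generate candidate self-edits, update the model with each, score the updated model on held-out questions, and keep the update that scores best—can be sketched in miniature. This is a toy illustration only, not the researchers' code: the "model" is a single skill score, and the function names (generate_self_edit, apply_update, evaluate) are invented for the sketch.

```python
import random

random.seed(0)

def generate_self_edit(model, passage):
    # The model writes its own synthetic training data (a "self-edit")
    # from the input passage. Toy stand-in: a random edit strength.
    return {"text": passage, "strength": random.uniform(0.0, 1.0)}

def apply_update(model, self_edit):
    # Fine-tune on the self-edit. Toy stand-in: nudge a skill score.
    return model + 0.1 * self_edit["strength"]

def evaluate(model):
    # Score the updated model on held-out questions (capped at 1.0).
    return min(model, 1.0)

def seal_step(model, passage, n_candidates=4):
    # Reinforcement signal: among several candidate self-edits, keep
    # the one whose resulting model answers the questions best.
    best_model, best_score = model, evaluate(model)
    for _ in range(n_candidates):
        edit = generate_self_edit(model, passage)
        updated = apply_update(model, edit)
        score = evaluate(updated)
        if score > best_score:
            best_model, best_score = updated, score
    return best_model

model = 0.5  # initial ability
for passage in ["Apollo program challenges"] * 3:
    model = seal_step(model, passage)
```

The key design point the sketch captures is that the selection pressure comes from downstream question-answering performance, not from the self-edits themselves, so the model is rewarded for generating training data that actually helps it.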

The researchers tested their approach on small and medium-size versions of two open-source models, Meta's Llama and Alibaba's Qwen. They say that the approach ought to work for much larger frontier models too.

The researchers tested the SEAL approach on text as well as a benchmark called ARC that gauges an AI model’s ability to solve abstract reasoning problems. In both cases they saw that SEAL allowed the models to continue learning well beyond their initial training.

Pulkit Agrawal, a professor at MIT who oversaw the work, says that the SEAL project touches on important themes in AI, including how to get AI to figure out for itself what it should try to learn. He says it could well be used to help make AI models more personalized. “LLMs are powerful but we don’t want their knowledge to stop,” he says.

SEAL is not yet a way for AI to improve indefinitely. For one thing, as Agrawal notes, the LLMs tested suffer from what's known as "catastrophic forgetting," a troubling effect in which ingesting new information causes older knowledge to simply disappear. This may point to a fundamental difference between artificial neural networks and biological ones. Pari and Zweiger also note that SEAL is computationally intensive, and it isn't yet clear how best to schedule new periods of learning. One fun idea, Zweiger mentions, is that, like humans, perhaps LLMs could experience periods of "sleep" where new information is consolidated.
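Catastrophic forgetting is easy to reproduce at small scale. The toy demo below (a NumPy sketch, not anything from the SEAL work) trains a single linear model on task A, then on a conflicting task B, using plain SGD: because both tasks share the same weights, training on B overwrites what was learned on A, and error on A climbs back up.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    # A noiseless regression task defined by its true weight vector.
    X = rng.normal(size=(200, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def sgd(w, X, y, lr=0.02, steps=500):
    # Plain stochastic gradient descent, one example at a time.
    for _ in range(steps):
        i = rng.integers(len(X))
        grad = 2 * (X[i] @ w - y[i]) * X[i]
        w = w - lr * grad
    return w

Xa, ya = make_task(np.array([1.0, -2.0]))   # task A
Xb, yb = make_task(np.array([-3.0, 0.5]))   # task B, conflicting weights

w = sgd(np.zeros(2), Xa, ya)
err_a_before = mse(w, Xa, ya)  # near zero: A was just learned
w = sgd(w, Xb, yb)             # now train only on B
err_a_after = mse(w, Xa, ya)   # large: knowledge of A was overwritten
```

Nothing in plain SGD protects the weights that mattered for task A, which is exactly the failure mode a continually learning system like SEAL has to contend with at far larger scale.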

Still, for all its limitations, SEAL is an exciting new path for further AI research—and it may well be something that finds its way into future frontier AI models.

What do you think about AI that is able to keep on learning? Send an email to [email protected] to let me know.

© 2025 Best in Technology. All Rights Reserved.