Meta Builds AI Model That Can Train Itself

Here's one that'll freak the AI fearmongers out. As reported by Reuters, Meta has released a new generative AI model that can train itself to improve its outputs.

That's right, it's alive, though also not really.

As per Reuters:

"Meta said on Friday that it's releasing a 'Self-Taught Evaluator' that may offer a path toward less human involvement in the AI development process. The technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses on challenging problems in subjects like science, coding and math."

So rather than relying on human oversight, Meta's developing AI systems within AI systems, which will enable its processes to test and improve aspects of the model itself, and in turn lead to better outputs.

Meta has outlined the process in a new paper, which explains how the system works.

As per Meta:

"In this work, we present an approach that aims to improve evaluators without human annotations, using synthetic training data only. Starting from unlabeled instructions, our iterative self-improvement scheme generates contrasting model outputs and trains an LLM-as-a-Judge to produce reasoning traces and final judgments, repeating this training at each new iteration using the improved predictions."
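To make that quote a little more concrete, here's a minimal sketch, in Python, of what an iterative "LLM-as-a-Judge" self-training loop along those lines might look like. Everything here (the function names, the filtering rule that keeps only judgments agreeing with the known-better output, and the overall data flow) is an illustrative assumption, not Meta's actual implementation.

```python
# Illustrative sketch of an iterative "LLM-as-a-Judge" self-training loop.
# Names and structure are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class JudgeExample:
    instruction: str
    response_a: str
    response_b: str
    reasoning_trace: str   # the judge's written explanation
    verdict: str           # e.g. "A" or "B"


def self_taught_evaluator(
    unlabeled_instructions: List[str],
    generate_contrasting_outputs: Callable[[str], Tuple[str, str]],
    judge: Callable[[str, str, str], Tuple[str, str]],
    train_judge: Callable[[List[JudgeExample]], Callable[[str, str, str], Tuple[str, str]]],
    num_iterations: int = 3,
):
    """Iteratively improve a judge model using only synthetic training data.

    Each iteration: (1) produce pairs of contrasting model outputs for the
    unlabeled instructions, (2) have the current judge write a reasoning
    trace and pick a winner, (3) retrain the judge on its own filtered
    judgments, then repeat with the improved judge.
    """
    for _ in range(num_iterations):
        synthetic_data: List[JudgeExample] = []
        for instruction in unlabeled_instructions:
            # Generate two outputs of deliberately different quality,
            # e.g. one from a strong prompt and one from a degraded prompt.
            good, bad = generate_contrasting_outputs(instruction)

            # The current judge produces a reasoning trace plus a verdict.
            trace, verdict = judge(instruction, good, bad)

            # Keep only judgments that agree with the construction
            # (the "good" output was response A), so the synthetic labels
            # stay reliable without any human annotation.
            if verdict == "A":
                synthetic_data.append(
                    JudgeExample(instruction, good, bad, trace, verdict)
                )

        # Retrain the judge on its own filtered judgments and iterate.
        judge = train_judge(synthetic_data)

    return judge
```

The key idea the paper describes is that the judge's own improved predictions become the training data for the next round, so no human labels are needed at any step.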

Spooky, right? Maybe for Halloween this year you could go as "LLM-as-a-Judge", though the amount of explaining you'd have to do probably makes it a non-starter.

As Reuters notes, the project is one of several new AI developments from Meta, which have all now been released in model form for testing by third parties. Meta's also released code for its updated "Segment Anything" model, a new multimodal language model that mixes text and speech, a system designed to help detect and protect against AI-based cyberattacks, improved translation tools, and a new way to discover inorganic raw materials.

The models are all part of Meta's open source approach to generative AI development, which will see the company share its AI findings with external developers to help advance its tools.

Which also comes with a level of risk, in that we don't know the full extent of what AI can actually do as yet. Getting AI to train AI sounds like a path to trouble in some respects, but we're also still a long way from artificial general intelligence (AGI), which would eventually enable machine-based systems to simulate human thinking, and come up with creative solutions without intervention.

That's the real concern that AI doomers have: that we're close to building systems that are smarter than us, and could then come to see humans as a threat. Again, that's not happening anytime soon, with many more years of research required to simulate actual brain-like activity.

But even so, that doesn't mean that we can't generate problematic outcomes with the AI tools that are available.

It's less risky than a Terminator-style robot apocalypse, but as more and more systems incorporate generative AI, advances like this may help to improve outputs, while also leading to more unpredictable, and potentially harmful, results.

That, I guess, is what these initial tests are for, though maybe open sourcing everything in this way expands the potential risk.

You can read about Meta's latest AI models and datasets here.
