Google’s new AI architecture, ‘Pathways’, has the potential to transform machine-learning models from one-trick ponies into far more adaptable and perceptive systems.
According to Jeff Dean, Google’s AI lead and co-founder of the Google Brain project, today’s AI models are in the “one-trick pony” stage, meaning they are “usually trained to perform only one thing.” However, a new method known as Pathways may be able to create something equivalent to a trainable dog that can perform various tricks.
Pathways is a “next-generation AI architecture,” according to Dean, that will “allow us to train a single model to do hundreds or millions of things.”
Rather than restricting a model to information from a single sense, Pathways is designed to let it respond to input from several at once, such as text, images, and speech.
Dean notes, “Pathways could enable multimodal models that cover vision, audio, and linguistic understanding at the same time.”
The model might then process the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, for instance.
The AI recognizes the concept of a leopard in all three circumstances. According to Dean, a model like this would be “more insightful and less prone to errors and biases.”
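The idea of one concept shared across modalities can be illustrated with a toy sketch. This is not Google’s implementation: the “encoders” below are simulated by adding small noise to a shared concept vector, standing in for trained text, audio, and video encoders that map their inputs into one embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# One shared vector per concept; in a real multimodal model these would
# be learned, not random.
concepts = {w: rng.normal(size=DIM) for w in ["leopard", "piano", "river"]}

def fake_encode(word, noise=0.1):
    """Stand-in for a trained modality encoder (text, audio, or video):
    returns an embedding near the word's shared concept vector."""
    v = concepts[word] + noise * rng.normal(size=DIM)
    return v / np.linalg.norm(v)

def nearest_concept(embedding):
    """Return the concept whose vector has the highest cosine similarity."""
    return max(
        concepts,
        key=lambda w: embedding @ (concepts[w] / np.linalg.norm(concepts[w])),
    )

# The word "leopard", the spoken word, and a video of a leopard all land
# near the same point in the shared space.
text_emb  = fake_encode("leopard")
audio_emb = fake_encode("leopard")
video_emb = fake_encode("leopard")

for emb in (text_emb, audio_emb, video_emb):
    print(nearest_concept(emb))  # all three resolve to "leopard"
```

Because all three embeddings sit near one concept vector, the model can treat them as the same idea regardless of which sense delivered it.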
“We’d like to train one model that can perform a variety of jobs while also drawing on and combining its existing talents to learn new tasks faster and more efficiently,” Dean explains.
What he appears to be describing is an AI that can plug into other AI models and take advantage of their best features by employing a Pathways model that “dynamically learns which parts of the network are good at which tasks — it learns how to route tasks through the most relevant parts of the model.”
As a result, Pathways may improve machine learning’s decision-making capabilities, possibly bringing it closer to an AI that can reason through a problem or situation.
Pathways, according to Dean, is not only better at learning diverse tasks but also more energy-efficient, because only the sections of the network relevant to a given task are activated.
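The routing idea Dean describes resembles sparse “mixture-of-experts” gating, where a small router picks which sub-networks handle each input and the rest stay idle. The sketch below is a hypothetical illustration with random weights, not the Pathways architecture itself: `router`, `experts`, and `TOP_K` are all illustrative names.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, N_EXPERTS, TOP_K = 4, 6, 2

# Each "expert" is a small stand-in sub-network (here, just a linear map).
experts = [rng.normal(size=(D_IN, D_IN)) for _ in range(N_EXPERTS)]
# The router scores how relevant each expert is to a given input.
router = rng.normal(size=(D_IN, N_EXPERTS))

def forward(x):
    """Route x through only the TOP_K highest-scoring experts;
    the other experts are never evaluated, saving compute."""
    scores = x @ router
    top = np.argsort(scores)[-TOP_K:]          # indices of the chosen experts
    w = np.exp(scores[top])
    w = w / w.sum()                            # softmax over the chosen experts
    out = sum(wi * (x @ experts[i]) for wi, i in zip(w, top))
    return out, sorted(top.tolist())

x = rng.normal(size=D_IN)
y, used = forward(x)
print("experts activated:", used)  # only 2 of the 6 experts ran
```

Only the selected experts do any work for this input, which is the sense in which such a network is sparsely activated and cheaper to run than one that fires every parameter on every task.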