What does a leaked Google memo reveal about the future of AI? | The Economist, May 11th 2023
ioi_ab's bookmarks 2023-05-15
Summary:
"...researchers in the open-source community, using free, online resources, are now achieving results comparable to the biggest proprietary models. It turns out that LLMs can be “fine-tuned” using a technique called low-rank adaptation, or LoRA. This allows an existing LLM to be optimised for a particular task far more quickly and cheaply than training an LLM from scratch.
Activity in open-source AI exploded in March, when LLaMA, a model created by Meta, Facebook’s parent, was leaked online. Although it is smaller than the largest LLMs (its smallest version has 7bn parameters, compared with 540bn for Google’s PaLM) it was quickly fine-tuned to produce results comparable to the original version of ChatGPT on some tasks. As open-source researchers built on each other’s work with LLaMA, “a tremendous outpouring of innovation followed,” the memo’s author writes.
This could have seismic implications for the industry’s future. “The barrier to entry for training and experimentation has dropped from the total output of a major research organisation to one person, an evening, and a beefy laptop,” the Google memo claims. An LLM can now be fine-tuned for $100 in a few hours. With its fast-moving, collaborative and low-cost model, “open-source has some significant advantages that we cannot replicate.” Hence the memo’s title: this may mean Google has no defensive “moat” against open-source competitors. Nor, for that matter, does OpenAI...."
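Why LoRA makes fine-tuning so cheap: instead of updating a pretrained weight matrix W directly, it learns a small low-rank correction B·A and leaves W frozen, so only a tiny fraction of parameters need gradients and optimiser state. A minimal NumPy sketch of the idea (the shapes, names and scaling here are illustrative assumptions, not the memo's or any library's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a single 512x512 layer, adaptation rank r << 512.
d_in, d_out, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# Trainable low-rank factors. B starts at zero, so before any training
# the adapted layer behaves exactly like the pretrained one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def adapted_forward(x):
    # Effective weight is W + B @ A, but we never materialise the sum:
    # the correction is applied as two cheap low-rank multiplies.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
assert np.allclose(adapted_forward(x), W @ x)  # B == 0, output unchanged

full_params = W.size                 # 512 * 512 = 262144
lora_params = A.size + B.size        # 8 * (512 + 512) = 8192
print(lora_params / full_params)     # only ~3% of the layer is trained
```

At rank 8 the trainable parameters shrink by a factor of 32 for this layer, which is the mechanism behind the memo's claim that fine-tuning now fits in "one person, an evening, and a beefy laptop".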