
OpenAI’s next-generation ChatGPT model, code-named Orion, which is rumored to arrive by the end of the year (a timeline the company has denied), may not live up to the hype once it lands, according to a new report from The Information.

Citing anonymous OpenAI employees, the report claims the Orion model has shown a “far smaller” improvement over its GPT-4 predecessor than GPT-4 showed over GPT-3. Those sources also note that Orion “isn’t reliably better than its predecessor [GPT-4] in handling certain tasks,” specifically coding applications, though the new model is notably stronger at general language capabilities, such as summarizing documents or generating emails.

The Information’s report cites a “dwindling supply of high-quality text and other data” on which to train new models as a major factor in the new model’s insubstantial gains. In short, the AI industry is quickly running into a training data bottleneck, having already stripped the easy sources of social media data from sites like X, Facebook, and YouTube (the latter on two different occasions). As a result, these companies are finding it increasingly difficult to source the sorts of knotty coding challenges that would push their models beyond their current capabilities, slowing down pre-release training.

That reduced training efficiency has massive ecological and commercial implications. As frontier-class LLMs grow and push their parameter counts into the high trillions, the energy, water, and other resources they consume are expected to increase six-fold over the next decade. This is why we’re seeing Microsoft try to restart Three Mile Island, AWS buy a 960 MW plant, and Google purchase the output of seven nuclear reactors, all to power their growing menageries of AI data centers; the nation’s current power infrastructure simply can’t keep up.

In response, as TechCrunch reports, OpenAI has created a “foundations team” to work around the shortage of suitable training data. Its approaches could include synthetic training data, such as the kind Nvidia’s Nemotron family of models can generate, as well as squeezing more performance out of a model after its initial training run.
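To give a rough sense of what the synthetic-data approach looks like in practice, here is a minimal Python sketch that asks a “teacher” model to invent coding question-and-answer pairs and keeps only the ones that pass a crude quality check. The endpoint URL, model identifier, seed topics, and quality heuristic are illustrative assumptions, not a description of OpenAI’s or Nvidia’s actual pipeline.

```python
# Minimal sketch: generate synthetic training pairs with a teacher model,
# then keep only the ones that pass a simple quality gate.
# The endpoint, model name, and scoring heuristic are illustrative assumptions.
import json
from openai import OpenAI  # standard OpenAI-compatible client

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

SEED_TOPICS = [
    "binary search edge cases",
    "rate limiting a REST API",
    "parsing CSV files with quoted fields",
]

def generate_pair(topic: str) -> dict:
    """Ask the teacher model to invent one coding question and answer it."""
    resp = client.chat.completions.create(
        model="nvidia/nemotron-4-340b-instruct",  # assumed model identifier
        messages=[{
            "role": "user",
            "content": (
                f"Write one challenging coding question about {topic}, then a "
                "correct, well-explained answer. Respond with JSON containing "
                "the keys 'question' and 'answer' and nothing else."
            ),
        }],
        temperature=0.8,
    )
    # Assumes the model returns bare JSON; a real pipeline would validate this.
    return json.loads(resp.choices[0].message.content)

def passes_quality_gate(pair: dict) -> bool:
    """Crude stand-in for a reward model or verifier: reject short answers."""
    return len(pair.get("answer", "")) > 200

synthetic_dataset = [
    pair for pair in (generate_pair(topic) for topic in SEED_TOPICS)
    if passes_quality_gate(pair)
]

# Write the surviving pairs to a JSONL file for later fine-tuning.
with open("synthetic_pairs.jsonl", "w") as f:
    for pair in synthetic_dataset:
        f.write(json.dumps(pair) + "\n")
```

Real pipelines replace the length check above with a dedicated reward or verifier model, which is the part that makes synthetic data useful rather than circular.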

Orion, which was originally thought to be the code name for OpenAI’s GPT-5, is now expected to arrive at some point in 2025. Whether we’ll have enough available power to see it in action without browning out our municipal electrical grids remains to be seen.

By Jasper