On Sunday, OpenAI CEO Sam Altman offered two eye-catching predictions about the near future of artificial intelligence. In a post titled “Reflections” on his personal blog, Altman wrote, “We are now confident we know how to build AGI as we have traditionally understood it.” He added, “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
Both statements are notable coming from Altman, who has led OpenAI through the rise of mainstream generative AI products such as ChatGPT. AI agents, the latest marketing trend in the field, are AI models that can take action on a user’s behalf. However, critics of the company and Altman immediately took aim at the statements on social media.
“We are now confident that we can spin bullshit at unprecedented levels, and get away with it,” wrote frequent OpenAI critic Gary Marcus in response to Altman’s post. “So we now aspire to aim beyond that, to hype in purest sense of that word. We love our products, but we are here for the glorious next rounds of funding. With infinite funding, we can control the universe.”
AGI, short for “artificial general intelligence,” is a nebulous term that OpenAI typically defines as “highly autonomous systems that outperform humans at most economically valuable work.” Elsewhere in the field, AGI typically means an adaptable AI model that can generalize (apply existing knowledge to novel situations) beyond specific examples found in its training data, similar to how some humans can do almost any kind of work after having been shown a few examples of how to do a task.
According to a longstanding investment rule at OpenAI, the rights over developed AGI technology are excluded from its IP investment contracts with companies such as Microsoft. In a recently revealed financial agreement between the two companies, the firms clarified that “AGI” will be considered achieved at OpenAI when one of its AI models generates at least $100 billion in profits.
Tech companies don’t often say this out loud, but AGI would be useful to them because it could replace many human employees with software, automating information jobs and reducing labor costs while also boosting productivity. The potential societal downsides could be considerable, and those implications extend far beyond the scope of this article. But the potential economic shock of inventing artificial knowledge workers has not escaped Altman, who has forecast the need for universal basic income as a potential antidote for what he sees coming.