Great post. Thanks, Yuli.
GPT-4.5 unveiled and released to the masses. Though on the surface it doesn't seem to be a massive leap forward over GPT-4, likely having the same 128k-token context window and possibly not even being fully multimodal but rather pseudo-multimodal like ChatGPT is, it actually will be a substantial upgrade in terms of logic, coherence, hallucination reduction, quality, and commonsense reasoning. Plus, its outputs might be substantially longer, ranging from 8k tokens to 16k or even 24k in one go, effectively making it possible to create whole novels in 10 minutes.
To me, that does seem like a massive leap, even on the surface. Very much so. I've always been of the opinion that 2023 AI is competent at almost everything (except math) when it performs well; the issue is reliability. I ordered "The AI Revolution in Medicine: GPT-4 and Beyond" and was amazed at just how capable the March release of GPT-4 was. It was incredibly capable of grasping nuance, of offering thoughtful and relevant solutions, and of understanding long, layered, complex writing... minus the 10 percent of the time it glitched out and hallucinated.
So take its already impressive capabilities and simply reduce hallucinations (from an already rather low rate of 3 percent), along with patching up a couple of other weaknesses? That's huge. That would excite me more than a multimodal model that hasn't seen significant improvements in hallucinations, logic, and commonsense reasoning. If GPT-4.5's main emphasis is maximizing reliability, then GPT-5 can focus on things like Q-Star and other dramatic changes.
AI in video game creation becomes tangibly feasible, with GPT-4.5 capable of creating SNES-quality games
Holy shit. That would be phenomenal. Are you assuming simpler genres like sidescrollers, or games as complex as RPGs? There's obviously a difference between Contra III and Super Mario RPG, but honestly even making something on par with a 1991 SNES sidescroller would be mind-blowing.
Generative AI begins transforming medicine at incredible rates, including an LLM discovering a method of "curing" a type of cancer or fatal disease
And the best thing is, if AI is indeed capable of curing a disease, there's no reason that 2024 AI should stop at one single disease, right? It would likely be able to extend those capabilities to finding cures for multiple diseases by the end of the year. Maybe double digits, maybe triple digits (if AI is to medical care what GNoME is to materials science).
And as for artificial general intelligence? I think a similar shift to blue hour will happen here too. The SOTA in 2024 is going to be so outrageous that there will indeed be debates over whether we've achieved AGI now or will within the next couple of years. Those debates may even rage online, and the arrival of AI agents will muddy the waters further, because agent behavior will fool many people into thinking they're operating AGIs, especially when said agents seem to use logic to build coherent creations like video games or comics. Yet very smart people will point out "this is not AGI after all," citing deficiencies or missing capabilities in certain areas. So it will indeed be a watershed year for the debate, but I think on the other side of it, it'll be clear that we're much closer than we thought, if nothing else, and the rise of agentic AI will convince some that we've already achieved it. Nothing conclusive will be settled this year, however.
If 2024 AI does indeed reach the level described in your predictions, I'll consider it to be AGI, honestly - even if it falls short compared to humans in a few ways. Sure, here in December 2023, AI is for the most part an inferior form of intelligence to our own. It possesses some advantages, but overall we possess more. But if the situation is reversed and AI matches or exceeds us in most respects, I don't see any reason to insist that we must be the superior intelligence just because we retain a few advantages. At the very least I would expect a "separate but ultimately more or less equal" scenario, which is an incredibly different paradigm from the historical "separate but with human beings having a clear overall advantage" (despite AI obviously having some incredibly important advantages that have accelerated certain tasks and certain fields by hundreds, thousands, millions, or billions of times).
2024 might not decisively answer the "Does AGI exist?" question, but I think there's another question that will be answered more plainly: "Does it really matter?" And I think the answer to that is... no, not really. If your predictions hold true, transformative AI will be here. You'll have AI that has patched up its most glaring weaknesses, is capable of creating almost any form of art at almost any level of quality, and is helping humanity with its greatest forms of suffering at a speed hundreds or thousands of times faster than we've ever known (namely the Medical Revolution that begins in 2024 thanks to AI). By the end of 2024, we will have built the tools that will build the new world. Whether those tools fit the label of AGI probably doesn't matter that much; they will have reached the level of sophistication and ability needed to allow for rapid gains in quality of life, for each year from 2025 onwards (maybe the latter half of 2024) to be an era where miracles are routine.