1. Focus on Capabilities, not Processes. The majority of definitions focus on what an AGI can
accomplish, not on the mechanism by which it accomplishes tasks. This is important for identifying
characteristics that are not necessarily a prerequisite for achieving AGI (but may nonetheless be
interesting research topics). This focus on capabilities allows us to exclude the following from our
requirements for AGI:
• Achieving AGI does not imply that systems think or understand in a human-like way (since this
focuses on processes, not capabilities)
• Achieving AGI does not imply that systems possess qualities such as consciousness (subjective
awareness) (Butlin et al., 2023) or sentience (the ability to have feelings) (since these qualities
not only have a process focus, but are not currently measurable by agreed-upon scientific
methods)
2. Focus on Generality and Performance. All of the above definitions emphasize generality
to varying degrees, but some exclude performance criteria. We argue that both generality and
performance are key components of AGI. In the next section we introduce a leveled taxonomy that
considers the interplay between these dimensions.
3. Focus on Cognitive and Metacognitive Tasks. Whether to require robotic embodiment (Roy
et al., 2021) as a criterion for AGI is a matter of some debate. Most definitions focus on cognitive
tasks, by which we mean non-physical tasks. Despite recent advances in robotics (Brohan et al.,
2023), physical capabilities for AI systems seem to be lagging behind non-physical capabilities. It is
possible that embodiment in the physical world is necessary for building the world knowledge to be
successful on some cognitive tasks (Shanahan, 2010), or at least may be one path to success on some
classes of cognitive tasks; if that turns out to be true then embodiment may be critical to some paths
toward AGI. We suggest that the ability to perform physical tasks increases a system’s generality, but
should not be considered a necessary prerequisite to achieving AGI. On the other hand, metacognitive
capabilities (such as the ability to learn new tasks or the ability to know when to ask for clarification
or assistance from a human) are key prerequisites for systems to achieve generality.
Basically, something like GPT-4 has "emergent" generality due to the range of cognitive tasks it can perform. GPT-4 is not yet at full human level across every domain, but it can certainly outperform "unskilled" workers or artists, or at least augment their output in a meaningful way. It can summarize and discuss a variety of topics that an unskilled person may not know on their own, and its coding abilities are "impressive" compared to an unskilled human's. Many compare GPT-4 to a junior dev (assuming GPT-4 is having a good day), and then there are the various benchmarks, like passing the bar and medical licensing exams.
If I'm being "strict", I'd say only multimodal LLMs should count from here on out as "emergent" AGIs, which basically hearkens back to Microsoft's "sparks of AGI" paper.