Yuli Ban wrote: ↑Fri Sep 27, 2024 12:28 am
And I agree with this take
My point for years now is "It will be able to do EVERYTHING." It doesn't matter what you think it can't do; it will do that too... or it will at least so perfectly mimic it as to not matter. People constantly say "Well, it can't create a freeform soulful bluesy prog-rock album" and that's one of the first things it did years ago with OpenAI's Jukebox. It can't create a Jackson Pollock painting with the same energy and chaos, and I say "Literally just give it time."
I really just don't know how the current anti-AI contingent is going to react to that. They have no idea that these are quite literally the waning days of even being able to seriously argue that AI is a nothingburger. And by the time AI clearly shows it's not the scam or fake trend so many seem convinced it is, it'll be too late to ban it or do anything to stop it.
The thing about these recursive algorithms is that yes, they can and will do largely anything we throw at them within a given parameter set. But does that make them intelligent? Not really. A well-rested, healthy human would never call a duck a cow, even after a billion or a trillion cycles of the same question being asked. They'd probably get very annoyed; they'd joke about the question; they might even pretend to think the duck is a cow just to get it over with. But the human would never genuinely believe the duck is a cow. They know what it is, and they could correctly identify it at any point in the cycle if they wanted to. I can't say the same about any of these algorithms. They don't "know" anything, and that's why hallucinations aren't going anywhere in real terms, even when they're obscured from view in applications of the technology.
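To put a rough number on that intuition, here's a minimal back-of-the-envelope sketch. The per-query accuracy is a made-up figure for illustration, not a measurement of any real model, and it assumes independent queries, which is a simplification. The point is just that a stochastic answerer's errors compound across repeated queries in a way a human who actually knows the answer doesn't suffer from:

```python
# Back-of-the-envelope sketch: probability that a stochastic answerer
# never slips over n repeated queries, given a hypothetical per-query
# accuracy p (an illustrative number, not a benchmark result).
p = 0.999999  # hypothetical chance of answering "duck" on any one query

for n in (10**3, 10**6, 10**9):
    never_wrong = p ** n  # queries assumed independent
    print(f"n = {n:>13,}: P(never says 'cow') = {never_wrong:.3e}")
```

Even at 99.9999% per-query accuracy, the chance of never slipping is about 37% after a million queries and effectively zero after a billion. Under these admittedly simplistic assumptions, the human's "never" and the algorithm's "almost never" come apart at exactly the scales described above.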
The error rates can be brought into workable ranges for most industry purposes, and the timescales and parameters these algorithms operate within can be brought into ranges where a person might think they're alive, where for all intents and purposes they act the way we'd expect an AGI to act. That doesn't make them one, though. They aren't actually intelligent or alive, and if left to their own devices long enough they will spout incoherent nonsense. The Chinese Room holds; silicon just isn't it.
(And before anyone says this is just the token argument again: it really isn't. The Chinese Room holds. I fought it for a long time too, but it's true when you really dig into it, and nothing that's come out of LessWrong, or Silicon Valley, or some Twitter thread has made a dent in it.)
Both the ideological AI believers and the ideological AI skeptics are mistaken. Narrow AI can, in and of itself, deliver on most of the job-automation claims, media-generation claims, etc. that AI believers are expecting from AGI, without ever actually being one. And the AI skeptics are wrong in the sense that the things they fear are probably going to happen regardless of whether narrow AI ever crosses the intelligence threshold.
I do think that AGI is possible, as I've discussed here before, just not on silicon. We need to be building real brain analogues in a different medium entirely to cross that threshold. When that happens, the "artificial" in the phrase won't even make sense in the lexicon anymore, as we'll be creating living beings with real brains.