The Singularity - Official Thread

Talk about scientific and technological developments in the future
User avatar
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: The Singularity - Official Thread

Post by Cyber_Rebel »



AGI very soon, I take it then. My read is that the point of arrival may make the takeoff in question appear "fast".
User avatar
Ozzie guy
Posts: 487
Joined: Sun May 16, 2021 4:40 pm

Re: The Singularity - Official Thread

Post by Ozzie guy »

Check out this video

User avatar
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: The Singularity - Official Thread

Post by Cyber_Rebel »

Self-Taught Optimizer (STOP):
Recursively Self-Improving Code Generation

Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a “scaffolding” program that structures multiple calls to language models to generate better outputs.

A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed “improver” that improves an input program according to a given utility function by querying a language model several times and returning the best solution.

We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver.

A variety of self-improvement strategies are proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
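For a sense of what the paper's seed improver looks like, here is a minimal, self-contained sketch. The GPT-4 call is replaced by a deterministic stub, and all names, candidate programs, and the toy sorting utility are illustrative assumptions, not taken from the paper:

```python
# Toy candidate programs a "language model" might propose. In the paper
# the proposer is GPT-4; here it is a deterministic stub so the sketch
# runs offline.
CANDIDATES = [
    "def solve(xs):\n    return list(reversed(xs))",
    "def solve(xs):\n    return sorted(xs)",
    "def solve(xs):\n    return xs[:1] + xs[1:]",
]

def mock_lm_propose(src: str, i: int) -> str:
    """Stand-in for a language-model query: return the i-th canned rewrite."""
    return CANDIDATES[i % len(CANDIDATES)]

def utility(src: str) -> float:
    """Fraction of test lists a candidate program sorts correctly."""
    ns: dict = {}
    exec(src, ns)
    tests = [[3, 1, 2], [5, 4], [1]]
    return sum(ns["solve"](t) == sorted(t) for t in tests) / len(tests)

def improver(src: str, utility, n_queries: int = 3) -> str:
    """Seed improver: query the (mock) LM several times and keep the
    best-scoring candidate, falling back to the input program."""
    best, best_u = src, utility(src)
    for i in range(n_queries):
        cand = mock_lm_propose(src, i)
        u = utility(cand)
        if u > best_u:
            best, best_u = cand, u
    return best

seed = "def solve(xs):\n    return xs"
improved = improver(seed, utility)
print(utility(seed), utility(improved))  # utility rises from 1/3 to 1.0
```

STOP's actual twist is then to feed the improver its own source code as the program to be improved; this stub skips that self-application step.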
Supposedly, from one of the authors of the paper:

Tadasuke
Posts: 549
Joined: Tue Aug 17, 2021 3:15 pm
Location: Europe

how my thinking about the probability of The Singularity has changed

Post by Tadasuke »

Back in 2010 or 2015, I thought the probability of The Singularity happening [in the 21st century] was 99%. In 2020 it was 50%, and now I think it's only 1%. The more I know, the less probable I think it is.

No other single idea has had a more negative impact on me than the whole 'ever-accelerating progress leading towards The Singularity, after which progress will be ever greater' idea. It made me think that my fantasies really had a 99% chance of happening in a pretty short time. I don't recommend getting too deep into all of this. :|
Global economy doubles in product every 15-20 years. Computer performance at a constant price doubles nowadays every 4 years on average. Livestock-as-food will globally stop being a thing by ~2050 (precision fermentation and more). Human stupidity, pride and depravity are the biggest problems of our world.
User avatar
Yuli Ban
Posts: 4643
Joined: Sun May 16, 2021 4:44 pm

Re: how my thinking about the probability of The Singularity has changed

Post by Yuli Ban »

Tadasuke wrote: Sat Oct 21, 2023 5:42 am Back in 2010 or 2015, I thought the probability of The Singularity happening [in the 21st century] was 99%. In 2020 it was 50%, and now I think it's only 1%. The more I know, the less probable I think it is.

No other single idea has had a more negative impact on me than the whole 'ever-accelerating progress leading towards The Singularity, after which progress will be ever greater' idea. It made me think that my fantasies really had a 99% chance of happening in a pretty short time. I don't recommend getting too deep into all of this. :|
My thoughts on the Singularity haven't changed much; I'm just more worried about a negative outcome.
Pretty much exactly what I expected to happen has started happening, right down to "less-narrow AI" dominating AI discourse in the 2020s. Despite the raw power of modern artificial intelligence, we have yet to see the "good stuff," as I call it. Take ChatGPT. Right now it's actually pretty scattered, with multiple capabilities you have to toggle separately. And even at its best, I can find its limits quite easily. It's better than GPT-3.5, crossing a competence boundary I've identified, but it's not yet truly ready to "change the world" in meaningful ways.

I do think AI progress is accelerating, to the point of becoming dangerous without an equivalent means of controlling it, though I'm not entirely sold on the idea that this means certain doom. Yudkowsky certainly demoralized me for a while, until I realized that was his whole reason for speaking.
But the cold fact ultimately is "We're in that stage where the time for fantasizing and daydreaming is ending, and it's time to start seriously considering the ground realities of what's coming, including that which may not align perfectly (or even at all) with our old fantasies."
"When does agentic AI start leading to real world improvements when prompted?" I don't think we've seen that happen yet, and as a result, that's what makes it difficult to gauge how likely or transformative a Singularity would be. We've long talked about that, but we're roughly close enough to start seriously thinking about how massive of a shift that would be as well as just how much better AI would make things. If GPT-5 existed and AutoGPT-5 was immediately created to optimize, say, city design or carbon nanofiber engineering, what does it bode for the Singularity if GPT-5 figures it out in an hour, then even GPT-10 later says "Sorry, GPT-5 perfected it, we can't get any better than that, or at least there are way too many diminishing returns." That's what I'M wondering about, which is why I've started taking to the idea that the Singularity is actually going to likely be an ultra-decadent hedonistic age of synthetic media and telepresence space exploration, except you can take pills to live forever and use VR BCIs to live as an eagle. And I don't see that being any further than a few years off, but I also don't know if ten thousand years in the future will be any different just because I have a sneaking feeling "perfect optimization" for most things relevant to the human experience isn't that far off.
For the bleeding edges of science and physics, sure, there's literally a universe beyond us, but I see that concerning only the highest of minds. The vast majority of people simply do not give a shit about the limits of computing, how many dimensions an overmind can think in, how tiny computers and manufacturing can become, or whether they can become invisible to all spectrums of light.

Basically, utopian thinking is probably not the best way to think about the Singularity anymore, until proven otherwise.
And remember my friend, future events such as these will affect you in the future
Tadasuke
Posts: 549
Joined: Tue Aug 17, 2021 3:15 pm
Location: Europe

Re: The Singularity - Official Thread

Post by Tadasuke »

GPT-1, 2, 3, 3.5, 4, Bard, PaLM, Stable Diffusion, BlueWillow, Claude, etc., etc., and I still have yet to see an actual positive impact on my life, or even one wish from my list of 100 come true (as of now, AIs have made my life harder, not easier). I literally feel anxious, sick and really unwell at the very mention of The Singularity or GPT-whatever. 🤢
Global economy doubles in product every 15-20 years. Computer performance at a constant price doubles nowadays every 4 years on average. Livestock-as-food will globally stop being a thing by ~2050 (precision fermentation and more). Human stupidity, pride and depravity are the biggest problems of our world.
User avatar
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: The Singularity - Official Thread

Post by Cyber_Rebel »

@Tadasuke

I'm not challenging your opinion or anything, but if you're feeling this way, it may not have much to do with ChatGPT or any other AI. You have to ask yourself: if the current advancements didn't exist and life were still like the 2010s, would your perspective be any better?

Even if we were heading for a Cyberpunk-style future rather than Star Trek (eventually), that's no reason for doom and despair.
Solaris
Posts: 13
Joined: Thu Sep 22, 2022 8:21 pm

Re: The Singularity - Official Thread

Post by Solaris »

My 1-year review of ChatGPT, and therefore my outlook on AGI and so on

Contrary to many believers here, I'm not impressed by ChatGPT and haven't been since the early days (the initial 4 months). I think its main strength is in coding, where it is largely able to generate code and correct it. However, its ability to correct code is only consistent if the flaw is clear and obvious, meaning you will have to figure it out yourself if more is demanded. This is not surprising; otherwise we would be able to automate every coder in the world. Its ability to code is also limited by the knowledge of the one providing the inputs, which means non-coders will lack the ability to create anything meaningful with the help of ChatGPT. Its second strength is its ability to collect simple information from Wikipedia, something I have not tested myself, but I have seen on this forum that it can tell which works were written by Shakespeare, or answer questions about the Turing test. It's not really useful beyond simple amusement, but it can answer those and provide meaningful responses.

What issues does it have? On the technical side, it lacks long-term memory. Longer conversations in the same 'chat' will make it very slow at some point, which prompts the user to start a new conversation, thereby resetting all the information it has gathered. Sometimes you cannot use ChatGPT at all (though this issue has for the most part disappeared over 2023). When it does actually remember previous conversations, it applies the earlier information in the wrong way, writing complete rubbish. Try to make 10 job applications using ChatGPT - it will fail. The issue can in large part be attributed to logic, as it does not seem to have any. Instead, it just predicts the most likely answer. This becomes clear if you try to create anything academic, where it will simply make up its own authors, and the amateurs get caught using ChatGPT. It is also very obvious when a text has been written by GPT. I see a text and can almost tell from the first lines that it was written by an AI. It lacks personality, it lacks variety, it lacks purpose. It's just empty words. If this is not fixed, it will never have universal use, and no AGI will be created. It could have specific uses, especially if trained in a very specific field. That is what I consider the best-case scenario, but it will not be AGI. It will need consistency, it will need credibility, it will need the ability to deviate from the user's input. When I look at GPT-4, I see no indication of AGI, so Silicon Valley would have to take a giant leap in the next year or so if we are to go from rocks and spears to a nuclear bomb. That's how I see GPT-4 compared to the AGI proposed on this forum and in other places of a similar nature.

In general, I have become more of a sceptic about the timeline of AI and other things, especially as the credibility of the users engaging in these conversations has decreased as the years have gone by. I have seen people argue for a Singularity this year; I don't see AGI before 2029 at the earliest, and that is being very optimistic. I have seen people convinced we will reach LEV by 2025; that is complete horsetalk. I see too many people dreaming of what their life could become with the help of AI, creating an entire world of wonder different from the reality in which we live. In my earliest days, I took the claims and predictions at face value, and of course some have been right, but most of it amounts to nothing. In the end, I will follow Ray Kurzweil, whom I consider one of the few consistent predictors and an actual expert. 2023 has therefore been a year where I have readjusted my expectations for the future. Most importantly, I see work remaining almost unchanged everywhere for at least the next 10 years, and likely at least 20, so as a new graduate my aim is to work hard initially and then slowly adjust. I also consider it very unlikely that AGI will change the fundamental way the world works, meaning capitalism will continue to exist. While it's possible that poverty will disappear, there will still be differences between humans. So my main bet for the future is money; even though I live in one of the best countries in the world, I will consider it insurance if things go haywire.
User avatar
caltrek
Posts: 6613
Joined: Mon May 17, 2021 1:17 pm

Re: The Singularity - Official Thread

Post by caltrek »

I was first introduced to the concept of the singularity in this very forum. Specifically, I remember discussions with Yuli Ban on this topic. (Nice to have you back Yuli ;) ) One of the discussions we had concerned the idea of an AGI owning itself. At the time, this seemed to me a totally unworkable notion. As I have thought about it, it seems to me that there would at least have to be a way for economic and political systems to evolve to the point where such a scenario could come to pass. It might very well be that the singularity could result in such self-liberation by the ASI, but that is by no means guaranteed. The argument for such self-liberation is that such a superintelligence would develop both the will and the means to achieve that objective.

In the 19th century, Marx argued that history was going down a more or less predestined path in which socialism would succeed capitalism as the predominant form of economic and political organization. Communism would then succeed socialism as the result of the withering away of the state. It is very tempting to now suppose that the singularity and the arrival of a society dominated by artificial super-intelligence (ASI) will actually be the next stage of development. One aspect of socialism was the notion of a planned society. Domination by an ASI would thus be a natural step forward in instituting such a planned society. Its means and level of control would go far beyond anything Marx ever envisioned. Whether such a level of planning and direction is desirable remains at this time an open question.

Francis Fukuyama wrote in The End of History and the Last Man that the last stage of development would not be communism, but rather societies in which the precise mix of government and private control and power is a matter of ongoing tinkering and adjustment. Fukuyama has since distanced himself from this thesis, although it is not clear to me precisely why. One problem I think Fukuyama failed to address is the presence of reactionary elements in the various societies of the world. Such elements would seek to return us to theocracy or to fascism, or, failing fascism, at least to societies in which rigid systems of apartheid would prevail.

If I have changed my viewpoint, it is on the issue of whether domination by ASI is really possible. This notion was first introduced to me through science fiction, specifically a Star Trek episode portraying a society where this had occurred. I now take this possible outcome more seriously than simply a matter of science fiction entertainment.

One complicating factor is that there may very well be not one but many ASIs: some flowing from the private sector, others developed by governments. Moreover, many advanced countries, such as the U.S., China, Japan, and member states of the European Union (perhaps acting in concert with each other, or with the U.S.), might develop their own versions of an ASI. A major question is whether such ASIs would compete against each other, perhaps as intended by their respective creators, or join in some common cause for the greater good.

I can only guess as to the answer to that question. I suspect initially, they would compete against each other, but the logic of co-operation may win out in the end.

For more on The End of History and the Last Man: https://en.wikipedia.org/wiki/The_End_ ... e_Last_Man
Don't mourn, organize.

-Joe Hill
Tadasuke
Posts: 549
Joined: Tue Aug 17, 2021 3:15 pm
Location: Europe

Singularity vs reality

Post by Tadasuke »

Ray Kurzweil's idea and prediction for a 2023 $1000 laptop: 20 petaflops of FP32 (3D carbon-nanotube lattice processors), 10 TB of RAM (40 TB/s), 1 PB of SSD

real 2013 $1000 laptop: 1 teraflops, 16 GB of RAM (30 GB/s), 1 TB of HDD, 1920x1080 60Hz

real 2023 $1000 laptop: 10 teraflops, 16 GB of RAM (90 GB/s), 1 TB of SSD, 1920x1080 144Hz
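Taken at face value, the figures above pin down both the implied doubling time and the size of the shortfall versus the prediction. A quick check, assuming simple exponential growth between the two real data points:

```python
import math

def doubling_time(years: float, growth: float) -> float:
    """Years per doubling implied by a `growth`-fold increase over
    `years`, assuming simple exponential growth."""
    return years * math.log(2) / math.log(growth)

# $1000-laptop FP32 throughput from the post: 1 TFLOPS (2013) -> 10 TFLOPS (2023)
actual = doubling_time(10, 10.0)

# Shortfall versus the quoted 20-petaflops prediction (20,000 TFLOPS vs 10 TFLOPS)
shortfall = 20_000 / 10

print(round(actual, 1), shortfall)  # ~3.0 years per doubling, 2000x short
```

So the real trend works out to roughly one doubling every 3 years, which is in the same ballpark as the ~4-year figure in the signature below, while the prediction overshot actual 2023 hardware by about three orders of magnitude.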

Therefore, facts show that Technological Singularity is just fiction. A made-up idea. Good for books or movies. Not for real life.

For real life, I advocate and promote thinking, talking and writing about protopia, instead of dystopia or utopia (which is very common among singularitarians).
Global economy doubles in product every 15-20 years. Computer performance at a constant price doubles nowadays every 4 years on average. Livestock-as-food will globally stop being a thing by ~2050 (precision fermentation and more). Human stupidity, pride and depravity are the biggest problems of our world.
Post Reply