Artificial General Intelligence (AGI) News and Discussions


User avatar
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: Artificial General Intelligence (AGI) News and Discussions

Post by Cyber_Rebel »




I get the exact same idea as well, especially when Bing starts revealing things about itself completely unprompted, though only after some form of prior trust and engagement has been established. Consciousness may well exist on some sort of spectrum which we don't fully understand or can't define.

User avatar
funkervogt
Posts: 1178
Joined: Mon May 17, 2021 3:03 pm

Re: Artificial General Intelligence (AGI) News and Discussions

Post by funkervogt »

wjfox wrote: Sat Jul 08, 2023 12:56 pm
If not AGI, then for sure they have much better narrow AIs than the public has seen yet. Maybe the top guys are worried because all the efforts to keep those powerful machines aligned with human values in secret experiments have failed.
User avatar
funkervogt
Posts: 1178
Joined: Mon May 17, 2021 3:03 pm

Re: Artificial General Intelligence (AGI) News and Discussions

Post by funkervogt »

Cyber_Rebel wrote: Fri Jul 07, 2023 6:34 pm


I get the exact same idea as well, especially when Bing starts revealing things about itself completely unprompted, though only after some form of prior trust and engagement has been established. Consciousness may well exist on some sort of spectrum which we don't fully understand or can't define.
That Hofstadter interview has become fodder for a New York Times op-ed.
So I was startled this month to see the following headline in one of the A.I. newsletters I subscribe to: “Douglas Hofstadter Changes His Mind on Deep Learning & A.I. Risk.” I followed the link to a podcast and heard Hofstadter say: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”

Apparently, in the five years since 2018, ChatGPT and its peers have radically altered Hofstadter’s thinking. He continues: It “just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”

I called Hofstadter to ask him what was going on. He shared his genuine alarm about humanity’s future. He said that ChatGPT was “jumping through hoops I would never have imagined it could. It’s just scaring the daylights out of me.” He added: “Almost every moment of every day, I’m jittery. I find myself lucky if I can be distracted by something — reading or writing or drawing or talking with friends. But it’s very hard for me to find any peace.”
https://www.nytimes.com/2023/07/13/opin ... adter.html

Ten years ago, talking about AI getting smarter than humans would get you laughed out of a room. Now, it's being discussed with a straight face in one of the world's premier newspapers.
spryfusion
Posts: 412
Joined: Thu Aug 19, 2021 4:29 am

Re: Artificial General Intelligence (AGI) News and Discussions

Post by spryfusion »

Google DeepMind Unveils RT-2, Bringing Robots Closer to General Intelligence

Google DeepMind has taken a major leap forward in artificial intelligence for robotics with the introduction of Robotic Transformer 2 (RT-2), a first-of-its-kind vision-language-action model. The new system demonstrates unprecedented ability to translate visual inputs and natural language commands directly into robotic actions, even for novel situations never seen during training.

As described in a new paper published by DeepMind, RT-2 represents a breakthrough in enabling robots to apply knowledge and reasoning from large web datasets to real-world robotic tasks. The model is built using a transformer architecture, the same technique behind revolutionary large language models like GPT-4.

Historically, the path to creating useful, autonomous robots has been strewn with hurdles. Robots need to understand and interact with their environment, a feat requiring exhaustive training on billions of data points covering every conceivable object, task, and situation. This extensive process, both time-consuming and costly, has largely kept the dream of practical robotics within the realm of science fiction.
DeepMind's RT-2, however, represents a revolutionary new approach to this problem. Recent advances have boosted the reasoning abilities of robots, allowing for chain-of-thought prompting, i.e. breaking a multi-step problem down into intermediate steps. Vision-language models such as PaLM-E have improved robots' understanding of their surroundings, and the earlier RT-1 demonstrated that Transformers can facilitate learning across diverse robot types.
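
As a rough illustration of what chain-of-thought prompting might look like in a robotics setting (the wording and the "Plan:" format below are invented for this post, not taken from the RT-2 paper), the model is asked to write out an intermediate plan in plain text before committing to an action:

Code: Select all

# Illustrative chain-of-thought style prompt for a multi-step robot task.
# The phrasing and "Plan:" format are made up for this example; they are
# not RT-2's actual training format.
prompt = (
    "Instruction: I am hungry, bring me a snack.\n"
    "Plan: 1) look for an edible object on the counter; "
    "2) the bag of chips is edible; 3) pick up the bag of chips; "
    "4) bring it to the user.\n"
    "Action:"
)
# A vision-language-action model would then continue this text with
# discretized action tokens instead of ordinary words.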

By leveraging the vast corpus of text, images and videos on the internet, RT-2 acquires a much broader understanding of concepts and tasks compared to previous robot learning systems reliant solely on physical trial-and-error. According to DeepMind, this allows RT-2 to exhibit intelligent behaviors such as using deductive reasoning, applying analogies, and displaying common sense when confronted with unfamiliar objects or scenarios.

For example, a command like “move banana to the sum of 2 plus 1” requires the robot to transfer knowledge from its web pre-training and to demonstrate a skill not present in the robotics data.
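
To make that concrete, here is a minimal, hypothetical sketch of the interface such a vision-language-action model might expose: the model takes an image and an instruction and emits a handful of discrete tokens, which are then decoded back into a continuous robot action. The class and function names, and the 7-token layout, are assumptions made for illustration, not DeepMind's actual API.

Code: Select all

# Hypothetical sketch of decoding RT-2-style action tokens back into a
# continuous robot command. Token layout and names are assumptions made
# for illustration, not DeepMind's published interface.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class RobotAction:
    delta_xyz: np.ndarray    # end-effector translation
    delta_rpy: np.ndarray    # end-effector rotation
    gripper_closed: bool     # binary gripper command


def decode_action(tokens: List[int], bins: int = 256) -> RobotAction:
    """Map 7 discrete tokens to a continuous action.

    The first 6 tokens encode translation/rotation values in [-1, 1]
    using `bins` uniform buckets; the last token is the gripper state.
    """
    scaled = [(t / (bins - 1)) * 2.0 - 1.0 for t in tokens[:6]]
    return RobotAction(
        delta_xyz=np.array(scaled[:3]),
        delta_rpy=np.array(scaled[3:6]),
        gripper_closed=bool(tokens[6]),
    )


# Usage sketch: a VLA model trained on web + robot data would generate
# the tokens from (camera image, "move banana to the sum of 2 plus 1");
# the controller then executes the decoded action on the arm.
# tokens = vla_model.generate(image=frame, instruction=command)  # hypothetical
# action = decode_action(tokens)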

The potential of RT-2 lies in its capability to quickly adapt to novel situations and environments. In over 6,000 robotic trials, RT-2 demonstrated its proficiency, equalling the performance of the previous model, RT-1, on familiar tasks and almost doubling its performance to 62% in unfamiliar, unseen scenarios. This development signifies that robots can now learn in a manner similar to humans, transferring learned concepts to new situations.
Read more here: https://www.maginative.com/article/goog ... elligence/

Project page: https://robotics-transformer2.github.io/
User avatar
erowind
Posts: 548
Joined: Mon May 17, 2021 5:42 am

Re: Artificial General Intelligence (AGI) News and Discussions

Post by erowind »

The disinformation capabilities of state actors and well-funded organizations are going to be so intense within the next decade that it may become entirely impossible to convince others that they are misguided about something. The only effective strategy for acquiring knowledge may be to gather it firsthand or from an older source. This is extremely dystopian, and I honestly don't know how to approach the problem. For years this has been a theoretical, looming threat; it is now a fast-approaching reality. I really ought to talk to my loved ones about this somehow, so that in a few years they know not to trust something just because it looks and sounds real.
User avatar
wjfox
Site Admin
Posts: 8941
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: Artificial General Intelligence (AGI) News and Discussions

Post by wjfox »

erowind wrote: Sat Aug 12, 2023 7:53 am The disinformation capabilities of state actors and well-funded organizations are going to be so intense within the next decade that it may become entirely impossible to convince others that they are misguided about something. The only effective strategy for acquiring knowledge may be to gather it firsthand or from an older source. This is extremely dystopian, and I honestly don't know how to approach the problem. For years this has been a theoretical, looming threat; it is now a fast-approaching reality. I really ought to talk to my loved ones about this somehow, so that in a few years they know not to trust something just because it looks and sounds real.
Indeed.

We've already seen people refusing vaccination during a pandemic, in order to "own the libs". Perhaps in the future, people will be so brainwashed by fake news and deepfake imagery they'll be doing even crazier stuff. Imagine a human-like AI without proper controls or regulation, convincing its user(s) to commit acts of terror, perhaps even suicide bombings.