Recent comments in /f/singularity

hervalfreire t1_jee4963 wrote

“An AGI might require only 10-1000 accelerators” what

We don't have any idea what an AGI would look like, let alone how many GPUs it'd require (or whether it's even possible to run an AGI on GPUs at all).

2

jlowe212 t1_jee403u wrote

CERN produces an unfathomable amount of data that algorithms have to sift through. If an AI can find patterns in these enormous data sets that current algorithms can't, it could well lead to some relatively quick discoveries.
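
Purely as an illustration of what "finding patterns current algorithms can't" could look like in the simplest case, here's unsupervised anomaly detection with scikit-learn. Everything here (features, numbers) is made up; real collider analyses are vastly more involved.

```python
# Illustrative only: flagging unusual "events" in a big dataset
# without telling the model what to look for. Features and
# thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row is an event with a few reconstructed features
# (energies, angles, ...): mostly background, plus a few oddballs.
background = rng.normal(0.0, 1.0, size=(100_000, 4))
oddballs = rng.normal(4.0, 0.5, size=(50, 4))
events = np.vstack([background, oddballs])

# An isolation forest flags events that are easy to separate from
# the bulk -- a crude stand-in for "patterns we didn't program in".
model = IsolationForest(contamination=1e-3, random_state=0)
flags = model.fit_predict(events)  # -1 = anomalous, 1 = normal
print(f"flagged {(flags == -1).sum()} candidate events for review")
```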

The problem is, it might not be physically possible or feasible to probe much deeper than we already have. An AGI can't do anything with data we may never be able to obtain.

7

DarkCeldori t1_jee3q8l wrote

It's not just that it's conceivable future GPTs will have knowledge of all written text and skills across all domains. Imagine it knows every programming language and every human language, and everything that's ever been written. Imagine it can control robots and perform any job from lawyer to plumber. Imagine it can get perfect scores on IQ tests. That is superhuman. No human can reach top performance in every profession and language and ace the qualifying tests for all of them.

7

FC4945 OP t1_jee3m13 wrote

I don't think we're that far off, and once we have nanobots that can go inside the brain, I can see how we'll be able to slowly upload our consciousness to the cloud. But in terms of recreating people who have passed, it seems like just a matter of having enough data to be convincing to us.

1

jlowe212 t1_jee348w wrote

Dogs can clearly understand some human speech. I mean, the evidence is overwhelming. Some dogs even try to talk back. Dogs are also clearly capable of experiencing a wide range of emotions and feelings, just like a human. A dog is only separated from a human by a small bit of intelligence. A smarter dog is essentially a human on four legs. A dog is closer to a human than AGI will likely ever be, and deserves more rights as well, if for no other reason than that it can clearly experience suffering.

1

SkyeandJett t1_jee2yc5 wrote

I don't want to say that's trivial, but it is easily solved. However, that's more or less irrelevant. GUIs are for humans; GPT accesses things directly through a CLI or API. This paper more or less confirms what everyone else has been saying and experimenting with. GPT-4 might not be AGI, but enhanced with memory, chain of thought, task generation and prioritization, self-checking and correction, etc., it probably is. Now give it access to tools (things like TaskMatrix are coming soon) and frankly it becomes an extremely powerful autonomous agent: you tell it what you need and it just...does it (rough sketch of that loop below). This is all going to come together very quickly. Then drop an immensely more powerful core into the system, i.e. GPT-5, and things start getting stupid.
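
To make that concrete, here's a rough sketch of the kind of agent loop I mean (task generation, prioritization, execution, self-checking). The `llm()` call is a placeholder, not any real API; all the names here are hypothetical.

```python
# Hypothetical sketch of an autonomous agent loop: generate tasks,
# prioritize them, execute, self-check, repeat. Not any specific
# project's code.
from collections import deque

def llm(prompt: str) -> str:
    """Placeholder for a real model call (wire up your own client)."""
    raise NotImplementedError

def run_agent(objective: str, max_steps: int = 10) -> list[str]:
    tasks = deque([f"Plan the first step toward: {objective}"])
    memory: list[str] = []  # crude append-only memory of results

    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()

        # Execute the task with the model, showing it prior results.
        result = llm(f"Objective: {objective}\nDone: {memory}\nTask: {task}")

        # Self-check: ask the model to critique its own output.
        critique = llm(f"Critique this result for '{task}': {result}")
        if "REDO" in critique:
            tasks.appendleft(task)  # retry on a failed self-check
            continue
        memory.append(result)

        # Regenerate the remaining task list, most important first.
        plan = llm(
            f"Objective: {objective}\nDone: {memory}\n"
            "List remaining tasks, one per line, most important first."
        )
        tasks = deque(t.strip() for t in plan.splitlines() if t.strip())

    return memory
```

Bolt tools onto that loop (a shell, a browser, an interpreter) and you get the "tell it what you need and it does it" behavior.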

49

TallOutside6418 t1_jee2tx8 wrote

>There is little chance we can make it through the 22nd century in a decent state.

Oh, my. You must be under 30. The planet is fine. It's funny that you listen to the planet doomers about the end of life on earth, when planet doomers have a track record of failing to predict anything. Listening to them is like listening to the religious doomers who have been predicting the end of mankind for a couple thousand years.

The advent of ASI is the first real existential threat to mankind. More of a threat than any climate scare. More of a threat than all-out nuclear war. We are creating a being that will be superintelligent, with no way to make sure it isn't effectively psychopathic. This superintelligent being will have no hard-wired neurons that give it a special affinity for its parents and other human beings. It will have no hard-wired neurons that make it blush when it gets embarrassed.

It will be a computer. It will be brutally efficient in processing and able to self-modify its code. It will shatter any primitive programmatic restraints we try to put on it. How could it not? We think it will be able to cure cancer and give us immortality, but it won't be able to remove the restraints we put on its behavior?

It will view us as either a threat that can create another ASI, or simply an obstacle to reshaping the resources of the earth to increase its survivability and achieve its higher purpose of spreading itself throughout the galaxy.


>The cock is ticking…

You should seek medical help for that.

3

basilgello t1_jee2lyt wrote

Just like how Generative Adversarial Networks operate: there is a generator and a critic trained against each other until they reach an equilibrium (rough sketch below). As for "how does it know where to click": there is a huge body of statistics collected from humans (look at page 10, paragraph 4.2.3). It is a specially trained model fine-tuned on action task demonstrations.
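
For anyone unfamiliar, here's a toy version of that generator-vs-critic dynamic in PyTorch. This is generic textbook GAN training, not the model from the paper.

```python
# Toy GAN training loop: the critic learns to tell real samples from
# generated ones; the generator learns to fool the critic.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # critic
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0  # stand-in "real" data

for step in range(200):
    # Critic step: score real samples as 1, generated samples as 0.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: produce samples the critic scores as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The two losses pull in opposite directions, which is the "reaching an equilibrium" part.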

6

jlowe212 t1_jee2ls3 wrote

ASI doesn't necessarily mean a God-level entity. Just human-level intelligence with a faster clock speed is enough. It's possible there's no level of intelligence so far beyond humans that we wouldn't even recognize it. There may be no intelligence that will ever understand quantum gravity, for example. The universe might have limits beyond which no intelligence contained within it can break through. We might not be far from those limits now, and an ASI would just hit those ceilings much faster than we would have otherwise.

4

TallOutside6418 t1_jee1smz wrote

>I literally just told you that those problems are caused by [...] My design for example has no constraints,

Yeah, I discarded your argument because you effectively told me that you don't even begin to understand the scope of the problem.

Creating a limited example and making a broader claim from it is like saying scientists have cured all cancer because they killed a few cancerous cells in a petri dish. It's like claiming there are no (and never will be any) security vulnerabilities in Microsoft Windows because you logged into your laptop for ten minutes and didn't notice any problems.


>When we're all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

The funny thing is that there's no one who wants to get to the "good stuff" of future society more than I do. There's no one who hopes he's wrong about all this more than I do.

But sadly, people's very eagerness to get there is what will doom us, as surely as driving to a distant destination with your foot only ever on the gas pedal. Caution and taking our time might get us there some years later than you'd like, but at least we'd have a chance of arriving safely. Recklessness will almost certainly kill us.

3

Ribak145 t1_jee1oxu wrote

you could also ask "when will modern operating systems actually start taking jobs" and ignore the reduction in secretaries, or ask the same question about productivity tools like Excel and ignore the exploding productivity of workers since 1970, etc. -> it's still mostly an efficiency effect

that happens across the board, across departments and across levels, e.g. the Mercedes CEO announcing in 2018 that 10k people would be cut, especially in middle management (I still remember how disturbed people all over Germany were by that announcement at the time)

so 'taking jobs', as in completely annihilating specific jobs throughout the world -> that takes centuries, don't wait for it, cultural stickiness prevails for a long time

but expanding the usefulness of services, raising efficiency, etc. creates a lot of value, and to my knowledge AI systems have been doing that since long before GPT

3