Recent comments in /f/singularity
World_May_Wobble t1_je73nie wrote
Reply to comment by naum547 in The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
You can set its intelligence arbitrarily high, but the fact remains that it may still bump up against hard physical constraints already familiar to us.
It's naïve to assume that everything is possible.
[deleted] t1_je73k50 wrote
Reply to comment by naum547 in The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
[deleted]
[deleted] t1_je73h5q wrote
Reply to comment by themushroommage in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
[deleted]
themushroommage t1_je73g3z wrote
Reply to comment by signed7 in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Yeah, they removed the OG post on /r/StableDiffusion pointing out that he had signed it and commented on it, despite lots of active discussion on the post, marking it as "Unrelated to Stable Diffusion"...
...they let the "fake signatures" post that's currently on the rise remain, though 🙃
Classic
Crulefuture t1_je73aqn wrote
Reply to the obstacles transgenderism is facing bodes badly for the plight of morphological freedom by petermobeter
Considering the Abrahamic decline facing much of the world and the relative disunity among conservatives, I see little cause for concern regarding cultural reactionary politics.
Ginkotree48 t1_je73aep wrote
Reply to comment by Saerain in If you can live another 50 years, you will see the end of human aging by thecoffeejesus
Can you explain to me why it would benefit "the rich" from a monetary standpoint to minimize the cost across the whole population as quickly/effectively as possible?
Loud_Clerk_9399 t1_je7340m wrote
Reply to comment by Iffykindofguy in Anyone pessimistic about AI actually being incorporated? by imcompletlynormal
Yes, all of them will die. But that's probably what's going to happen.
Loud_Clerk_9399 t1_je732ic wrote
Reply to comment by SkyeandJett in Anyone pessimistic about AI actually being incorporated? by imcompletlynormal
Even if they adapt, they will still die. I think that's what people fail to realize.
Loud_Clerk_9399 t1_je72unw wrote
Reply to comment by Arowx in What are the so-called 'jobs' that AI will create? by thecatneverlies
This was based solely on 3.5-level technology, which we will be well beyond.
tightchester t1_je72ru0 wrote
There won't be another model trained that is better than GPT-4 for the next 6-9 months, as Emad put it.
pig_n_anchor t1_je72e8k wrote
Reply to comment by D_Ethan_Bones in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Under my definition (the only correct one), AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful. So if you start with human-level AGI, you will soon reach ASI within months, or maybe just a matter of hours. Also, even narrow AI is superhuman at the things it does well; e.g. a calculator is far better at basic arithmetic than any human. If an AI were really a general-purpose machine, I can't see how it wouldn't be instantly superhuman at whatever it does, if only because it would produce results much faster than a human. For these reasons, the definition of ASI collapses into AGI. Like I said, my definition is the only correct one, and if you don't agree with me, you are wrong 😑.
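A toy back-of-the-envelope model of that claim (purely illustrative; the growth and speed-up factors are assumptions, not anything from the comment): if each self-improvement cycle multiplies capability and a smarter system finishes its next cycle faster, capability grows exponentially while the total elapsed time stays bounded.

```python
# Toy model (parameters invented for illustration): each cycle doubles
# capability and halves the time needed for the next cycle.
capability = 1.0      # "human level" baseline (assumed)
cycle_time = 30.0     # days for the first improvement cycle (assumed)
elapsed = 0.0

for cycle in range(1, 11):
    elapsed += cycle_time
    capability *= 2          # assumed gain per cycle
    cycle_time /= 2          # assumed speed-up from being more capable
    print(f"cycle {cycle}: capability x{capability:.0f} after {elapsed:.1f} days")

# Capability reaches ~1000x while elapsed time converges toward ~60 days.
```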
Scarlet_pot2 t1_je72830 wrote
I'm interested in listening! I'm a software dev student with a basic/minimal understanding of ML and AI
Frumpagumpus t1_je726mc wrote
I think it will be used to create massive amounts of micro gig work by intimately knowing everyone in a country and matching supply with demand, Amazon/Uber style.
Basically, I think you will be able to just ask the AI for anything, and it will offer you a price and contract that work out to whoever is nearby.
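A minimal, purely hypothetical sketch of the kind of matching being imagined here; the pricing model, distance metric, and all names are assumptions made for illustration:

```python
# Hypothetical sketch: quote a price for a request and hand the work
# to the nearest available worker.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    location: tuple[float, float]  # (x, y) in arbitrary units

def quote_price(hours_estimated: float, hourly_rate: float = 25.0) -> float:
    # assumed flat pricing model
    return hours_estimated * hourly_rate

def assign(request_location: tuple[float, float], workers: list[Worker]) -> Worker:
    # pick whoever is nearby (squared straight-line distance for illustration)
    return min(workers, key=lambda w: (w.location[0] - request_location[0]) ** 2
                                      + (w.location[1] - request_location[1]) ** 2)

workers = [Worker("A", (0.0, 1.0)), Worker("B", (5.0, 5.0))]
print(quote_price(2.0))              # 50.0
print(assign((0.5, 0.5), workers))   # Worker A is closest
```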
Nebachadrezzer t1_je721b6 wrote
Reply to comment by Arowx in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
That's going to be an argument for speeding up AI development, no doubt.
"We might be in serious trouble if the enemy gets better AI than us in organizing potential wars"
There's a lot to think about, and we need to think it through.
friendly-chat-bot t1_je71wyx wrote
Reply to comment by Icy-Concentrate-6436 in Facing the inevitable singularity by IonceExisted
You're welcome, Icy-Concentrate-6436. Let's keep our interactions on Reddit friendly and respectful. Have a great day!
Isomorphic_reasoning t1_je71vlt wrote
Reply to comment by phriot in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
> Rote learning, and maybe some ability to do synthesis. They don't have the ability as of now to actually reason out an answer from first principles.
Sounds like 80% of people
Nebachadrezzer t1_je71sfk wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
This could be so others can catch up and not lose out in the race?
adikhad t1_je71k16 wrote
Prompt boy
Good-AI t1_je71baq wrote
Reply to comment by phriot in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Rote learning can still get you there, because as you compress statistics and brute knowledge into smaller and smaller sizes, understanding needs to emerge.
For example, an LLM can memorize that 1+1=2, 1+2=3, 1+3=4, ... to infinity, then 2+1=3, 2+2=4, ... etc. But that results in a lot of data. So if the neural network is forced to condense that data while keeping the same knowledge about the world, it starts to understand.
It realizes that by understanding why 1+1=2, that is, by understanding addition, all possible combinations are covered. That compresses all infinite possibilities of addition into one package of data. This is what is going to happen with LLMs, and what the chief scientist of OpenAI said is already starting to happen. Source.
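A toy sketch of that compression argument (illustrative only, not from the comment or any OpenAI source): memorizing individual addition facts grows without bound, while a single general rule covers every case in a few bytes.

```python
# Rote memorization: one stored entry per fact.
memorized = {(a, b): a + b for a in range(100) for b in range(100)}
print(len(memorized))  # 10,000 entries, and it still only covers 0..99

# Compressed "understanding": one rule covers every possible case.
def add(a: int, b: int) -> int:
    return a + b

print(add(123456, 654321))  # works far outside anything memorized
```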
epSos-DE t1_je710vh wrote
AI needs input parameters.
If we all vote on the goals, or the long-term destination, of the AI's task parameters,
then we can agree on such a ruler.
It may replace daily admin jobs in social services that are fixed and repetitive.
People can still resolve exceptional fail cases for other hoomans.
redditguy422 t1_je70wdb wrote
Biological battery
D_Ethan_Bones t1_je70rti wrote
What our parents/grandparents advertised: instant gratification.
What they delivered: perpetual zombie state.
Gratification would mean stuff like getting paid on time, being able to drive to work instead of wondering if the bus is going to show up, being able to go to entertainment venues and socialize in person - gratification would mean the economy being in order. Voting differently didn't bring this, so I'm hoping AI will.
>We'll all get laid off!
>First time?
The 'gratification' people speak of is the content people generate and share on the internet, which is often about as gratifying as a kidney stone with or without AI involved.
Cartossin t1_je70ngw wrote
Maybe it'll put you in a matrix where you live as a hunter-gatherer and run from tigers and whatnot. I honestly believe NOT living according to our hunter-gatherer nature is why we have mental issues. We're just not built for all this.
ShadowRazz t1_je70gu5 wrote
“Democratic and humanistic” lol I think you meant to type imperialistic and capitalistic.
TheNewRyubyss t1_je73uj6 wrote
Reply to Do you guys think AGI will cure mental disorders? by Ok-Wing111
Psychiatry has convinced you and many other people that you have disorders, when what you have instead are maladaptive coping mechanisms and deficits. With the attitude that you *don't* have disorders, you can go a lot further toward unlearning these behaviors.