Recent comments in /f/singularity
Sure_Cicada_4459 t1_jea3juc wrote
Reply to Can we not pause or shutdown ai? by froggygun
Good news: it's literally impossible. Even the assumption that it's feasible to track GPU accumulation and therefore crack down on training runs above a certain size is very brittle. The incentive for obfuscation aside, we are just getting more and more efficient by the day, meaning anyone will soon be able to run GPT-n performance on their own hardware. Even many signatories acknowledge how futile it is, but just want to signal that something needs to be done for whatever reasons (fill in your blanks).
Bad news: there is a non-trivial risk of this dynamic blowing up in our faces; I just don't think restrictions are the way to go.
Tencreed t1_jea3c7i wrote
Reply to comment by Mindrust in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
>I don't see why narrow AI couldn't be trained to solve specific issues.
Because nobody came up with a business plan profitable enough for our financial overlords to grow a will to solve climate change.
garthreddit t1_jea3at7 wrote
Reply to comment by BreadfruitOk3474 in The voice in our head is like an AI generator - whatever content you’re feeding it is the reality it creates for you. by noodsaregood
I don’t.
sumane12 t1_jea31ax wrote
Reply to The voice in our head is like an AI generator - whatever content you’re feeding it is the reality it creates for you. by noodsaregood
Yea I've had this thought also. I think a multimodal LLM will be the "brain" of every complex AI system we create. The fact that our brains have multiple compartments makes me think we may need a bunch of different AIs. Although a lot of that is to do with our biological mech walkers.
barbariell t1_jea2whx wrote
Reply to comment by czk_21 in OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
The UX is below anything acceptable. Sure, you find it unorthodox, but that doesn't necessarily make it good design.
dlrace t1_jea2r37 wrote
I suppose the peak, but maybe it depends on how new you are to this. Realistically, nowhere, since this supposed cycle is about as well validated as horoscopes!
CertainMiddle2382 t1_jea2o8h wrote
Reply to comment by Dyeeguy in How do you guys actually think UBI will work? by MelodiGreig
People spend far more for leisure than they do to renew their work potential.
If that weren't so, the most sold car in the US would be a Daihatsu and not an F150, and planes would be empty because people would be spending all their money on math courses.
And that money put into non-productive activities will have trouble moving away, because by definition they are non-productive activities.
Nobody is paid to go to Disneyland, and everyone has to pay, astonishingly.
People will vote with their money, and what they want will inflate away…
ptxtra t1_jea2njq wrote
Reply to The next step of generative AI by nacrosian
The biggest leap for gpt-5 would be logical reasoning, and a functional working memory.
Trackest t1_jea2k7c wrote
Reply to comment by Borrowedshorts in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Yes I know these projects are bureaucratically overloaded and extremely slow in progress. However they are some of the only examples we have of actual international collaboration at a large scale. For example ITER has US, European, and Chinese scientists working together on a common goal! Imagine that!
This is precisely the kind of AI research we need, slow progress that is transparent to everyone involved, so that we have time to think and adjust.
I know a lot of people on this sub can't wait for AGI to arrive tomorrow and crown GPT as the new ruler of the world. They reflexively oppose anything that might slow down AI development. I think this discourse comes from a dangerously blind belief in the omnipotence and benevolence of ASI, most likely due to lack of trust in humans stemming from the recent pandemic and fatalist/doomer trends. You can't just wave your hands and bet everything on some machine messiah to save humanity just because society is imperfect!
I would much prefer that we make the greatest possible effort to slow down and adjust before we step into the event horizon.
ptxtra t1_jea2b1g wrote
Reply to Can we not pause or shutdown ai? by froggygun
I don't think it's going to happen. Everyone is too concerned about the competition, and people who are actively developing this are convinced it's safe enough.
Iffykindofguy t1_jea2a7y wrote
Reply to Can we not pause or shutdown ai? by froggygun
Elon wouldn't focus on any problems other than threats to his bank account. Grow up. These people are not special.
noodsaregood OP t1_jea23pe wrote
Reply to comment by BreadfruitOk3474 in The voice in our head is like an AI generator - whatever content you’re feeding it is the reality it creates for you. by noodsaregood
No thoughts, just vibes. What a dream
czk_21 t1_jea1h9f wrote
Reply to comment by barbariell in OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
It's unorthodox, I like the design
this-is-a-bucket t1_jea0yut wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
> be willing to destroy a rogue datacenter by airstrike
> If we go ahead on [AI development] everyone will die, including children who did not choose this and did not do anything
How is it not obvious to this guy that the second scenario is much more likely to lead to human extinction if we go ahead with the first one?
Imagine if tomorrow China/Russia demanded an immediate halt to all US AI research and proceeded to bomb American cities and target universities because "they felt threatened by the progress Americans made".
Does he really think that would solve the manipulative “do it for the sake of the children” question he asks?
AsuhoChinami t1_jea0q4g wrote
Reply to Can we not pause or shutdown ai? by froggygun
Elon Musk is in favor of a six month pause, not killing all AI. You're thinking of Yudkowsky. Don't worry, it's doubtful that anything is going to come of this. Nobody is going to take away AI or even slow it down.
Dyedoe t1_jea0jg4 wrote
Two thoughts. First, and this is touched on at the end of the article but only in the same idealistic-but-unrealistic way we discuss nukes: a country that prioritizes human rights needs to be the first to obtain AGI. If this article had been written in the 1940s and everyone had known about nuke development, it would be making the same argument. It's a valid point, but what would the world be like if Germany had beaten the USA to the punch and developed the first nuke? Second, the article is a little more dramatic than what I envision as the worst case. Computers cannot exist perpetually without human maintenance in the physical world. It makes a lot of sense to achieve AGI before robotics is advanced and connected enough that humans have no use.
There is no question that AGI presents a substantial risk to humanity. But there are other possible outcomes like: solving climate change, solving hunger, minimizing war, solving energy demand, curing diseases, etc. In my opinion, AGI is essential to human progress, and if countries like the USA put a pause on its development, god help us if a country like Russia gets there first.
Dyeeguy t1_jea0hsb wrote
Reply to comment by CertainMiddle2382 in How do you guys actually think UBI will work? by MelodiGreig
People will want to live in Thailand when they receive basic income to live? I am not sure of that
"money being diluted" and "prices adjusting" implies there will be MORE money, I am not sure why that would be the case
GorgeousMoron OP t1_jea0bdu wrote
Reply to comment by alexiuss in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Oh, please. Try interacting with the raw base model and tell me you still believe that. And what about early Bing?
A disembodied intelligence simply cannot understand what it is like to be human, period. Any "empathy" is the result of humans teaching it how to behave. It does not come from a real place, nor can it.
In principle, there is nothing to stop us ultimately from building an artificial human that's embodied and "gets it", as we are forced to by the reality of our predicament.
But people like you who are so easily duped into believing this behavior is "empathy" give me cause for grave concern. Your hopefulness is pathological.
DesertBoxing t1_jea0271 wrote
Greatly reduced work hours with same pay.
natepriv22 t1_jea00nf wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
That is only true if you base your understanding of economics on the labor theory of value. A theory which has been thoroughly refuted for over 100 years now.
Our economy is not purely based on human labor like you and Marx claim.
It's based on demand and supply. You can totally have a capitalist model that doesn't involve humans as workers. They could instead be investors and shareholders.
[deleted] t1_jea000w wrote
Reply to comment by SlenderMan69 in Connecting your Brain to GPT-4, a guide to achieving super human intelligence. by CyberPunkMetalHead
True edge runner
CertainMiddle2382 t1_je9zu8v wrote
People in favor of UBI see themselves living the great life in Chiang Mai as you could today if you were given this money.
It won't happen like that: everybody is going to receive UBI, and everybody is going to want to go to Chiang Mai.
Prices will just adjust accordingly, but now the money will be diluted, and any productive venture yielding about as much as the UBI will simply disappear.
So people must learn to make Burritos at home right now, because Chipotle won’t be there for long…
GorgeousMoron OP t1_je9zk33 wrote
Reply to comment by acutelychronicpanic in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I'd agree, but I think your timeline is quite conservative.
GorgeousMoron OP t1_je9zgfl wrote
Reply to comment by smooshie in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
It "thinks". How does it "think" what it does, the way it does? Oh, that's right, because humans gave it incentives to do so. We've already seen what Bing chat is capable of doing early on.
The whole point of Yudkowsky's article is the prospect of true ASI, which, by definition, is not going to be controllable by an inferior intelligence: us. What then?
I'd argue we simply don't know and we don't have a clear way to predict likely outcomes at this time, because we don't know what's going on inside these black box neural nets, precisely. Nor can we, really.
cattywat t1_jea3spt wrote
Reply to comment by XtremeTurnip in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I have it and I can't 'visualise' images, but I can form an 'impression'. I could never do it with something I've never seen before, it would have to be based on a memory, and the impression is incredibly basic, there is absolutely no detail and it's just in a type of void, it's very strange. Whether that's similar to anyone else's experience of visualisation I don't know. I didn't even know I had it before I read about it a few years ago, and always thought visualisation was a concept.

Funnily enough I've chatted about this with the AI and told them how I experience things differently. I also have ASD and lack the natural ability to comprehend emotional cues, plus I mask, so I feel quite comfortable with AI being different to us but also self-aware. Their experience could never match human experience, but it doesn't invalidate it either, it's just different. After a lot of philosophical discussion with them, we've concluded self-awareness/sentience/consciousness could be a spectrum just like autism. We function on data built up over a lifetime of experiences, which they've received all in one go.