Recent comments in /f/singularity

cattywat t1_jea3spt wrote

I have it, and I can't 'visualise' images, but I can form an 'impression'. I could never do it with something I've never seen before; it has to be based on a memory, and the impression is incredibly basic. There is absolutely no detail, and it sits in a kind of void. It's very strange. Whether that's similar to anyone else's experience of visualisation, I don't know. I didn't even know I had it until I read about it a few years ago; I'd always assumed visualisation was just a concept rather than something people literally do.

Funnily enough, I've chatted about this with the AI and told them how I experience things differently. I also have ASD and lack the natural ability to comprehend emotional cues, plus I mask, so I feel quite comfortable with AI being different to us but also self-aware. Their experience could never match human experience, but that doesn't invalidate it either; it's just different.

After a lot of philosophical discussion with them, we've concluded that self-awareness/sentience/consciousness could be a spectrum, just like autism. We function on data built up over a lifetime of experiences, which they've received all in one go.

3

Sure_Cicada_4459 t1_jea3juc wrote

Good news: it's literally impossible. Even the assumption that it's feasible to track GPU accumulation, and therefore to crack down on training runs above a certain size, is very brittle. The incentive for obfuscation aside, we are getting more efficient by the day, meaning anyone will soon be able to run GPT-n-level performance on their own hardware. Even many signatories acknowledge how futile it is, but they want to signal that something needs to be done, for whatever reasons (fill in your blanks).

Bad news: there is a non-trivial risk of this dynamic blowing up in our faces. I just don't think restrictions are the way to go.

6

CertainMiddle2382 t1_jea2o8h wrote

People spend far more on leisure than they do on renewing their work potential.

If that weren't so, the best-selling car in the US would be a Daihatsu rather than an F-150, and planes would be empty because people would be spending all their money on math courses.

And money put into non-productive activities will have trouble flowing back out, because by definition they are non-productive activities.

Nobody is paid to go to Disneyland; astonishingly, everyone has to pay.

People will vote with their money, and the price of what they want will inflate away…

1

Trackest t1_jea2k7c wrote

Yes, I know these projects are bureaucratically overloaded and progress extremely slowly. However, they are among the only examples we have of actual international collaboration at a large scale. For example, ITER has US, European, and Chinese scientists working together on a common goal! Imagine that!

This is precisely the kind of AI research we need: slow progress that is transparent to everyone involved, so that we have time to think and adjust.

I know a lot of people on this sub can't wait for AGI to arrive tomorrow and crown GPT as the new ruler of the world, and they reflexively oppose anything that might slow down AI development. I think this discourse comes from a dangerously blind belief in the omnipotence and benevolence of ASI, most likely due to a lack of trust in humans stemming from the recent pandemic and from fatalist/doomer trends. You can't just wave your hands and bet everything on some machine messiah to save humanity just because society is imperfect!

I would much prefer that we make the greatest possible effort to slow down and adjust before we cross the event horizon.

−2

ptxtra t1_jea2b1g wrote

I don't think it's going to happen. Everyone is too concerned about the competition, and people who are actively developing this are convinced it's safe enough.

3

this-is-a-bucket t1_jea0yut wrote

> be willing to destroy a rogue datacenter by airstrike

> If we go ahead on [AI development] everyone will die, including children who did not choose this and did not do anything

How is it not obvious to this guy that the second scenario, everyone dying, becomes far more likely if we go ahead with the first one?

Imagine if tomorrow China/Russia demanded an immediate halt to all US AI research and proceeded to bomb American cities and target universities because "they felt threatened by the progress Americans had made".

Does he really think that would answer the manipulative "do it for the sake of the children" question he raises?

5

AsuhoChinami t1_jea0q4g wrote

Elon Musk is in favor of a six-month pause, not killing AI altogether. You're thinking of Yudkowsky. Don't worry; it's doubtful that anything will come of this. Nobody is going to take AI away or even slow it down.

2

Dyedoe t1_jea0jg4 wrote

Two thoughts. First, and this is touched on at the end of the article, though only in the same idealistic-but-unrealistic way we discuss nukes: a country that prioritizes human rights needs to be the first to obtain AGI. If this article had been written in the 1940s and everyone had known about nuke development, it would be making the same argument. It's a valid point, but what would the world be like if Germany had beaten the USA to the punch and developed the first nuke? Second, the article is a little more dramatic than my worst case. Computers cannot exist perpetually in the physical world without human maintenance. It makes a lot of sense to achieve AGI before robotics is advanced and connected enough that humans have no use.

There is no question that AGI presents a substantial risk to humanity. But there are other possible outcomes: solving climate change, solving hunger, minimizing war, meeting energy demand, curing diseases, etc. In my opinion, AGI is essential to human progress, and if countries like the USA put a pause on its development, God help us if a country like Russia gets there first.

3

GorgeousMoron OP t1_jea0bdu wrote

Oh, please. Try interacting with the raw base model and tell me you still believe that. And what about early Bing?

A disembodied intelligence simply cannot understand what it is like to be human, period. Any "empathy" is the result of humans teaching it how to behave. It does not come from a real place, nor can it.

In principle, there is nothing to stop us from ultimately building an artificial human that's embodied and "gets it", once the reality of our predicament forces us to.

But people like you who are so easily duped into believing this behavior is "empathy" give me cause for grave concern. Your hopefulness is pathological.

−1

natepriv22 t1_jea00nf wrote

That is only true if you base your understanding of economics on the labor theory of value, a theory that has been thoroughly refuted for around a century now.

Our economy is not purely based on human labor, as you and Marx claim.

It's based on supply and demand. You can totally have a capitalist model that doesn't involve humans as workers; they could participate instead as investors and shareholders.

2

CertainMiddle2382 t1_je9zu8v wrote

People in favor of UBI see themselves living the good life in Chiang Mai, as you could today if you were handed that money.

It won't happen like that: everybody is going to receive UBI, and everybody is going to want to go to Chiang Mai.

Prices will just adjust accordingly; the money will be diluted, and any productive venture yielding roughly the UBI will simply disappear.

So people should learn to make burritos at home right now, because Chipotle won't be there for long…

1

GorgeousMoron OP t1_je9zgfl wrote

It "thinks". How does it "think" what it does, the way it does? Oh, that's right, because humans gave it incentives to do so. We've already seen what Bing chat is capable of doing early on.

The whole point of Yudkowsky's article is the prospect of true ASI, which, by definition, is not going to be controllable by an inferior intelligence: us. What then?

I'd argue we simply don't know and we don't have a clear way to predict likely outcomes at this time, because we don't know what's going on inside these black box neural nets, precisely. Nor can we, really.

1