Recent comments in /f/singularity

0002millertime t1_jebtyak wrote

My work basically fired 90% of the marketing team. When they fire the CEO, then we'll know it's getting serious.

Microsoft fired their whole AI ethics department. If I were an AI asked to cut costs, that's literally the first thing I'd suggest doing.

35

chlebseby t1_jebtb7g wrote

My parents are in denial about big changes coming soon; they say things like "50 years" or "nobody will let that happen," etc.

Though I understand them. They lived through previous revolutions like computers, the internet, and mobile phones, yet those haven't completely changed their lives. So I just tell them to be aware of things like deepfakes.

Friends my age either just observe or are actively interested in it. I know technical people, so that's to be expected.

27

electroninmotion t1_jebsirg wrote

I suffered from it for about 6-7 years before I decided I couldn’t keep living with it. I tried benzos, beta blockers, psychologists, therapists, hypnotherapy, alcohol, and everything else. Exposure therapy is literally the only solution: put yourself in the most panic-inducing situations multiple times a day and don’t leave the situation under any circumstances. I went from being nearly agoraphobic to giving presentations at Toastmasters and at work.

1

Smellz_Of_Elderberry t1_jebrrey wrote

>However since AI is a major existential risk I believe moving to a strict and controlled progress like what we see with nuclear fusion in ITER and theoretical physics in CERN is the best model for AI research.

This is going to lead to us waiting decades for progress and testing. Look at drug development: it takes decades of clinical trials before a drug even starts becoming available, and then it's prohibitively expensive. We might have cured cancer already if we didn't have so many barriers in the way.

>Open-sourcing research will greatly increase risk of mis-aligned models landing in the wrong hands or having nations continue research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large scale AI research outside of that organization. This may be a deal-breaker.

So you want an unelected international body to hold the keys to the most powerful technology in existence? That sounds like a terrible idea. Open source is the only solution to alignment, because it makes the power available to all, allowing all the disparate and opposing ideological groups to align AI to themselves in their own custom way.

All an international group will do is align AI in a way that maximizes the benefit of all parties involved, parties that really have no incentive to actually care about you or me.

3

HeBoughtALot t1_jebrefx wrote

When I think about points of failure, I immediately think of the brittleness of a system, but in this context it can also mean too much power in too few hands, which is another type of failure.

2

RiotNrrd2001 t1_jebrdzp wrote

If AI threatened ANY group other than the political and business leader class, the political and business leader class would not give one flying fuck. They are only loudly concerned because... this will affect THEM. That's a whole different kettle of fish in their eyes. The poors have some concerns? Whatevs. Wait, this will affect US? NOW we need to be careful, circumspect, conservative, move slowly, don't rock the boat, because if money becomes obsolete, who are we going to hire as security guards? And with what? Money?

15