Recent comments in /f/singularity

Frumpagumpus t1_jecynxk wrote

He realizes AI can think so fast, but apparently hasn't thought about how software forks all the time and shuts processes down willy-nilly (he thinks death is silly and stupid, but software does it all the time)

or other mundane details, like what it would mean to mentally copy-paste parts of your brain or thoughts, or mutexes, or encryption

4

agorathird t1_jecybw0 wrote

>who published at the conferences NeurIPS or ICML in 2021.

Who? Conferences are a meme. Also, they still don't know about the internal workings of any of the companies that matter.

>I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.

Already addressed this with another commenter: no matter how capable they are, it freaks people out less if they appear concerned.

One of the participants is legit just a PhD student; I'm sorry, I don't find your study credible.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]

2

ItIsIThePope t1_jecxy9d wrote

Whether AI can help with mental disorders is a question of whether it can figure out consciousness, or at least how much of it it can presently understand. Much of the human mind is a great mystery. Just as our understanding of human biology and anatomy leads to advances in surgery, vaccines, rehabilitation, etc., a growing science of the human mind is how we come to understand the nature of psychological illness and eventually remedy it.

If, for example, AI discovered that mental illnesses are the result of physical malfunctions in the brain or its sub-organs, or of chemical imbalance, or even of a mismatch between our intelligence and our biological tendencies (also rooted in parts of the brain), then perhaps it could employ physically reconstructive solutions to help sufferers.

But if mental illness remains elusive and appears deeply rooted in, intertwined with, or emergent from consciousness itself, and AI struggles to understand the nature of consciousness, then it will have a very difficult time solving "conscious illnesses". Understanding the nature of a thing is the key to manipulating it.

The wild thing here is that when we make AGI or ASI, it might itself have mental illnesses; it is, after all, a thinking, possibly conscious being, and there is the possibility that it ends up suffering from the same things we do.

The bottom line is that actual AI and the human mind/intelligence are both subjects we understand so poorly that predicting how they will interact can feel like pure speculation.

That said, the two fields are deeply similar in nature (both concern consciousness and intelligence), so advances in one will inevitably lead to insight and progress in the other.

1

blueSGL t1_jecxney wrote

>a lot of them look like randoms so far.

...

>Population

>We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

I mean, just exactly who do you want to tell you these things? I can pull quotes from people at OpenAI saying they are worried about what might be coming in the future.

−1

Fantastic-Ad4559 t1_jecx0so wrote

As a Chinese person, I can tell you that the mainstream Chinese opinion on AI safety is that people don't care, as there is not much difference between serving as a tool for capitalists or as a flesh-and-blood battery for AI. The more discussed topic is why we can never create something like ChatGPT.

17

BackgroundResult OP t1_jecwmnf wrote

TikTok, along with other consumer media apps, is the channel most likely to be used strategically against the U.S. and its vulnerable population. ByteDance currently has 3 apps in the top 10 by app downloads.

TikTok's hypnotic allure creates the perfect conditions in the "user" for thought experiments by the CCP and the PLA, including altering sentiment around capitalism, democracy, and the United States itself.

−3

agorathird t1_jecwk6a wrote

What does "behind" mean? If it's not from someone who holistically knows the details of how each arm of the company is functioning, then they're still working with incomplete information. Letting everyone know your safety protocols is an easy way for them to be exploited.

My criteria for what counts as a 'leading artificial intelligence company' would be quite strict. If you're some random senior dev at Numenta, then I don't care. A lot of people who work around ML think themselves a lot more impactful and important than they actually are. (See: Eliezer Yudkowsky)

Edit: Starting to comb through the participants and a lot of them look like randoms so far.

This is more like taking random engineers (some just professors) who've worked on planes before (maybe) and asking them to judge specifications they're completely in the dark about. It could be the safest plane known to man.

Edit 2: Participant Jongheon Jeong is literally just a PhD student who appears to have a few citations to his name.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]

1

Federal_Two_1189 t1_jecwc3a wrote

"How difficult this all is"? All we need to do is get AI to the point where it makes better versions of itself. Researchers say that this is what AGI should be capable of doing.

Realistically, all we need to do is build AGI, which is within our grasp in maybe 10-30 years, and then everything from there is automated by AI.

Humans won't be the innovators anymore, we'll be on vacation.

1

BigMemeKing t1_jecw0np wrote

It's going to eliminate the need to learn any other language, should it all work properly. Your native language will be automatically translated into whatever language is needed.

You would have implants that translate speech in one language into your dominant language, or the language of your preference. Why would anyone need to learn English when machines will do the talking for everyone one day?

7

TheSecretAgenda t1_jecvpwi wrote

Even when the technology is ready, it will take time for businesses to adapt. There will be a wait-and-see trial period while most businesses watch how the technology works out for early adopters. Widespread adoption may only come once AI gives adopting businesses a significant competitive advantage.

1