Recent comments in /f/singularity
Scarlet_pot2 t1_jecynrd wrote
The individualist, capitalist way of life in the west is damaging mental health. Tiktok is just a symptom, not the cause.
DustinBrett t1_jecynj4 wrote
No need with a universal translator coming soon
Frumpagumpus t1_jecycfc wrote
Reply to comment by Unfrozen__Caveman in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
> Ultimately empathy has no concrete definition outside of cultural norms
theory of mind instead of empathy, then. the ability to model others' thought processes. extremely concrete (honestly you may have been confusing sympathy with empathy)
agorathird t1_jecybw0 wrote
Reply to comment by blueSGL in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
>who published at the conferences NeurIPS or ICML in 2021.
Who? Conferences are memes. Also, they still don't know about the internal workings of any companies that matter.
>I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.
Already addressed this with another commenter: no matter how capable they are, it freaks people out less if they appear concerned.
One of the participants is legit just a PhD student; I'm sorry, I don't find your study credible.
[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]
DustinBrett t1_jecy91w wrote
The layoff waves are part of it. Companies won't admit it.
gunbladezero t1_jecy6pu wrote
Reply to comment by MassiveWasabi in GPT characters in games by YearZero
I could do that in Perfect Dark 20 years ago?
ItIsIThePope t1_jecxy9d wrote
Whether AI can help with mental disorders is a question of whether it can figure out consciousness, or at least how much of it it can presently understand. Much of the human mind is a great mystery; just as our understanding of human biology and anatomy leads to advancements in surgery, vaccines, rehabilitation, etc., a growing science of the human mind is how we come to understand the nature of psychological illness and eventually remedy it.
If, for example, AI were to discover that mental illnesses are the result of physical malfunctions in the brain or its sub-organs, of chemical imbalance, or even of a mismatch between our intelligence and our biological tendencies (also rooted in parts of the brain), then perhaps it could employ physically reconstructive solutions to help those afflicted.
But if mental illness remains elusive and appears deeply rooted in, intertwined with, or emergent from consciousness itself, and AI struggles to understand the nature of consciousness, then it will have a very difficult time solving these "conscious illnesses". Understanding the nature of a thing is the key to manipulating it.
The wild thing here is that when we make AGI or ASI, it might itself have mental illnesses; it is, after all, a thinking, possibly conscious being. There is the possibility that it ends up suffering from the same things we do.
The bottom line is that actual AI and the human mind/intelligence are both subjects we understand poorly, to the point where predicting how they will interact can feel like pure speculation.
That said, the two fields are deeply similar in nature (both concern consciousness and intelligence), so advancements in one will inevitably yield insight and progress in the other.
blueSGL t1_jecxney wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
>a lot of them look like randoms so far.
...
>Population
>We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.
I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.
Mountainmanmatthew85 t1_jecxl0t wrote
Reply to comment by Red-HawkEye in What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
So stealing this MC’s lingo.
FlyingCockAndBalls t1_jecx3r2 wrote
Reply to comment by KainDulac in Will LLMs accelerate the adoption of English as a primary language? by ReadditOnReddit
fuck I hope. so much untranslated manga I wanna read
Fantastic-Ad4559 t1_jecx0so wrote
As a Chinese person, I can tell you that the mainstream opinion in China on AI safety is indifference: there is not much difference between serving as a tool for capitalists and serving as a flesh-and-blood battery for AI. The more discussed topic is why we can never create something like ChatGPT.
BackgroundResult OP t1_jecwmnf wrote
TikTok is the most likely channel, along with other consumer apps, for media to be used strategically against the U.S. and its vulnerable population. ByteDance currently has 3 apps in the top 10 in app downloads.
TikTok's hypnotic allure creates the perfect conditions in the "user" for thought experiments by the CCP and the PLA. This includes altering sentiment around capitalism, democracy, and the United States itself.
TrainquilOasis1423 t1_jecwlk3 wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
This is the way. You wanna stop corporations from hoarding all the benefits of AI for themselves? Make it impossible to make a profit off it.
agorathird t1_jecwk6a wrote
Reply to comment by blueSGL in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
What does behind mean? If it's not from someone who knows all of the details holistically for how each arm of the company is functioning then they're still working with incomplete information. Letting everyone know your safety protocols is an easy way for them to be exploited.
My criteria for what a 'leading artificial intelligence company' is would be quite strict. If you're some random senior dev at numenta then I don't care. A lot of people who work around ML think themselves a lot more impactful and important than what they actually are. (See: Eliezer Yudkowsky)
Edit: Starting to comb through the participants and a lot of them look like randoms so far.
This is more like taking random engineers (some just professors) who have worked on planes before (maybe) and asking them to judge specifications they're completely in the dark about. It could be the safest plane known to man.
Edit 2: Participant Jongheon Jeong is literally just a PhD student who appears to have a few citations to his name.
[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]
Federal_Two_1189 t1_jecwc3a wrote
Reply to comment by greatdrams23 in Question about school by SnaxFax-was-taken
"How difficult this all is"? All we need to do is get AI to the point where it makes better versions of itself. Researchers say that this is what AGI should be capable of doing.
Realistically, all we need to do is build AGI, which is within our grasp in maybe 10-30 years, and from there everything is automated by AI.
Humans won't be the innovators anymore, we'll be on vacation.
[deleted] t1_jecwai0 wrote
Reply to comment by blueSGL in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
[deleted]
straightedge1974 t1_jecw8c8 wrote
Reply to comment by xott in How does China think about AI safety? by Aggravating_Lake_657
They now have a navy larger than the U.S. and they're not interested in military power? lol
BigMemeKing t1_jecw0np wrote
It's going to eliminate the need to learn any other language, should it all work properly. Your native language will be automatically translated into whatever language is needed.
You would have implants that translate spoken word from one language into your most dominant language, or the language of your preference for that matter. Why would anyone ever need to learn English when machines will do the talking for everyone one day?
Hatemael t1_jecvxq7 wrote
Reply to comment by xott in How does China think about AI safety? by Aggravating_Lake_657
Rather than military? Are you joking? They have a larger navy than the US, are rapidly building up their nuclear stockpile, and are advancing military weapons at a monster clip.
AbeWasHereAgain t1_jecvx64 wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
You mean Musk once again tainted something with his presence?
4444444vr t1_jecvtrt wrote
TikTok is basically what you make it. In the last 12 months I’ve read numerous books related to mental health, gone to a therapist for $130/hr a bunch, and TikTok has honestly been at least as good as either of those. Probably better.
“Digital fentanyl” is such a ridiculous claim.
jason_bman t1_jecvrql wrote
Reply to comment by MassiveWasabi in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
I actually had the same thought given the grammar mistakes. Still an awesome paper!
TheSecretAgenda t1_jecvpwi wrote
Even when the technology is ready, it will take time for businesses to adapt. There will be a wait-and-see trial period while most businesses watch how the technology works out for early adopters. Widespread adoption may come only when AI gives the businesses using it a significant competitive advantage.
WarmSignificance1 t1_jecvmmm wrote
Reply to comment by chrisc82 in Where do you place yourself on the curve? by Many_Consequence_337
There isn't a new breakthrough every day. Not even close.
Frumpagumpus t1_jecynxk wrote
Reply to comment by Yangerousideas in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
he realizes AI can think fast but apparently hasn't thought about how software forks all the time and shuts processes down willy-nilly (he thinks death is silly and stupid, but software "dies" all the time)
or other mundane details, like what it would mean to mentally copy-paste parts of your brain or thoughts, or mutexes, or encryption