Recent comments in /f/singularity
pleasetrimyourpubes t1_jecpgjd wrote
Reply to comment by Cr4zko in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Many "Twitter famous AI people"* have turned on him over the TIMES article / Lex interview, when just a few days ago they were bowing at his feet. Yud is for sure gonna expand his blocklist, since he is quite famously thin-skinned.
Lex's Tweet about weak men gaining a little power was almost certainly about Yud. Because Yud wanted to leave the youth with the wisdom that "AI is going to kill you before you grow up."
The TIMES article was completely asinine.
*who may or may not know shit about AI.
SWATSgradyBABY t1_jecpb0s wrote
Funny question.
maskedpaki t1_jecp89e wrote
Reply to comment by czk_21 in AI Policy Group CAIDP Asks FTC To Stop OpenAI From Launching New GPT Models by TachibanaRE
Won't change much even if this goes through. Open AI has plenty of markets outside the USA willing to pay good money for its technology. Regulation can't stop market forces that are this powerful.
[deleted] t1_jecp7fs wrote
Reply to comment by Mortal-Region in GPT characters in games by YearZero
[deleted]
thecoffeejesus t1_jecp4hn wrote
It already has - mine. Technical documentation is no longer required at my former workplace
journalingfilesystem t1_jecoua6 wrote
Here are my two cents. In a sense it has already begun. We have seen massive tech layoffs recently. There are other factors to account for that, but the idea that fewer coders with AI tools can be more effective than more coders without them is probably a factor that companies are taking into account.
From my own experience, institutions have a good deal of inertia. It takes time and resources to change the way that a company does things. People stick with what they are used to as long as it works, even in the presence of newer more efficient options. If a new option is a big enough improvement people will switch, but it won’t be an instant process.
Add to this all the technologies that have been hyped up as the next great thing, only to go nowhere or prove much less revolutionary than promised. The hype cycle is something that experienced decision makers have learned to largely ignore.
Basically I think that what is going to happen is that the AI tech will continue to advance at a rapid pace. Then a few nimble and forward thinking companies will start using it in a major way. It will then take a few quarters of financial reports for most other companies to realize that this is the real deal. Only then will we start seeing really dramatic changes to the job market.
acutelychronicpanic t1_jecoprq wrote
Reply to comment by Smallpaul in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
My mental model is based on this:
Approximate alignment will be much easier than perfect alignment. I think it's achievable to have AI with superhuman insight that is well enough aligned that it would take deliberate prodding or jailbreaking to get it to model malicious action. I would argue that in many domains, GPT-4 already fits this description.
Regarding roughly equivalent models, I think that there is an exponential increase in intelligence required to take action in the world as you attempt to do more complicated things or act further into the future. My intuition is based on the complexity of predicting the future in chaotic systems. Society is one such system. I don't think 10x intelligence will necessarily lead to 10x increase in competence. I strongly suspect we underestimate the complexity of the world. This may buy us a lot of time by decreasing the peaks in the global intelligence landscape to the extent that humans utilizing narrow AI and proto-AGI may have a good chance.
I do know that regardless of whether the AI alignment issue can be solved, the largest institutions currently working on AI are not, as institutions, well aligned with humanity. The ones that would continue working despite a global effort to slow AI especially cannot be trusted.
I'm willing to read any resources you want to point me to, or any arguments you want to make. I'd rather be corrected if possible.
Geeksylvania t1_jecoodn wrote
Reply to comment by smokingPimphat in What are the so-called 'jobs' that AI will create? by thecatneverlies
When the choice is between spending $500 million to produce a movie with real people and $0 to produce an identical movie with AI, it won't be a hard decision.
AlFrankensrevenge t1_jecojf6 wrote
Reply to comment by Focused-Joe in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
same person who pays you, probably.
liramor t1_jecocoy wrote
Reply to comment by lawandordercandidate in When will AI actually start taking jobs? by Weeb_Geek_7779
Honestly, the things I would go outside ChatGPT for are the more obscure or non-mainstream stuff that GPT is so heavily "aligned" that it will never mention. I would definitely value a search engine that only provides non-AI-generated content, for that reason alone.
AlFrankensrevenge t1_jeco9ec wrote
Reply to comment by seancho in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Read the OpenAI paper on how it will change 80% of jobs. The real power is in the APIs and plugins to other apps. The sky is the limit.
aykantpawzitmum t1_jeco8o6 wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Tech Bros: "Finally it's time to democratize AI!"
Also Tech Bros: "Lol I'm not hiring any people, I have AI robots to do my work"
Parodoticus t1_jeco2p9 wrote
Reply to comment by greatdrams23 in Question about school by SnaxFax-was-taken
If by educated you mean lobotomized and successfully conformed rote-memorization robots, sure, people are more educated than ever.
AlFrankensrevenge t1_jecnugx wrote
Reply to comment by scooby1st in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
But then where does it end? With a superintelligence in 5 years, when we have no clear way of preventing it from going rogue?
AlFrankensrevenge t1_jecnl6n wrote
Reply to comment by Smellz_Of_Elderberry in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Unbiased and incorruptible? Have you learned nothing from ChatGPT's political reprogramming?
lawandordercandidate t1_jecnk10 wrote
Reply to comment by liramor in When will AI actually start taking jobs? by Weeb_Geek_7779
Maybe something like "A leader in this field verifies this answer" next to the output?
czk_21 t1_jecng1u wrote
Reply to comment by SupportstheOP in What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
> we could have a human-level intelligence within the near future
many models already score as well as or better than the average human on intelligence tests, so I guess you mean AGI
AmputatorBot t1_jecn5e8 wrote
It looks like OP posted an AMP link. These should load faster, but AMP is controversial because of concerns over privacy and the Open Web.
Maybe check out the canonical page instead: https://www.businessinsider.com/ai-researcher-quit-google-openai-bard-training-on-chatgpt-report-2023-3
^(I'm a bot | )^(Why & About)^( | )^(Summon: u/AmputatorBot)
agonypants t1_jecn2g1 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Yudkowsky literally suggests that it would be better to have a full-scale nuclear war than to allow AI development to continue. He's a dangerous, unhinged fucking lunatic and Time Magazine should be excoriated for even publishing his crap. EY, if you're reading this - trim your goddamn eyebrows and go back to writing Harry Potter fan-fic or tickling Peter Thiel's nether regions.
czk_21 t1_jecmzvd wrote
Reply to comment by chlebseby in What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
I would say that computers and the internet have changed the life of everyone in the developed world, unless you live in a cave. The difference is that this is bigger, and the change will be faster as well.
AlFrankensrevenge t1_jecmsuz wrote
Reply to comment by Mortal-Region in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Image generators and self-driving cars don't create the same kinds of extensive risk that GPT4 does. GPT4 is much more directly on the path to AGI and superintelligence. Even now, it will substantially impact something like 80% of jobs according to OpenAI itself. The other technologies are a big deal, but don't ramify through the entire economy to the same extent.
smokingPimphat t1_jecmp4j wrote
Reply to comment by Geeksylvania in What are the so-called 'jobs' that AI will create? by thecatneverlies
I don't think this will ever really happen in any large-scale way, for a few reasons:
People don't actually know what they want in most cases. This is especially true with regards to creativity.
Very few people want to think about what they want to see; they are happy to choose from available options. I will admit that AI-generated content will be part of those options at some point in the future, but that is a long way off, and people are not going to just stop making all the things they do now. It is far more likely that humans will create most things, and AI will then optionally be used to customize them in various ways, should someone actually desire it.
To take your example of a movie-generation AI, IMO it's far more probable that Disney will make a movie and you will optionally be able to ask an AI to do things like extend a plot point or make the fights longer. But even that is not really something most people are going to be willing to do. They just want to see someone else's story; they aren't going to write their own.
There are so many things that people "could" do themselves yet choose to pay others to do. If machines can be leveraged to offer more options, that is probably a good thing, but to think that the entire entertainment industry will be replaced is IMO silly. There will always be humans in the mix, as the machine will never truly know what we want when we barely know ourselves.
Readityesterday2 t1_jecmjn6 wrote
I’ll share my reason why it hasn’t happened.
Jobs are not just about skills. They are also about responsibilities. And you can't fire AI. Or hold it accountable. Unless you wanna hear "I'm just a model, dumbass, what did ya think?" 😂
Seriously. There’s more to a hire than churning butter.
ArcticWinterZzZ t1_jecmcf3 wrote
Quite the opposite; GPT-4 is excellent at a wide variety of languages, and as a context-aware translation tool (that can even take in images!) it has the potential to be far better at translating webpages and conversations than even the best currently existing translation software.
johanknl t1_jecpodm wrote
Reply to comment by BigZaddyZ3 in Do people really expect to have decent lifestyle with UBI? by raylolSW
Of course. I never claimed otherwise. Being well produced or having negative effects on a community can absolutely be argued in objective ways.
The fact that, for example, "good colour balance in a painting is important" is subjective does not mean that we cannot then go on to say that someone objectively produced a better result under that subjective ruleset.
That would focus in on details like that though. Saying that something is "beautiful" is not in any way objective.