Recent comments in /f/singularity
CypherLH t1_jedl7gb wrote
Reply to comment by Emory_C in When will AI actually start taking jobs? by Weeb_Geek_7779
Actually, there is. Low-end blue-collar jobs are hard to fill at the wages companies want to pay, which is why they tend to hire immigrants (both legal and illegal).
CypherLH t1_jedkz91 wrote
Reply to comment by Iffykindofguy in When will AI actually start taking jobs? by Weeb_Geek_7779
I assume the ongoing layoff wave in tech is probably AI-related. Perhaps not explicitly, but most of the people making those hire/fire calls at tech companies are well aware of AI developments and have probably at least played around with ChatGPT, etc. A lot of those jobs won't be coming back.
ptxtra t1_jedkpji wrote
Not just tiktok. All of social media with AI based recommendation algorithms.
[deleted] t1_jedkn2r wrote
Reply to Superior beings. by aksh951357
[deleted]
420BigDawg_ t1_jedkm4y wrote
I can see us noticing it in our monthly census, probably September 2023. We'll see tech-bro office jobs fall off a cliff. Writer strikes by October 2023. Too many writers in the media, from journalists to sitcom writers, feel they are at risk of losing their jobs.
This will give Elon an undeserved "I told ya so!" moment, and he'll probably joke about running for president (🚨)
alphabet_order_bot t1_jedkkgc wrote
Reply to comment by Stinky_the_Grump23 in How does China think about AI safety? by Aggravating_Lake_657
Would you look at that, all of the words in your comment are in alphabetical order.
I have checked 1,428,916,217 comments, and only 272,584 of them were in alphabetical order.
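The bot's check is simple in spirit: lowercase the words and compare against their sorted order. A minimal sketch in Python (the bot's real implementation isn't public, so the tokenization rule here is an assumption):

```python
import re

def is_alphabetical(comment: str) -> bool:
    """True if every word in the comment appears in alphabetical order."""
    # Tokenize to lowercase words, keeping apostrophes (e.g. "it's")
    words = re.findall(r"[a-z']+", comment.lower())
    return words == sorted(words)

# The comment the bot replied to really is in alphabetical order:
print(is_alphabetical("It's safer than the West"))  # True
```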
Stinky_the_Grump23 t1_jedkjw2 wrote
Reply to comment by FrogFister in How does China think about AI safety? by Aggravating_Lake_657
It's safer than the West
systranerror t1_jedkgw2 wrote
Reply to comment by Zer0D0wn83 in Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
Yeah I definitely believe that he's finished it, but the traditional publishing industry can no longer keep up with the speed this stuff is moving at. He might have to change large sections of the book or start predicting further out somehow. To be honest, I find his "computronium" idea the weakest of his predictions, so I hope he doesn't go that route!
Stinky_the_Grump23 t1_jedkgkr wrote
Reply to comment by Yourbubblestink in How does China think about AI safety? by Aggravating_Lake_657
Do you think this is different to the USA? ;)
sideways t1_jedkfko wrote
Reply to comment by visarga in Goddamn it's really happening by BreadManToast
You don't really know what level GPT-5 is going to be.
Regardless, you're right - we're not going to leapfrog right over the scientific method with AI. Experimentation and verification will be necessary.
But ask yourself how much things would accelerate if there was an essentially limitless army of postdocs capable of working tirelessly and drawing from a superhuman breadth of interdisciplinary research...
Kafke t1_jedkbke wrote
Reply to comment by scooby1st in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Some children's TV shows or media programs stating incorrect information does not make it correct. Additive primaries are RGB; subtractive primaries are CMY. The idea that RBY are the primary colors is a popular misconception, but it is incorrect. It has its roots in art classes that predate the proper scientific investigation of color, light, and modern technology. If your goal is art history, then yes, people in the past incorrectly believed that the primary colors (both additive and subtractive) were RBY. They were wrong, just as people who believed the earth was flat were wrong.
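The additive/subtractive relationship above can be stated concretely: with channels normalized to [0, 1], each subtractive (CMY) value is the complement of an additive (RGB) one. A minimal sketch (the function name is mine, not from any particular color library):

```python
def rgb_to_cmy(r: float, g: float, b: float) -> tuple:
    """Convert additive RGB to subtractive CMY.

    Each ink channel absorbs what the corresponding light channel emits,
    so CMY is simply the complement of RGB on a [0, 1] scale.
    """
    return (1 - r, 1 - g, 1 - b)

# Pure red light needs zero cyan ink but full magenta and yellow:
print(rgb_to_cmy(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)
```

This is why red comes out of mixing magenta and yellow pigments: together they absorb green and blue, leaving only red to reflect.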
ptxtra t1_jedk9yh wrote
Reply to comment by [deleted] in The next step of generative AI by nacrosian
That doesn't help with reasoning. It only connects multiple AIs with code. If the AI gives an unreasonable answer to a prompt and forgets the context, you can't fix that by chaining it to other AIs.
Stinky_the_Grump23 t1_jedjzqe wrote
Reply to comment by sideways in Goddamn it's really happening by BreadManToast
I have very young kids and I'm already wondering what our discussions will be around when reminiscing about the days "before AI", like I used to ask my dad who grew up without cars or electricity in a village of illiterate farmers. The crazy thing is, we have no real idea where AI is taking us, good or bad. I don't think our future has ever looked so uncharted as it does right now.
Louheatar t1_jedjx7p wrote
Reply to comment by mutantbeings in When will AI actually start taking jobs? by Weeb_Geek_7779
I can also share some insight from a tech startup I worked at until a week ago. No layoffs yet. There was a lot of fear in the air, but in general everyone, including the leadership, the developers, and the artists, lived in a state of serious denial. They just kept laughing at whatever AI tools were popping up, sharing whatever buggy and weird GIFs they could find (instead of the actually great material the tools can produce), trying to maintain their old worldview, I guess.
While generative AI poses a fundamental business risk to the company and to most of the jobs there, there were absolutely no deep conversations about it. Nobody was taking it seriously, because they didn't want to. Instead of accepting and adapting, they were doubling down on their obviously wrong path and burning everyone out while doing it.
visarga t1_jedjvrn wrote
Reply to comment by sideways in Goddamn it's really happening by BreadManToast
> The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.
At CERN in Geneva they have 17,500 PhDs working on physics research, each of them at GPT-5 level or higher, and yet it takes years and huge investments to get one discovery out. Science requires testing in the real world, and that is slow and expensive. Even an AGI would need to use the same scientific method as people; it can't theorize without experimental validation. Putting the world in your experimental loop slows down progress.
I keep reminding people about this because we see lots of magical thinking along the lines of "AGI to ASI in one day" that ignores the experimental validation steps necessary for that transition. Not even OpenAI researchers can guess what will happen before they start training; scaling laws are our best attempt, but they are very vague. They can't tell us what content is more useful, or how to improve a specific task. Experimental validation is needed at all levels of science.
Another good example of what I said: the COVID vaccine was designed in one week but took six months to validate. With all the doctors focusing on this one single question, it still took half a year while people were dying left and right. We can't predict complex systems in general; we really need experimental validation in the loop.
Zer0D0wn83 t1_jedjs50 wrote
Reply to comment by systranerror in Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
I agree with this, which is a real shame. I'll read it anyway; I've been waiting for it for more than 15 years.
He HAS definitely finished it though - Neil deGrasse Tyson has read it (Kurzweil was on his podcast around Xmas, and they talked about it).
Exel0n t1_jedjq0o wrote
Reply to comment by Chatbotfriends in When will AI actually start taking jobs? by Weeb_Geek_7779
Doctor and lawyer school in the US is just brute, rote memorization. No real skill is required other than memorizing.
That's why AI is so good at it.
Med and law school in the US are de facto monopolies that put any big tech company to utter shame.
Longjumping_Feed3270 t1_jedj04k wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
If the media quotes a name, they will only quote one that they can safely assume their audience recognizes.
Elon Musk is such a name.
All the other ones? I have never heard of them, and I'm an AI-curious software engineer, though not deeply entrenched in the AI community.
0% chance that the average reader knows any of them.
FoniksMunkee t1_jediwl9 wrote
Reply to comment by [deleted] in GPT characters in games by YearZero
Yes, API costs are a good point I hadn't thought of.
The other issue is that games move slowly in some respects. There are games that started development before ChatGPT was commonly known that won't be finished until after GPT-5 is out, and there is no way these tools will be integrated during that process.
They would also have to convince Sony, MS, and Nintendo to put their SDKs in a model. I don't think MS will necessarily have a problem with that, but there's a ton of third-party libraries that would need to come on board, not to mention how you deal with existing legacy code. There are still more companies in the AAA industry with custom engines than there are using commercial engines like UE4/UE5.
Then comes the R&D, then comes the new game... so what, five years before we see widespread adoption in the AAA market?
Andriyo t1_jedirnp wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
It is certainly fundamental to our understanding of the world, but if we all forgot tomorrow that 1 + 1 = 2, and all of math altogether, the world wouldn't stop existing :)
akat_walks t1_jediqoe wrote
Reply to What were the reactions of your friends when you showed them GPT-4 (The ones who were stuck from 2019, and had no idea about this technological leap been developed) Share your stories below ! by Red-HawkEye
I tried explaining it in a work meeting a few days ago. Talked about how it will change how our department will work forever moving forward. Blank stares. They had no idea what I was talking about. I think some of them thought I had made it up. I’ll be giving a presentation on it in a couple of weeks. Wish me luck!
tkeRe1337 t1_jedikei wrote
Reply to comment by DetachedOptimist in Goddamn it's really happening by BreadManToast
Man, I've been trying to explain for 10 years why we need to vote for the Pirate Party here and change copyright laws. Maybe people will realize I'm not a tool sooner rather than later...
musicofspheres1 t1_jedid47 wrote
Reply to Goddamn it's really happening by BreadManToast
More advancements in the next decade than in the previous 100 years.
babreddits t1_jedia3g wrote
Reply to comment by DetachedOptimist in Goddamn it's really happening by BreadManToast
I know I had many sleepless nights thinking about this exact thing. We’re fucked.
visarga t1_jedl81q wrote
Reply to comment by mihaicl1981 in Goddamn it's really happening by BreadManToast
I think the social component of AI is picking up steam. What I mean is the culture around AI: how to train, fine-tune, test, and integrate AIs into applications, how to mix and match AI modules. This used to be the domain of experts; now everyone is assimilating this culture, and we are seeing an explosion of creativity.
The rapid rate of AI advancement is overlapping with the rapid rate of social adoption of AI and making it seem to advance even faster.
Edit, 12 hours later: the paper "HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face" just came out. AI is orchestrating AI by itself. What can you say?