Recent comments in /f/singularity
amplex1337 t1_jedvacf wrote
Reply to comment by Professional_Copy587 in Goddamn it's really happening by BreadManToast
I still find useful code examples faster through a Google search than with ChatGPT. Even GPT-4 spits out code that doesn't work far too often, and I end up debugging it: bad API URLs, PowerShell cmdlets that don't exist, information that's outdated or just wrong. It's often faster just to RTFM. I hate to be in the 'get off my lawn' camp, because it's still exciting technology and I've considered myself a futurist for >20 years, but I completely agree. We could have an AGI by 2025, but I'm not sure we're as close as people think, and the truth is no one knows how close we really are, or whether we're even on the right path at all yet. It's nice to give people hope, but don't get addicted to hopium.
Ago0330 t1_jedvaau wrote
It will not be easy to create, but it's doable if the right parameters are examined. Most of these AI algorithms look at trillions of parameters when only a handful are truly needed.
Andriyo t1_jedv8ns wrote
Reply to Can you please stop answering technical/meta questions with „ask chatgpt“ or [chatgpt answer]? This is exhausting as f, and makes me worried about a dystopian future where people never use their own mind anymore but ask an AI basically everything, as if using a calculator for 5*4 or so. by BeginningInfluence55
It's only fair for ChatGPT to have a voice in this sub :)
Strange_Soup711 t1_jedv4oq wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
This was reported perhaps 2-3 years ago: Some people researching new proteins using a biochem AI (not a chatbot) had to repeatedly delete and/or redirect the machine away from creating chemical groups already known to be strong neurotoxins. It wouldn't take evil geniuses to make a machine that preferred such things. Regulate biochem AI? YES/NO.
[deleted] t1_jedv4l0 wrote
Reply to comment by Desperate_Excuse1709 in We have a pathway to AGI. I don't think we have one to ASI by karearearea
[removed]
agorathird t1_jeduq0e wrote
So, friends, what was it some of you were saying about Google having something much better hidden away, and Bard just playing it safe, even though it's been an embarrassment since release?
Edit, the juicy bit:
>shortly after leaving Google in January, Devlin joined OpenAI. Insider previously reported that Devlin was one of several AI researchers to leave Google at the beginning of the year for competitors.
>
>Devlin, who was at Google for over five years, was the lead author of a 2018 research paper on training machine learning models for search accuracy that helped initiate the AI boom. His research has since become a part of both Google and OpenAI's language models, Insider and The Information reported.
>
>OpenAI has hired dozens of former Alphabet staff over the years. Since the company's chatbot made headlines in November for its ability to do anything from write an essay to provide basic code, Google and OpenAI have been locked in an AI arms race.
Plus-Recording-8370 t1_jedum5j wrote
Reply to comment by visarga in Goddamn it's really happening by BreadManToast
Point taken, but experimental validation might look very different for AI than you'd think. For instance, instead of needing to run 100,000 generic tests, it might only need 100 extremely detailed ones.
BigZaddyZ3 t1_jedui8p wrote
Reply to comment by ItIsIThePope in Do we even need AGI? by cloudrunner69
No, it’ll be Modok except with this guy’s face.
Neurogence OP t1_jeduguf wrote
Reply to comment by RobXSIQ in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Fully agreed.
It's very disheartening that this paper was signed by popular individuals like Musk and Wozniak. But hopefully it won't be taken seriously.
amplex1337 t1_jedufrg wrote
Reply to comment by hold_my_fish in Goddamn it's really happening by BreadManToast
So AI will come up with a way to extract resources from the environment automatically, transport them to facilities for refining and fabrication, and engineer and build the testing equipment, then perform the experiments en masse, all faster than it currently takes? It seems like a small part of the equation will be sped up, but it will be interesting to see whether anything else changes right away. It will also be interesting to see how useful these LLMs are in uncharted territory. They are great so far with information humans have already learned and developed, but who knows whether stacking transformer layers on an LLM will actually benefit invention and innovation. Since you can't train on data that doesn't exist, RLHF probably won't help much, etc. Maybe I'm wrong; we will see!
Circ-Le-Jerk t1_jedub44 wrote
Reply to comment by ninjasaid13 in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
You’re right that CERN is nothing like OpenAI, because the private sector has no use for knowing what a Higgs boson is. But they do have patents: https://patents.justia.com/assignee/cern
By law in most countries, they are required to license and lease these things out to the private sector. They can’t sit on patents to stifle the private sector, so whatever they figure out would be required to go into for-profit hands.
WikiSummarizerBot t1_jedu7t6 wrote
Reply to comment by siggimund1 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Artificial general intelligence
>Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI is also called strong AI, full AI, or general intelligent action, although some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. Strong AI contrasts with weak AI (or narrow AI), which is not intended to have general cognitive abilities; rather, weak AI is any program that is designed to solve exactly one problem.
siggimund1 t1_jedu70h wrote
Oh well, I'll let an opinion piece by Evgeny Morozov in The Guardian speak for me (OK, not quite; see *Counterproductive below): The problem with artificial intelligence? It’s neither artificial nor intelligent
And a light-hearted roast of the general public vs. "AI" (not to be taken too seriously; critical thinking is definitely required when addressing AGI, but I think we are sooo far from that stage that it would actually be counterproductive* to press the panic button): Jim the Waco Kid comforting latest GPT model after its reception by the public
*Counterproductive: The OP is obviously very concerned with US/Western interests, but my concern with AI is that the wrong problem is being addressed. My view is that we haven't really made that much progress in AI yet, only in the apparent display of AI. Right now AI is just a very good BS artist (or, if you like, a very good echo chamber in many situations), and that is actually where our current focus of concern should lie: restricting the weird, disinformation-like output that the models sometimes propagate, unintentionally I believe, but by design*.
On the other hand, as the OP addresses, a global stop to or ban on the development of proper AGI will only let nefarious actors, unconcerned with the ethics of the matter, press forward with their research, while ethical scientists will be restricted in theirs, including research into what is very important to me, and I assume to most subscribers of this sub: the possible consequences of AGI.
*By design: We are only human, and that is one of the things that GPT et al., sadly or luckily, essentially rely on; it's called RLHF. In theory it should be able to stop the most fringe ideas, but it does not always work, it seems?!
FlimsyVariety t1_jedu5po wrote
Reply to comment by alphabet_order_bot in How does China think about AI safety? by Aggravating_Lake_657
Bro, cool story.
[deleted] t1_jedtzs7 wrote
Reply to Can you please stop answering technical/meta questions with „ask chatgpt“ or [chatgpt answer]? This is exhausting as f, and makes me worried about a dystopian future where people never use their own mind anymore but ask an AI basically everything, as if using a calculator for 5*4 or so. by BeginningInfluence55
[deleted]
RemindMeBot t1_jedtzp3 wrote
Reply to comment by Ok_Faithlessness4197 in Goddamn it's really happening by BreadManToast
I will be messaging you in 2 years on 2025-03-31 08:29:58 UTC to remind you of this link
6 OTHERS CLICKED THIS LINK to send a PM to also be reminded and to reduce spam.
Ok_Faithlessness4197 t1_jedty5n wrote
Reply to comment by mihaicl1981 in Goddamn it's really happening by BreadManToast
!Remindme 2 years
BigZaddyZ3 t1_jedtx7m wrote
Reply to Do we even need AGI? by cloudrunner69
And this is why we leave the science to the professionals, ladies and gentlemen…
delphisucks t1_jedtsmr wrote
Reply to comment by visarga in Goddamn it's really happening by BreadManToast
Well, I think AI can teach itself how to use a body in VR, like millions of years of training compressed into days. Then we mass-produce robots to do everything for us, including research. The only thing really needed is a basic and accurate physics simulation in VR to teach the robot AI.
expelten t1_jedtoxb wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
It is crucial that this power is equally distributed. There is nobody I could trust to keep the power of AGI to themselves. Anyway, I'm 100% sure AGI would eventually get leaked, but it would be much safer to adapt the world progressively with open-source models than to suddenly drop the leviathan.
ItIsIThePope t1_jedtc3m wrote
"Whomever gets ASI first wins"
Well, ideally, as soon as it comes out, everybody wins, not just a bunch of dudes with big bucks or some snazzy politician. ASI is likely smart enough not to be a slave to the bidding of a few and to instead look to serve the rest of humanity.
Zermelane t1_jedt8ps wrote
Reply to comment by Longjumping_Feed3270 in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Yep. Matt Levine already coined the Elon Markets Hypothesis, but the Elon Media Hypothesis is even more powerful: Media stories are interesting not based on their significance or urgency, but based on their proximity to Elon Musk.
Even OpenAI still regularly gets called Musk's AI company, despite him having had no involvement for half a decade. Not because anyone's intentionally trying to spread a narrative that it's still his company, but either because they are just trying to get clicks, or because they genuinely believe it themselves since those clickbait stories are the only ones they've seen.
Kaining t1_jedt5v0 wrote
Reply to comment by SgathTriallair in Goddamn it's really happening by BreadManToast
We're getting good at simulating only the parts we need, though. Look up what Dassault Systèmes is capable of doing for medical practitioners who need trial runs, and that's just today.
I guess simulation will only go so far, and even an AGI will need real-world testing for everything quantum-related at the moment, but that's the problem with progress: there's no way to know whether what you think is the endgame of possibility really is.
[deleted] t1_jedt0q2 wrote
Reply to comment by greatdrams23 in When will AI actually start taking jobs? by Weeb_Geek_7779
[deleted]
[deleted] t1_jedvdty wrote
Reply to comment by magosaurus in When will AI actually start taking jobs? by Weeb_Geek_7779
[deleted]