Recent comments in /f/singularity

amplex1337 t1_jedvacf wrote

I still find useful code examples faster through Google search than through ChatGPT. Even 4.0 spits out code that doesn't work far too often, and I end up debugging and finding bad API URLs, PowerShell cmdlets that don't exist, information that's outdated or just doesn't work, etc. It's often faster just to RTFM. I hate to be in the 'get off my lawn' camp because it's still exciting technology, and I've considered myself a futurist for >20 years, but I completely agree. We could have an AGI by 2025, but I'm not sure we're as close as people think, and the truth is no one knows how close we really are, or whether we're even on the right path at all yet. It's nice to give people hope, but don't get addicted to hopium.

2

Strange_Soup711 t1_jedv4oq wrote

This was reported perhaps 2-3 years ago: Some people researching new proteins using a biochem AI (not a chatbot) had to repeatedly delete and/or redirect the machine away from creating chemical groups already known to be strong neurotoxins. It wouldn't take evil geniuses to make a machine that preferred such things. Regulate biochem AI? YES/NO.

1

agorathird t1_jeduq0e wrote

So friends, what were some of you saying about Google having something much better hidden away? That Bard is just playing it safe, even though it's been an embarrassment since release?

edit juicy:

>shortly after leaving Google in January, Devlin joined OpenAI. Insider previously reported that Devlin was one of several AI researchers to leave Google at the beginning of the year for competitors.
>
>Devlin, who was at Google for over five years, was the lead author of a 2018 research paper on training machine learning models for search accuracy that helped initiate the AI boom. His research has since become a part of both Google and OpenAI's language models, Insider and The Information reported.
>
>OpenAI has hired dozens of former Alphabet staff over the years. Since the company's chatbot made headlines in November for its ability to do anything from write an essay to provide basic code, Google and OpenAI have been locked in an AI arms race.

7

amplex1337 t1_jedufrg wrote

So AI will come up with a way to extract resources from the environment automatically, transport them to facilities for refining and fabrication, engineer and build the testing equipment, and perform the experiments en masse, all somehow faster than it currently takes? It seems like only a small part of the equation will be sped up, but it will be interesting to see whether anything else changes right away. It will also be interesting to see how useful these LLMs are in uncharted territory. They are great so far with information humans have already learned and developed, but who knows whether stacking transformer layers on an LLM will actually benefit invention and innovation, since you can't train on data that doesn't exist, RLHF is probably not going to help much, etc. Maybe I'm wrong, we will see!

6

Circ-Le-Jerk t1_jedub44 wrote

You're right, CERN is nothing like OpenAI because the private sector has no use for knowing what a Higgs boson is. But they do have patents: https://patents.justia.com/assignee/cern

By law in most countries they are required to license and lease these things out to the private sector. They can't sit on patents to stifle the private sector. So whatever they figure out would be required to go into for-profit hands.

1

WikiSummarizerBot t1_jedu7t6 wrote

Artificial general intelligence

>Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI is also called strong AI, full AI, or general intelligent action, although some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. Strong AI contrasts with weak AI (or narrow AI), which is not intended to have general cognitive abilities; rather, weak AI is any program that is designed to solve exactly one problem.


1

siggimund1 t1_jedu70h wrote

Oh well, I'll let an opinion piece by Evgeny Morozov in The Guardian speak for me (OK, not quite; see *Counterproductive below): The problem with artificial intelligence? It's neither artificial nor intelligent

And a light-hearted roast of the general public vs. "AI" (not to be taken too seriously; critical thinking is definitely required when addressing AGI, but I think we are so far from that stage that it would actually be counterproductive* to press the panic button): Jim the Waco Kid comforting the latest GPT model after its reception by the public

*Counterproductive: The OP is obviously very concerned with US/Western interests, but my concern with AI is that the wrong problem is being addressed. My viewpoint is that we haven't really made that much progress in AI itself yet, only in the apparent display of AI. Right now AI is just a very good BS artist (or a very good echo chamber in many situations, if you like), and that is actually where our current focus of concern should lie: restricting the weird, disinformation-like output that the models, unintentionally I believe but by design*, sometimes propagate.

On the other hand, as the OP addresses, a global stop to or ban on the development of proper AGI will only let nefarious actors, unconcerned with the ethics of the matter, press forward with their research, while ethical scientists will be restricted in theirs, including research into what is very important to me, and I assume to most subscribers of this sub: the possible consequences of AGI.

*By design: We are only human, and that is one of the things that GPT et al., sadly or luckily, essentially rely on. It's called RLHF. It should, in theory, be able to stop the most fringe ideas, but it does not always work, it seems?!!!

1

delphisucks t1_jedtsmr wrote

Well, I think AI can teach itself how to use a body in VR, like millions of years of training compressed into days. Then we mass-produce robots to do everything for us, including research. The only thing really needed is a basic and accurate physics simulation in VR to teach robot AI.
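As a rough back-of-the-envelope sketch of that "millions of years compressed into days" idea (all numbers hypothetical, just to show the arithmetic of parallel, faster-than-real-time simulation):

```python
# Hypothetical illustration: simulated experience accumulated by running many
# physics environments in parallel, each stepping faster than real time.

SECONDS_PER_YEAR = 365 * 24 * 3600

def simulated_years_per_day(num_envs: int, realtime_factor: float) -> float:
    """Simulated years of robot experience gathered per wall-clock day."""
    sim_seconds_per_day = num_envs * realtime_factor * 24 * 3600
    return sim_seconds_per_day / SECONDS_PER_YEAR

# e.g. 10,000 parallel environments, each running 1,000x faster than real time
print(simulated_years_per_day(10_000, 1_000))  # ~27,400 simulated years per day
```

At that (made-up) rate, a million years of simulated experience takes on the order of a month of wall-clock time; the hard part is whether the physics simulation is accurate enough for the learned behaviour to transfer.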

9

expelten t1_jedtoxb wrote

It is crucial that this power is equally distributed. There is nobody I could trust to keep the power of AGI to themselves. Anyway, I'm 100% sure AGI would eventually get leaked, but it would be much safer to adapt the world progressively with open-source models than to suddenly drop the leviathan.

1

Zermelane t1_jedt8ps wrote

Yep. Matt Levine already coined the Elon Markets Hypothesis, but the Elon Media Hypothesis is even more powerful: Media stories are interesting not based on their significance or urgency, but based on their proximity to Elon Musk.

Even OpenAI still regularly gets called Musk's AI company, despite him having had no involvement for half a decade. Not because anyone's intentionally trying to spread a narrative that it's still his company, but either because they are just trying to get clicks, or because they genuinely believe it themselves since those clickbait stories are the only ones they've seen.

2

Kaining t1_jedt5v0 wrote

We're getting good at simulating only the part we need, though. Look up what Dassault Systèmes is capable of doing for medical practitioners needing trial runs. And that's just now.

I guess simulation will only go so far, and even AGI will need real-world testing for anything quantum-related at the moment, but that's the problem with progress: there's no way to know if what you think is the endgame of possibility really is.

16