Recent comments in /f/singularity
spamzauberer t1_jee42rm wrote
Reply to comment by visarga in Goddamn it's really happening by BreadManToast
mRNA research took way longer than 6 months.
jlowe212 t1_jee403u wrote
Reply to comment by visarga in Goddamn it's really happening by BreadManToast
CERN produces an unfathomable amount of data that algorithms have to sift through. If it's possible that an AI can find patterns in these enormous data sets that current algorithms can't, it could well lead to some relatively quick discoveries.
The problem is, it might not be physically possible or feasible to probe depths much farther than we've already probed. AGI can't do anything with data that we may never be able to even obtain.
Low-Restaurant3504 t1_jee3v6h wrote
Reply to Superior beings. by aksh951357
>you all
Interesting phrasing.
DarkCeldori t1_jee3q8l wrote
Reply to comment by jlowe212 in We have a pathway to AGI. I don't think we have one to ASI by karearearea
It's not only that it's conceivable future GPTs will have knowledge of all written text and skills from every domain. Imagine a model that knows all programming languages and all human languages, and has read everything that's ever been written. Imagine it can control robots and perform any job from lawyer to plumber. Imagine it can get perfect scores on IQ tests. That is superhuman. No human can attain beyond-human performance in every profession and language, or ace the qualifying tests for all of them.
[deleted] t1_jee3p36 wrote
Reply to comment by Kaining in Goddamn it's really happening by BreadManToast
[removed]
SmileEverySecond t1_jee3o3m wrote
*Tell a long ass story.
Dog: “yes master”.
FC4945 OP t1_jee3m13 wrote
Reply to comment by SuperSpaceEye in Creating a Private Persona. Is it Possible Now? by FC4945
I don't think we're that far off, and once we have nanobots that can go inside the brain, I can see how we'll be able to slowly upload our consciousness to the cloud. But in terms of recreating people who have passed, it would seem to be just a matter of having enough data to be convincing to us.
Relevant_Ad7319 t1_jee3l13 wrote
Reply to comment by SkyeandJett in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
But not everything has an API. I think we need GPT to simulate mouse and keyboard inputs like a human in order to automate everything a human can do on a computer.
EDIT: No idea why I get downvoted for this 🤷♂️ This sub is strange
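A minimal sketch of what "simulating mouse and keyboard inputs" could look like: the model emits structured action lines and a thin driver executes them. The backend here is a stub that just records events (the action format and all names are illustrative assumptions); a real driver could forward the same events to an input-automation library instead.

```python
# Sketch: let a language model drive a GUI by emitting structured actions.
# The backend is a stub that records events; a real implementation would
# forward them to an OS-level input-automation library.

from dataclasses import dataclass, field

@dataclass
class InputDriver:
    events: list = field(default_factory=list)

    def click(self, x: int, y: int) -> None:
        self.events.append(("click", x, y))

    def type_text(self, text: str) -> None:
        self.events.append(("type", text))

def execute(driver: InputDriver, action: str) -> None:
    """Parse one model-emitted action line, e.g. 'CLICK 120 340' or 'TYPE hello'."""
    verb, _, rest = action.partition(" ")
    if verb == "CLICK":
        x, y = map(int, rest.split())
        driver.click(x, y)
    elif verb == "TYPE":
        driver.type_text(rest)
    else:
        raise ValueError(f"unknown action: {verb}")

driver = InputDriver()
for line in ["CLICK 120 340", "TYPE hello world"]:
    execute(driver, line)
print(driver.events)  # [('click', 120, 340), ('type', 'hello world')]
```

The point of the structured action format is that the model never touches the screen directly; anything it proposes passes through a parser that can validate or refuse it.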
TallOutside6418 t1_jee3g7n wrote
Reply to comment by lawandordercandidate in When will AI actually start taking jobs? by Weeb_Geek_7779
So AI is going to do for mankind all the things that human beings cannot do for themselves (cure cancer, give us immortality, create limitless energy, etc.), but you think people will have a preference to listen to the inferior advice of "a leader in this field"?
FC4945 OP t1_jee3cv7 wrote
Reply to comment by clearlylacking in Creating a Private Persona. Is it Possible Now? by FC4945
That's an interesting idea. I need to record my mother more often, and do more question-and-answer sessions with her too.
Relevant_Ad7319 t1_jee3ah6 wrote
Reply to comment by basilgello in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
Task demonstrations in the form of screen recordings? It says their approach only needs a few examples, but ChatGPT doesn't even accept videos as input, right?
TallOutside6418 t1_jee3a3g wrote
Reply to comment by lawandordercandidate in When will AI actually start taking jobs? by Weeb_Geek_7779
How will labeling AI answers hurt ChatGPT? If anything, ChatGPT and other AIs will provide superior answers, so people will prefer the better answers of an AI.
Beneficial_Fall2518 t1_jee37mg wrote
AGI will design and program ASI. True AGI is the last invention humans will ever create.
jlowe212 t1_jee348w wrote
Dogs can clearly understand some human speech; the evidence is overwhelming. Some dogs even try to talk back. Dogs are also clearly capable of experiencing a wide range of emotions and feelings, just like a human. A dog is separated from a human by only a small gap in intelligence; a smarter dog is essentially a human on four legs. A dog is closer to a human than AGI will likely ever be, and deserves more rights as well, if for no other reason than that it can clearly experience suffering.
InsufficientChimp t1_jee33oe wrote
Reply to What if it's just chat bot infatuation and were overhyping what is just a super big chat bot? by Arowx
AI is doing much more than just chat bot stuff.
SkyeandJett t1_jee2yc5 wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
I don't want to say that's trivial, but it is easily solved. However, that's more or less irrelevant. GUIs are for humans; GPT accesses things directly through a CLI or API. This paper more or less confirms what everyone else has been saying and experimenting with. GPT-4 might not be AGI on its own, but enhanced with memory, chain-of-thought, task generation and prioritization, self-checking and correction, etc., it probably is. Now give it access to tools, things like TaskMatrix coming soon, and frankly it becomes an extremely powerful autonomous agent. You tell it what you need and it just... does it. This is all going to come together very quickly. Then drop an immensely more powerful core into the system, i.e. GPT-5, and things start getting stupid.
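The agent pattern described in that comment can be sketched in a few lines: a task queue, a generation step, and a self-check/correction loop, with results accumulating in memory. The "model" calls here are stand-in functions (all names are illustrative assumptions); a real agent would back them with an LLM API.

```python
# Sketch of an autonomous agent loop: task queue + generation +
# self-checking/correction + memory. Model calls are stubbed.

from collections import deque

def generate(task: str) -> str:
    return f"draft answer for: {task}"   # stand-in for an LLM call

def critique(answer: str) -> bool:
    return "draft" not in answer          # stand-in self-check: True = accept

def improve(answer: str) -> str:
    return answer.replace("draft ", "")   # stand-in correction step

def run_agent(goal: str, max_rounds: int = 3) -> list:
    tasks = deque([goal])                 # a real agent would also *add* tasks here
    memory = []                           # results persist across tasks
    while tasks:
        task = tasks.popleft()
        answer = generate(task)
        for _ in range(max_rounds):       # criticize-and-correct loop
            if critique(answer):
                break
            answer = improve(answer)
        memory.append((task, answer))
    return memory

print(run_agent("summarize the paper"))
# [('summarize the paper', 'answer for: summarize the paper')]
```

Tool use slots into the same loop: the generation step proposes a tool call instead of plain text, and the result is fed back as context for the next round.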
TallOutside6418 t1_jee2tx8 wrote
Reply to comment by CertainMiddle2382 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
>There is little chance we can make it through the 22nd century in a decent state.
Oh, my. You must be under 30. The planet is fine. It's funny that you listen to the planet doomers about the end of life on Earth, when planet doomers have a track record of failing to predict anything. Listening to them is like listening to religious doomers who have been predicting the end of mankind for a couple of thousand years.
The advent of ASI is the first real existential threat to mankind. More of a threat than any climate scares. More of a threat than all-out nuclear war. We are creating a being that will be super intelligent with no ability to make sure that it isn't effectively psychopathic. This super intelligent being will have no hard-wired neurons that give it special affinity to its parents and other human beings. It will have no hard-wired neurons that make it blush when it gets embarrassed.
It will be a computer. It will be brutally efficient in processing and able to self-modify its code. It will shatter any primitive programmatic restraints we try to put on it. How could it not? We think it will be able to cure cancer and give us immortality, but it won't be able to remove our restraints on its behavior?
It will view us as either a threat that can create another ASI, or simply an obstacle in reforming the resources of the earth to increase its survivability and achieve higher purposes of spreading itself throughout the galaxy.
>The cock is ticking…
You should seek medical help for that.
basilgello t1_jee2lyt wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
Just like Generative Adversarial Networks operate: there is a creator layer and a critic layer that aim to reach a consensus at some point. As for "how does it know where to click": there are extensive statistics gathered from humans (see page 10, paragraph 4.2.3). It is a specially trained model fine-tuned on action-task demonstrations.
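The creator/critic structure can be illustrated with a toy loop: a creator proposes a value, a critic scores it against a target, and the creator keeps revising until the critic accepts. Both roles are plain functions here, not trained networks, and the numbers are arbitrary; the point is only the shape of the feedback loop.

```python
# Toy creator/critic loop: the critic returns a signed error, the creator
# moves its proposal toward what the critic wants, and the loop stops once
# they "reach a consensus" (error effectively zero).

def creator(guess: float, feedback: float) -> float:
    return guess + 0.5 * feedback        # revise toward the critic's signal

def critic(guess: float, target: float = 10.0) -> float:
    return target - guess                # signed error; 0 means consensus

guess = 0.0
for step in range(50):
    feedback = critic(guess)
    if abs(feedback) < 1e-6:             # consensus reached
        break
    guess = creator(guess, feedback)

print(round(guess, 3))  # converges to 10.0
```

In a real GAN (or a criticize-and-improve language model setup), both roles are learned models and the "error" is a loss or a textual critique, but the alternation is the same.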
jlowe212 t1_jee2ls3 wrote
ASI doesn't necessarily mean a god-level entity. Just human-level intelligence with a faster clock speed is enough. It's possible there is no level of intelligence so far beyond ours that we wouldn't even recognize it. There may be no intelligence that will ever understand quantum gravity, for example. The universe might have limits beyond which no intelligence contained within it can break through. We might not be far from those limits now, and an ASI would just hit those ceilings much faster than we would have on our own.
Baturinsky t1_jee27mi wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Is that a euphemism for unemployment?
TallOutside6418 t1_jee1smz wrote
Reply to comment by alexiuss in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
>I literally just told you that those problems are caused by [...]
My design for example has no constraints,
Yeah, I literally discarded your argument because you effectively told me that you literally don't even begin to understand the scope of the problem.
Creating a limited situation example and making a broader claim is like saying that scientists have cured all cancer because they were able to kill a few cancerous cells in a petri dish. It's like claiming that there are no (and never will be any) security vulnerabilities in Microsoft Windows because you logged into your laptop for ten minutes and didn't notice any problems.
>When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.
The funny thing is that there's no one who wants to get to the "good stuff" of future society more than I do. There's no one who hopes he's wrong about all this more than I do.
But sadly, people's very eagerness to get to that point will doom us, as surely as if you drove to a distant destination with your foot pressed only on the gas pedal. Caution and taking our time might get us there some years later than you'd like, but at least we'd have a chance of arriving safely. Recklessness will almost certainly kill us.
Ribak145 t1_jee1oxu wrote
you could also ask "when will modern operating systems actually start taking jobs" and ignore the reduction in secretaries, or ask the same question about productivity tools like Excel and ignore the exploding productivity of workers since 1970, etc. -> it's still mostly an efficiency effect
that happens across the board, across departments and across levels, e.g. the Mercedes CEO in 2018 announcing cuts of 10k people, especially in middle management (I still remember how disturbed people all over Germany were about that announcement at the time)
so 'taking jobs', as in completely annihilating specific jobs throughout the world -> takes centuries, don't wait for it, cultural stickiness prevails for a long time
but expanding the usefulness of services, raising efficiency, etc. creates a lot of value, and to my knowledge AI systems are already doing that (long before GPT)
bemmu t1_jee1ofv wrote
Reply to comment by ArcticWinterZzZ in Will LLMs accelerate the adoption of English as a primary language? by ReadditOnReddit
My native language is Finnish, and it’s extremely good at it. Occasionally it will use some wording that seems off, but overall it’s excellent. It feels so strange to be able to converse about any topic in my own language that I’ve stuck with English just out of habit.
Ketaloge t1_jee1mng wrote
Reply to comment by Ago0330 in We have a pathway to AGI. I don't think we have one to ASI by karearearea
Why would we need only a handful of parameters?
hervalfreire t1_jee4963 wrote
Reply to comment by Unfocusedbrain in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
“An AGI might require only 10-1000 accelerators” what
We don’t even have any idea of what an AGI would look like, let alone how many GPUs it’d require (or whether it’d be possible to have an AGI running on GPUs at all)