Recent comments in /f/singularity
PM_ME_ENFP_MEMES t1_jeg4oec wrote
Reply to ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
None o’that foreign muck
SucksToYourAssmar3 t1_jeg4m0y wrote
Reply to comment by Moist_Chemistry1418 in The only race that matters by Sure_Cicada_4459
An immortal toddler. The future is bright.
blackremover t1_jeg4lt8 wrote
Reply to comment by confused_vanilla in Just my two cents by [deleted]
This is different. The curve is exponential: it gets exponentially harder. It's easier to make a computer than to make a computer that works on its own.
ShamanicHellZoneImp t1_jeg4jv5 wrote
I will say I watched that interview he did with Lex really closely because I didn't know much about him. He seems more thoughtful and level-headed than 95% of the people who could have wound up in his position. Small comfort, I guess; hopefully that counts for something.
Relevant_Ad7319 t1_jeg4g5y wrote
Reply to Should AIs have rights? by yagami_raito23
No
darkkite t1_jeg4eey wrote
Reply to comment by Sigma_Atheist in Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
I think only one of the authors still works at Google
but I would still put the raw engineering of Google over OpenAI, though Google has problems with execution and maintenance
Focused-Joe t1_jeg4d4a wrote
Reply to Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
Sundar Pichai has reached his dead end. He is ready to go and be replaced by somebody better.
kiropolo t1_jeg4b5v wrote
lol
Pitchforks_n_puppies t1_jeg49n8 wrote
Reply to comment by AdditionalPizza in Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
because it's what they had
Parodoticus t1_jeg46tp wrote
Reply to Just my two cents by [deleted]
Your understanding of what's actually happening is profoundly lacking. The human brain's parallel networking supports something that might very well be impossible in a computer, that being subjectivity. But the whole horrifying discovery of the recent AI explosion is that this subjectivity is an arbitrary byproduct of evolution that is not required in any way to produce a mind every bit as capable as that of man, and even exceeding it. This marvelous soul of ours, this subjectivity we wield behind our eyes, this 'semina Prometheae', the seeds of Promethean fire invested in us: the new discovery is that it's fucking meaningless. It doesn't do anything. It's just an evolutionary accident that isn't used for anything (kind of like how hiccups are just an evolutionary accident: they go along with some other thing necessary to support our biological systems, but they are not themselves used to do anything), and you can take it out of the system entirely and nothing changes about the functioning of cognition- at all. It is worth nothing, whatsoever. Not only does it not have any existential meaning, it doesn't even have any PRACTICAL MEANING. Hey man, it doesn't make me feel good knowing this either, but cold, hard, objective reality has just told us that is exactly the fucking case. Because minds that can do everything our own minds do are being created that don't have any of that magical subjectivity at all.
We need computers a trillion times more advanced to simulate subjectivity, but subjectivity doesn't actually do anything- and with the computers we already have, we can produce intelligence completely detached from any qualia or subjectivity. You see, THAT is the point. It requires an infinitesimal fraction of the power of biological neurons to create cognition and intelligence by themselves, without any subjectivity component. Almost all of the power of biology is wasted producing our consciousness, not producing our intelligence. Because we, humans, are already somewhat intelligent, it turned out to be possible for us to reverse engineer intelligence itself, separate it from subjectivity, and then reproduce it in silicon to create a thinking mind that has no soul, no experience, no subjectivity, but that can still write a symphony, communicate in language, and do literally everything that we can do, including forming unique personalities, theory of mind, intrinsic motivations, and independent thought- the whole shebang. Soon these new silicon minds will not only match us but best us even in the domains we were proudest of, like art, like music. And yet these have no consciousness, no subjectivity. They have thought, though: cognition and intelligence. They have a mind with no soul. They are golems, they are shoggoths, and the world is always inherited by them once they are created.
I bet everything I own that you will admit what I just said is true in less than 18 months, because that is how long man has left at the top of the food chain. I get sick of dealing with people drunk on their delusional copium, refusing to accept AI is here despite it passing every single conceivable test we have thought to give for intelligence and independent thought. It's obvious what it is, self-imposed delusion because the idea of being supplanted as the universe's most intelligent being is just that repulsive to a lot of people. I for one welcome it if it means I don't ever have to have another meaningless debate with somebody on something that's already over and done with.
Essentially: in order for evolution to produce intelligence- because evolution is blind, deaf, and dumb- Nature had to begin with a weird mutant fish tadpole thing that accidentally spawned with a clump of its nerve cells on the outside of its fuckin' head or something. When that little deformed dingus then bumped into something, the cells fired and jerked it out of the way, gave it a little micro seizure that ended up saving it from lemming-ing itself to death like all of its forebears: yeah, it was "intelligent". So that locked evolution into producing intelligence in this roundabout way, namely by developing the nervous system. These kinds of primordial reflexes get filtered through multiple layers of brain tissue, all the way up to the human neocortex, each time becoming more and more tightly interconnected, supporting more and more complex behavior- more intelligence, which lends itself to more successful rounds of passing our genome on. So greater intelligence going along with greater subjectivity, with deepened internal experience: that was just an accident. And not even a happy accident, just a stupid one. We have essentially proven that now. And now- now we can create this intelligence in a much better, less roundabout way- we can create intelligence, we can create entire minds that are not burdened with supporting this purposeless 'experience' thing. Indeed, we could not simulate experience and subjectivity even if we pooled all the computing resources on earth with present technology- but that's meaningless, because subjectivity is not required to produce intelligence and cognition, which these new AIs are doing with the smallest fraction of the resources demanded by a nervous system, given that 99 percent of that nervous system's resources are devoted to 'computing' things that don't have any impact on the mind and intelligence and thought. We found a way to cut that whole 99 percent bullshit out of the picture.
confused_vanilla t1_jeg43uh wrote
Reply to Just my two cents by [deleted]
>AGI will not happen in your lifetime
There are still people alive from before the first real computers were made (not the theory), and from when flight was still a new technology. Who are you to make this bold of a claim with such certainty?
Focused-Joe t1_jeg42la wrote
Reply to Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
Google Cloud Platform is an amazing place for developers, even better than Azure and AWS. I don't know what they're waiting for with integrating AI into the Google platform.
AdditionalPizza OP t1_jeg3ynf wrote
Reply to comment by Pitchforks_n_puppies in Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
Awesome, can you ask Pichai the question in OP for me, why release Bard at all?
Ok-Variety-8135 t1_jeg3txj wrote
Reply to Should AIs have rights? by yagami_raito23
Only if they sincerely ask for it.
AdditionalPizza OP t1_jeg3qc0 wrote
Reply to comment by SharpCartographer831 in Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
In the same eerie vein as my thinking, friend. Releasing Bard to draw prying eyes away, maybe.
turnip_burrito t1_jeg3lh1 wrote
Reply to comment by webernicke in Goddamn it's really happening by BreadManToast
Born just in time to become immortal, better versions of ourselves, maybe.
yeaman1111 t1_jeg3fac wrote
Reply to Tech World Be Like... by sweetpapatech
While I appreciate the meme, I'm not sure I like what it says about this sub's direction...
A_Human_Rambler t1_jeg38wo wrote
Reply to Should AIs have rights? by yagami_raito23
We need to define legal personhood and extend rights to more animals and someday AI. It's not there yet, but it will be and we need to be ready.
Jeffy29 t1_jeg33to wrote
Reply to Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
That's one of the most boring names I've seen a paper have lol, though I skimmed it and it looks quite good and is surprisingly readable. I don't think this method will be adopted anytime soon, though: from the description it sounds quite heavy on inference, and given how much compute is needed for current (and rapidly growing) demand, you don't want to add to it when you can just train a better model.
The current field really reminds me of the early semiconductor era: everyone knew there were lots of gains to be had by making transistors in a smarter way, but there wasn't the need while node shrinking was going so rapidly. It wasn't until the late 2000s and 2010s that the industry really started chasing those gains, of which there are plenty, but it isn't nearly as cheap or fast as the good ol' days of transistor shrinking. Still, it's good to know that even if LLM performance gains inexplicably stop completely tomorrow, we have lots of methods (like this one and others) to improve their performance.
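(For anyone curious, the criticize-then-improve idea is simple enough to sketch. This is just a rough Python illustration of that loop, not the authors' code; `ask_model` is a hypothetical stand-in for whatever LLM API you'd call, and every extra round is another batch of inference calls, which is exactly the cost I mean.)

```python
# Rough sketch of a criticize-then-improve loop (illustration only, not the paper's code).

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: plug in your actual LLM call here.
    raise NotImplementedError("wire this up to your LLM API")

def recursively_improve(task: str, rounds: int = 2) -> str:
    # Initial attempt at the task.
    answer = ask_model(f"Task: {task}\nAnswer:")
    for _ in range(rounds):
        # Ask the model to critique its own answer...
        critique = ask_model(
            f"Task: {task}\nAnswer: {answer}\n"
            "Review this answer and list any mistakes:"
        )
        # ...then revise the answer using that critique.
        answer = ask_model(
            f"Task: {task}\nAnswer: {answer}\nCritique: {critique}\n"
            "Write an improved answer:"
        )
    return answer
```

Each round is two more model calls per task, so the inference bill grows linearly with the number of critique rounds.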
SharpCartographer831 t1_jeg2w0h wrote
Reply to Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
If we're only a few years away, with some suggesting 18 months, then it stands to reason that Google and multiple other labs have crossed the finish line and are waiting to inform the public. Possibly also due to government intervention, which could explain a few things about Google going dark.
Pitchforks_n_puppies t1_jeg2qea wrote
Reply to Over-under that Google already has AGI? Try and rationalize the release of Bard. by AdditionalPizza
I've talked to people at Google and all accounts are that they are scrambling. There's no subterfuge behind Bard.
Fair_Jelly t1_jeg2oyh wrote
Reply to What do I do? by SprayOnMe43
Don't take career advice from this sub.
SkyeandJett t1_jeg2kr6 wrote
Reply to comment by Different-Froyo9497 in Meta AI: Robots that learn from videos of human activities and simulated interactions by TFenrir
While this is true at a surface level it's another exponential process (within reason and limited by logistics). You build your initial factory and every robot produced does nothing else but bolster the infrastructure necessary to build more robots until you hit some critical mass of production and then send them out to do everything else (all calculated and coordinated by the AGI). That's the smartest thing to do but it relies on the government pulling their heads out of their ass. Instead we'll probably continue to rely on capitalism and draw the whole fucking process and pain out way longer than necessary.
VancityGaming t1_jeg2gcc wrote
Reply to comment by SkyeandJett in Meta AI: Robots that learn from videos of human activities and simulated interactions by TFenrir
>Absolutely fantastic. To the top of the sub you go. ~~Generalist android labor is~~ Androids trained on Pornhub are coming hard and fast. The world is about to become a VERY different place.
Relevant_Ad7319 t1_jeg4qmg wrote
Reply to Sam Altman's tweet about the pause letter and alignment by yottawa
Very difficult to build an alignment data set that everyone agrees on