Recent comments in /f/singularity
agonypants t1_je8hynu wrote
He'd rather see a full-scale nuclear war than train some AI machines? What a fucking kook this guy is. Hopefully nobody takes this loon seriously.
StevenVincentOne t1_je8hw4z wrote
Reply to comment by [deleted] in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Excellent points. One could expand on the theme of variations in human cognition almost infinitely. Surely there are books written about it? If not... wow, huge opportunity for someone.
As a meditator and a teacher of meditation and other such practices, I have seen that most people have no awareness that they have a mind... they perceive themselves as their mind's activity. A highly trained mind has a very clear cognitive perception of a mind that experiences mental activity, and it can actually stop producing that activity altogether. The overwhelming majority of people self-identify with the contents of the mind. This is just one of the many cognitive variations one could go on about.
Truly, the discussion about AI and its states and performance is shockingly thin and shallow, even among those involved in its creation. Some of Stephen Wolfram's recent comments have been surprisingly short-sighted in this regard. Brilliant in so many ways, but blinded by bias here.
Shack-app t1_je8hr31 wrote
Reply to comment by Iffykindofguy in What are the so-called 'jobs' that AI will create? by thecatneverlies
Maybe, maybe not. “Best” will be a moving target, but generically it will mean “best at using LLMs to do their job”.
Spire_Citron t1_je8hn6c wrote
Reply to comment by Mrkvitko in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Especially since AI has the potential to make incredible positive contributions to the world. Nuclear war, not so much.
justowen4 t1_je8hj5f wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
It’s also not true. Even Stephen Wolfram, who is a legitimate genius in the technical sense of the word, has to rework the definition of “understand” to avoid applying it to ChatGPT. Understanding, like intelligence, has to be defined in terms of thresholds of geometric associations, because that’s what our brain does. And guess what: that’s what LLMs do. It’s coordinates at the base layer. That doesn’t mean they’re conscious, but it’s definitely intelligence and understanding at the fundamental substrate. Redefining these words so that only humans can participate is just egotistical nonsense.
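To make “coordinates at the base layer” concrete: these models represent each token as a point in a high-dimensional space, and related meanings end up near each other. A toy sketch in Python (the 3-d vectors here are invented purely for illustration; real embeddings have hundreds or thousands of learned dimensions):

```python
import numpy as np

# Made-up 3-d "embeddings" for illustration only; real models
# learn vectors with hundreds or thousands of dimensions.
emb = {
    "cat":    np.array([0.90, 0.80, 0.10]),
    "kitten": np.array([0.85, 0.75, 0.20]),
    "car":    np.array([0.10, 0.20, 0.90]),
}

def cosine(a, b):
    # Angle-based similarity: close to 1.0 means "pointing the
    # same way" in meaning-space.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["cat"], emb["kitten"]))  # high: nearby coordinates
print(cosine(emb["cat"], emb["car"]))     # lower: far apart in meaning-space
```

The "geometric association" being argued about is exactly this kind of nearness, scaled up enormously.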
CaspinLange t1_je8hao3 wrote
What’s also interesting is that autocratic governments don’t want to unleash an AI that can’t be controlled any more than democratic countries do.
Mortal-Region t1_je8h882 wrote
A neural network has a great many weights: numbers representing the strengths of the connections between the artificial neurons. Training is the process of setting those weights in an automated way. Typically, a network starts out with random weights. Then training data is presented to the network, and the weights are adjusted incrementally until the network learns to do what you want. (That's the "learning" part of machine learning.)
For example, to train a neural network to recognize cats, you present it with a series of pictures, one after the other, some with cats and some without. For each picture, you ask the network to decide whether the picture contains a cat. Initially, the network guesses randomly because the weights were initialized randomly. But every time the network gets it wrong, you adjust the weights slightly in the direction that would have given the right answer. (Same thing when it gets the answer right; you reinforce the weights that led to the correct answer.)
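To make that loop concrete, here's a toy sketch in Python/NumPy: a single-layer "network" trained by exactly that nudge-the-weights-on-each-error process. (Everything here is synthetic and stands in for the cat pictures; a real image classifier would use many layers and a framework like PyTorch.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "pictures": 64-number feature vectors.
# Label 1.0 = "cat", 0.0 = "no cat" (synthetic data, for illustration only).
n_samples, n_features = 200, 64
X = rng.normal(size=(n_samples, n_features))
secret_w = rng.normal(size=n_features)      # hidden rule defining "cat"
y = (X @ secret_w > 0).astype(float)

# Start with (near-)random weights, as described above.
w = rng.normal(size=n_features) * 0.01
b = 0.0
lr = 0.5  # learning rate: how big each weight adjustment is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    p = sigmoid(X @ w + b)    # the network's guess for each picture
    err = y - p               # positive if it guessed too low, negative if too high
    w += lr * (X.T @ err) / n_samples   # nudge each weight toward the right answer
    b += lr * err.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.0%}")
```

A real network repeats the same idea at scale: compute a guess, measure the error, and push every weight a small step in the direction that shrinks it (backpropagation works out that direction through all the layers).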
For larger neural networks, training requires an enormous amount of processing power, and the workload is distributed across multiple computers. But once the network is trained, it requires much less power to just use it (e.g., to recognize cats).
FlyingCockAndBalls t1_je8h4g3 wrote
Reply to comment by ActuatorMaterial2846 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
what is so special about the transformer architecture?
Iffykindofguy t1_je8h3td wrote
Reply to comment by Shack-app in What are the so-called 'jobs' that AI will create? by thecatneverlies
No, it will favor the most connected and rich in each field; "best" has nothing to do with it. Code will eventually die out, and you won't be able to keep up with the AI.
Iffykindofguy t1_je8gquu wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Me, I am to say.
FlyingCockAndBalls t1_je8gqju wrote
Reply to Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
it feels like the pace is moving lightning fast, and yet also super slow.
Iffykindofguy t1_je8gmhr wrote
Reply to comment by BigZaddyZ3 in Do people really expect to have decent lifestyle with UBI? by raylolSW
Life.
agonypants t1_je8gik7 wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
In his recent CBS interview, Hinton echoed this: while present LLMs are "prediction engines," a model cannot predict the next word in a given sentence without understanding the sentence's context. No matter how much the /r/futurology doomers want to deny it, these machines have some level of understanding.
ActuatorMaterial2846 t1_je8gg68 wrote
Reply to Thoughts on this? by SnaxFax-was-taken
It's certainly possible. But I've read his books, and his concept of achieving immortality runs through nanobots. Although I have great respect for the man, I have no clue where we are in terms of nanotech; I haven't seen any papers or notable research on it.
So yes, I think he is usually onto something when he makes his predictions, and I'm particularly in agreement with his AGI predictions (albeit he seems a little conservative compared to others). But I'm not sure nanotech will advance quickly enough to get us to the stage he expects by 2030.
Mind_Of_Shieda t1_je8gahg wrote
Reply to comment by yagami_raito23 in What are the so-called 'jobs' that AI will create? by thecatneverlies
I CAN! Window cleaner for rich people.
Justtelf t1_je8g8g2 wrote
It’s possible. Just as we can likely expect advancements in every field. It’s also possible that there’s no easy solution, and there will never be a magic pill or procedure to fix it other than consistent work, with or without medication.
NikoKun t1_je8g4xn wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I agree. Tho I think it's just people using the idea of AI not "understanding" to make themselves feel more comfortable with how good things are getting, and to 'move the bar' on what constitutes "real AI".
I recently stumbled upon this video that does a decent job explaining what I think you're trying to get across.
lightinitup t1_je8fwca wrote
Reply to comment by Szabe442 in Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
I don’t think that’s a fair comparison. If the majority of white male portrayals were evil and inept, then I would agree that it was problematic. But that’s not the case. For every negative portrayal, there are many more positive ones.
And I think you are drawing the wrong conclusions. I’m not saying every character needs to be positive. I’m saying we need more balanced representation in the media. Once we have that, this stereotype wouldn’t exist, and then it would be acceptable. Until then, perpetuating it is harmful.
And if you refuse to believe that perpetuating stereotypes is harmful to society, then I think we will just have to agree to disagree.
ActuatorMaterial2846 t1_je8fqgw wrote
Reply to comment by Not-Banksy in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
No worries. I'll also point out that the magic behind all this is the transformer architecture in particular. That's the real engine behind LLMs and other models.
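For the curious, the core trick is self-attention: every token scores its relevance to every other token, then takes a weighted blend of their representations, so context flows everywhere in a single step. A minimal sketch in Python/NumPy (toy sizes; real transformers add learned query/key/value projections, multiple heads, and stacked layers):

```python
import numpy as np

def self_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (tokens, tokens) relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row becomes probabilities
    return weights @ V                              # blend value vectors by relevance

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))  # a toy "sentence": 4 tokens, 8 numbers each
out = self_attention(tokens, tokens, tokens)  # Q, K, V all from the same tokens
print(out.shape)  # (4, 8): each token is now a context-aware mix of all tokens
```

Because those score-and-blend steps are just matrix multiplications, they parallelize extremely well on GPUs, which is a big part of why this architecture scaled where earlier recurrent models stalled.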
Localmutant850 t1_je8fbw7 wrote
I would like to join
Comfortable-Hat9821 t1_je8f9v8 wrote
We need a road map to use in our best interests. Alan Levy
Loud_Clerk_9399 t1_je8f5og wrote
Reply to comment by Loud_Clerk_9399 in What are the so-called 'jobs' that AI will create? by thecatneverlies
Actually, I was incorrect. This did incorporate estimates of 4, but we will be at 5. It sounds like things will be slowing down after 5, at least a little bit.
FlyingCockAndBalls t1_je8f2a5 wrote
Reply to comment by barbariell in OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
real as fuck
skuzzkitty t1_je8f0xp wrote
I’m dying for personal therapists. Imagine an always-on, always-available AI therapist that you can connect to any time you need help, or keep on all the time so it learns when you’ll have issues and starts talking you through them before things get bad.
WarProfessional3278 t1_je8i1a9 wrote
Reply to OPUS AI: Text-to-Video Game, the future of video gaming where you type and a 3D World emerges: A Demo by Hybridx21
Just a heads up, this demo has existed for more than two years. Here's the original demo by the CEO of said company (posted on Jun. 23, 2020). Also, the tweet author linked here is a pretty unreliable source imo.
I have been unable to find any playable alpha version of their software, so I have to remain skeptical of how it actually works. The demo could be scripted.