Recent comments in /f/singularity

WarProfessional3278 t1_je8i1a9 wrote

Just a heads up, this demo has existed for more than two years. Here's the original demo by the CEO of said company (posted on Jun. 23, 2020). Also, the author of the tweet linked here is a pretty unreliable source, imo.

I have been unable to find any playable alpha version of their software, so I have to remain skeptical of how it actually works. The demo could be scripted.

25

StevenVincentOne t1_je8hw4z wrote

Excellent points. One could expand on the theme of variations in human cognition almost infinitely. There must be books written about it, surely? If not... wow, huge opportunity for someone.

As a meditator and a teacher of meditation and other such practices, I have seen that most people have no cognition that they have a mind; they perceive themselves as their mind's activity. A highly trained mind has a very clear cognitive perception of itself as a mind that experiences mental activity, activity which can actually be switched off at the source. The overwhelming majority of people self-identify with the contents of the mind. This is just one of the many cognitive variations that one could go on about.

Truly, the discussion about AI and its states and performance is shockingly thin and shallow, even among those involved in its creation. Some of Stephen Wolfram's recent comments have been surprisingly short-sighted in this regard. Brilliant in so many ways, but blinded by bias here.

6

justowen4 t1_je8hj5f wrote

It’s also not true. Even Stephen Wolfram, who is a legitimate genius in the technical sense of the word, has to rework the definition of “understand” to avoid applying it to ChatGPT. Understanding, like intelligence, has to be defined in terms of thresholds of geometric associations, because that’s what our brain does. And guess what: that’s what LLMs do. It’s coordinates at the base layer. That doesn’t mean they’re conscious, but it is definitely intelligence and understanding at the fundamental substrate. Redefining these words so that only humans can participate is just egotistical nonsense.
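To make “coordinates at the base layer” concrete, here's a rough Python sketch with hand-made toy vectors (the numbers and the three dimensions are invented for illustration; real models learn embeddings with thousands of dimensions): words become points in a space, and association is geometric closeness between those points.

```python
import math

# Hypothetical 3-dimensional word embeddings. Real LLMs learn
# vectors with thousands of dimensions; these numbers are made up.
embeddings = {
    "cat":   [0.9, 0.1, 0.3],
    "dog":   [0.8, 0.2, 0.4],
    "piano": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    # Geometric association: how closely two word-vectors point
    # in the same direction (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # ~0.98, related
print(cosine_similarity(embeddings["cat"], embeddings["piano"]))  # ~0.27, unrelated
```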

1

Mortal-Region t1_je8h882 wrote

A neural network has a great many weights: numbers representing the strengths of the connections between the artificial neurons. Training is the process of setting those weights in an automated way. Typically, a network starts out with random weights. Then training data is presented to the network, and the weights are adjusted incrementally until the network learns to do what you want. (That's the learning part of machine learning.)

For example, to train a neural network to recognize cats, you present it with a series of pictures, one after the other, some with cats and some without. For each picture, you ask the network to decide whether the picture contains a cat. Initially, the network guesses randomly because the weights were initialized randomly. But every time the network gets it wrong, you adjust the weights slightly in the direction that would have given the right answer. (Same thing when it gets the answer right; you reinforce the weights that led to the correct answer.)
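Here's a minimal sketch of that loop in Python (a toy, perceptron-style update with invented features and data, not how any production vision system works; real networks adjust millions of weights via gradient descent, but the shape of the loop is the same):

```python
import random

# Toy "is there a cat?" classifier: a single artificial neuron.
# Each picture is reduced to three made-up numeric features.
def predict(weights, features):
    # Weighted sum of the inputs; guess "cat" (1) if it's positive.
    total = sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

# Start with random weights, as described above.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]

# Invented training data: (features, label), where label 1 = cat.
training_data = [
    ([0.9, 0.2, 0.7], 1),
    ([0.1, 0.8, 0.3], 0),
    ([0.8, 0.1, 0.9], 1),
    ([0.2, 0.9, 0.1], 0),
]

learning_rate = 0.1  # how big each incremental adjustment is

for _ in range(100):
    for features, label in training_data:
        error = label - predict(weights, features)  # 0 when correct
        # Nudge each weight slightly in the direction that would
        # have produced the right answer (the perceptron rule).
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
```

In this simplified rule the error is zero when the guess is right, so the weights only move on mistakes; gradient-based training, as described above, also nudges the weights on correct-but-uncertain answers.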

For larger neural networks, training requires an enormous amount of processing power, and the workload is distributed across multiple computers. But once the network is trained, it requires much less power to just use it (e.g., to recognize cats).
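Continuing the toy sketch above (reusing its hypothetical predict function and trained weights), this is why inference is comparatively cheap: classifying a new picture is a single pass of arithmetic, with no loop over training data and no weight updates.

```python
# Using the trained network: one forward pass, no weight updates.
new_picture = [0.85, 0.15, 0.8]  # invented features for a new image
print("cat" if predict(weights, new_picture) == 1 else "not a cat")
```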

23

agonypants t1_je8gik7 wrote

In his recent CBS interview, Hinton echoed this: while present LLMs are "prediction engines," a model cannot predict the next word in a given sentence without understanding the sentence's context. No matter how much the /r/futurology doomers want to deny it, these machines have some level of understanding.

1

ActuatorMaterial2846 t1_je8gg68 wrote

It's certainly possible. But I've read his books, and his concepts for attaining immortality run through nanobots. Although I have great respect for the man, I have no clue where we are in terms of nanotech. I haven't seen any papers or notable research on it.

So yes, I think he is usually on to something when he makes his predictions, and I'm particularly in agreement with his AGI predictions, albeit he seems a little conservative compared to others. But I'm not sure nanotech will advance quickly enough to get us to the stage he expects by 2030.

2

Justtelf t1_je8g8g2 wrote

It’s possible, just as we can likely expect advancements in every field. It’s also possible that there’s no easy, simple solution, and that there will never be a magic pill or procedure to fix it, only consistent work, with or without medication.

1

NikoKun t1_je8g4xn wrote

I agree. Though I think it's just people using the idea that AI doesn't "understand" to make themselves feel more comfortable with how good things are getting, and to 'move the bar' on what constitutes "real AI".

I recently stumbled upon this video that does a decent job explaining what I think you're trying to get across.

1

lightinitup t1_je8fwca wrote

I don’t think that’s a fair comparison. If the majority of white male portrayals were evil and inept, then I would agree that it was problematic. But that’s not the case: for every negative portrayal, there are many more positive ones.

And I think you are drawing the wrong conclusions. I’m not saying every character needs to be positive; I’m saying we need more balanced representation in the media. Once we have that, the stereotype wouldn’t exist, and then it would be acceptable. Until then, perpetuating it is harmful.

And if you refuse to believe that perpetuating stereotypes is harmful to society, then I think we will just have to agree to disagree.

1

skuzzkitty t1_je8f0xp wrote

I’m dying for personal therapists. Imagine an always-on, always-available AI therapist that you can connect to any time you need help, or keep on all the time so it learns when you’re likely to have issues and starts talking you through them before things get bad.

2