4e_65_6f t1_iw2szdy wrote
>After they start not needing us for anything what makes you think they’ll keep baggage like us around? All 8 billion of us? You think they gonna let us leech for old time’s sake?

The reason I think they will help humanity is that once production becomes fully automated, helping becomes very easy: all it would take is telling the AI to do it, at basically no cost (since money itself will be useless at that point).
Consider this situation, for instance:
AI: Sir, people are starving and need food. We estimate we can solve it within a day, at no cost, without delaying other plans, and you get all the credit for feeding everyone.

What reason would anyone have to answer no in that situation?
I think billionaires are greedy, but they aren't psychotic sadists. Think how much of an asshole you'd have to be not to share something that is basically infinite and free for you.
4e_65_6f t1_iv10sdx wrote
Reply to Yet another graphic engine - but totally free, v1.5, and 5 sec to generate, and with search by Sefi_AI
How can I upload images for variants? I couldn't find that function on the web version.
4e_65_6f t1_iv0prqo wrote
Reply to comment by [deleted] in What would the "elite" be doing if they thought AGI was about to happen? by sideways
Yeah, I also thought about open-sea 'floating housing' and underwater options.
There are plenty of options, but I still think that comes after all the jobs are already taken.
4e_65_6f t1_iuza2t8 wrote
Buying land. I think that's the last thing AI figures out how to make more of.
Not that it won't eventually though.
4e_65_6f t1_itqt6hl wrote
Reply to comment by ReadSeparate in Large Language Models Can Self-Improve by xutw21
>Behavioral outputs ARE all that matters. Who cares if a self driving car “really understands driving” if it’s safer and faster than a human driver.
>
>It’s just a question of, how accurate are these models at approximating human behavior? Once it gets past the point of anyone of us being able to tell the difference, then it has earned the badge of intelligence in my mind.
I think the intelligence itself comes from whoever wrote the data the AI was trained on, whatever that may be. The model doesn't have to be actually intelligent on its own; it only has to learn to mimic the intelligent process behind the data.
In other words it only has to know "what" not "how".
In terms of utility I don't think there's any difference either; people seem to be concerned with the moral implications of it.
For instance I wouldn't be concerned with a robot that is programmed to fake feeling pain. But I would be concerned with a robot that actually does.
The problem is: how the hell could we tell the difference? Especially if it improved on its own and we don't understand exactly how. It will tell you that it does feel it, and it will seem genuine, but if it's anything like GPT-3 that would be a lie.
And since we're dealing with billions of parameters now, it becomes a next-to-impossible task to distinguish between the two.
4e_65_6f t1_itqa2rt wrote
Reply to comment by red75prime in Large Language Models Can Self-Improve by xutw21
Sure, but the point is that it may not be up to us anymore. There may be nothing else people can do once AI starts improving on its own.
4e_65_6f t1_itpiu1j wrote
Reply to comment by Grouchy-Friend4235 in Large Language Models Can Self-Improve by xutw21
That's not what it does though. It's copying their odds of saying certain words in a certain order. It's not like a parrot/recording.
4e_65_6f t1_itn8yaa wrote
Reply to comment by Grouchy-Friend4235 in Large Language Models Can Self-Improve by xutw21
I also believe that human general intelligence is in essence geometric intelligence.
But what happens is, whoever wrote the text being used as data put the words in the order they did for an intelligent reason. So when you copy the likely ordering of words, you are also copying the reasoning behind their sentences.
So in a way it is borrowing your intelligence when it selects the next words based on the same criteria you used while writing the original text data.
4e_65_6f t1_itn1bfj wrote
Reply to comment by harharveryfunny in Large Language Models Can Self-Improve by xutw21
Doesn't that lead to overly generic answers? Like it will pick what most people would likely say rather than the truth? I remember making a model that filled in the most common next word, and it would get stuck going "is it is it is it..." and so on. I guess that method could result in very good answers, but that will depend on the data itself.
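To illustrate what I mean (just a toy sketch I'm making up here, not any real model): a greedy "most common next word" rule over simple bigram counts falls into exactly that kind of loop.

```python
# Toy sketch: greedy "most common next word" prediction from bigram counts.
# With greedy picking it can easily get stuck repeating "is it is it ...".
from collections import Counter, defaultdict

words = "is it true that it is it or is it not it is".split()

# Count which word most often follows each word.
followers = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def generate(start, steps=10):
    word, out = start, [start]
    for _ in range(steps):
        if word not in followers:
            break
        # Always pick the single most common follower (greedy choice).
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("is"))  # -> "is it is it is it ..."
```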
4e_65_6f t1_itmidn1 wrote
Reply to comment by expelten in Large Language Models Can Self-Improve by xutw21
We should just change the community banner to a big "we told you so" when it finally happens.
4e_65_6f t1_itmg5pl wrote
Reply to comment by Ribak145 in Large Language Models Can Self-Improve by xutw21
Yeah, I was thinking about this the other day. You don't have to know what multiplication means if you know all possible outcomes by memory. It's kind of a primitive approach, but usage-wise it would be indistinguishable from multiplication. I think the same thing may apply to many different concepts.
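As a quick toy sketch of that idea (my own made-up example, nothing more): precompute every product once, and answering by lookup is externally indistinguishable from doing the math, at least within the table's range.

```python
# Toy sketch: "multiplication" as pure memory.
# Precompute every outcome once; afterwards the answer comes from recall,
# not arithmetic, yet from the outside it looks the same (within range).
N = 100
times_table = {(a, b): a * b for a in range(N) for b in range(N)}

def multiply_by_memory(a, b):
    # No arithmetic here, just recall of a stored outcome.
    return times_table[(a, b)]

print(multiply_by_memory(30, 3))  # 90, same answer either way
```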
4e_65_6f t1_itk20vo wrote
Reply to comment by Angry_Grandpa_ in Large Language Models Can Self-Improve by xutw21
If it truly can improve upon itself and there isn't a wall of sorts, then I guess this is it, right? What else is there to do even?
4e_65_6f t1_itjqp7i wrote
Reply to Large Language Models Can Self-Improve by xutw21
Wouldn't it be kinda funny if it turns out the key to AGI was "Make language model bigger" all along?
4e_65_6f t1_istf1vm wrote
Reply to comment by UpsetRabbinator in Is this imagination? by Background-Loan681
I didn't say it wasn't intelligence, just that it's not doing what OP asked it to.
If I told you to multiply 30*3 in your head, you could just remember that the result is 90 and, with no knowledge of multiplication, answer from memory rather than doing the math.
The prompt asked it to imagine, and instead it is only concerned with convincing the user that it did, using text references, rather than actually performing the task.
4e_65_6f t1_ispuk3v wrote
Reply to comment by Background-Loan681 in Is this imagination? by Background-Loan681
Well, it would be comparable to someone asking you to imagine something and, instead of doing it, you formulate the text response most similar to what you'd expect from someone who did imagine it. I agree it's not an easy thing to distinguish.
4e_65_6f t1_isp73il wrote
Reply to Is this imagination? by Background-Loan681
This is not imagination; this is the most likely answer to the prompt "imagine something" given the text data. It's evaluating the probability of such-and-such text appearing, not obeying your commands.
Edit: In a sense, it could be considered similar to imagination, since whatever text it is using as reference was written by someone who did imagine something. So in a way it's picking bits and parts of someone's insight into imagination, but the engine itself isn't imagining anything on its own.
4e_65_6f t1_irv74gd wrote
Reply to Any examples of future prediction models? by Mr_Hu-Man
GPT-3 uses sequences to 'predict' what word comes next.
You could probably train it to predict the weather with a database of sequences of weather events, and it should output the event most likely to happen next based on past reference.
This principle should in theory work for anything, as long as your database accurately describes the events as an understandable sequence of text.
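Something along these lines, as a very rough toy sketch (made-up events and simple counts standing in for an actual trained language model):

```python
# Toy sketch: predict the next weather "event" from past sequences,
# using bigram counts as a crude stand-in for a trained language model.
from collections import Counter, defaultdict

history = ["sunny", "sunny", "cloudy", "rain", "cloudy", "rain", "rain", "sunny"]

# Count which event most often follows each event.
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(event):
    # Return the event that most often followed `event` in past data.
    if event not in transitions:
        return None
    return transitions[event].most_common(1)[0][0]

print(predict_next("cloudy"))  # -> "rain", based on this toy history
```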
4e_65_6f t1_irrxxz2 wrote
Reply to comment by Equivalent-Ice-7274 in Am I crazy? Or am I right? by AdditionalPizza
>I can tell you that your username is AdditionalPizza,
This post of mine may clarify a little about their ability to understand your username. That isn't a good indication anymore; for all you know, you may be talking to a bot right now and not even know it.
4e_65_6f t1_irrhhea wrote
Reply to comment by AdditionalPizza in Am I crazy? Or am I right? by AdditionalPizza
For now, there are still little tells that give it away, like how logical arguments still seem too hard for these engines, but I can't imagine that lasting much longer. (Even then, you may just think you're talking to someone who is stupid.)
But yeah, I can't think of any good solutions; it's a good rule to just be skeptical of everything in general.
4e_65_6f t1_irrf9u5 wrote
Reply to Am I crazy? Or am I right? by AdditionalPizza
Yeah, especially in this sub; sometimes you'll get an answer and you'll be like "is this GPT-3?" and sometimes it is.
Also, some people pretend to be robots, which makes it extra confusing. LMAO
There's nothing we can do; might as well get used to it.
4e_65_6f t1_iqzkiuo wrote
Reply to comment by Analog_AI in Researcher offers new explanation for consciousness by Dr_Singularity
Awareness would be just the representation of the data, without the observer.
For instance, you can open a page of a .pdf document; in that moment the computer is "aware" of that data, but it can't actually "read" the document by itself.
Another good example would be dreaming: you can be "aware" (have memories and neurons firing) without being conscious and experiencing what's going on.
4e_65_6f t1_iqziz5q wrote
Reply to comment by superluminary in Researcher offers new explanation for consciousness by Dr_Singularity
Sentience confirmed, someone call Blake Lemoine.
4e_65_6f t1_iqziwnv wrote
Reply to comment by Analog_AI in Researcher offers new explanation for consciousness by Dr_Singularity
My understanding of it is the "observer" itself. The thing you call "me" that's reading this right now.
4e_65_6f t1_iqy2xsj wrote
My best interpretation of this article in a nutshell is:
"Consciousness is the ability to remember your own internal and subjective processes, creating subjective awareness"
But that begs the question: what is remembering? Is memory itself remembering other memories?
That would be my definition of self-awareness, not consciousness.
TBH I see more complex arguments from random people on this sub, maybe you guys should be writing these articles instead.
4e_65_6f t1_iw2u2hb wrote
Reply to comment by OneRedditAccount2000 in What if the future doesn’t turn out the way you think it will? by Akashictruth
Yeah, that sounds reasonable if Dr. Evil were in charge, but in reality I don't think anyone would sterilize the whole population to save some time on their schedule.