Recent comments in /f/singularity
EddgeLord666 t1_jeahvtt wrote
Reply to comment by MichaelsSocks in I want a a robo gf by epic-gameing-lad
I mean, if you’re gonna have that attitude, you might as well apply it to everything in this sub. Sexbots are almost certainly one of the easiest innovations to achieve, and they sort of already exist, except with really primitive AI systems. I guess if you want to speculate about global catastrophes, then who knows what will happen, but that could apply to anything; it doesn’t mean we shouldn’t make predictions. Also, this may be a bit politically incorrect, but I think the main reason most men don’t want to date a trans woman is that she doesn’t pass well enough. In a fully transhumanist world where people could look indistinguishable from the opposite of their birth sex, I don’t think almost anyone would have an issue dating a trans person.
genericrich t1_jeahu1l wrote
Reply to GPT characters in games by YearZero
I don't think it will be useful for games, since games are a storytelling medium, and introducing randomness that can't be well controlled into stories makes them bad stories.
I don't think it's feasible to make it work well enough.
TupewDeZew t1_jeahtfa wrote
Reply to comment by SlenderMan69 in Connecting your Brain to GPT-4, a guide to achieving super human intelligence. by CyberPunkMetalHead
That's beyond science
zendonium t1_jeahrw0 wrote
This is the Gartner Hype Cycle, and it has no scientific basis whatsoever. It's been debunked multiple times.
But yeah, I'm at the top of the rollercoaster.
[deleted] t1_jeahrdw wrote
Reply to comment by ShaneKaiGlenn in If you can live another 50 years, you will see the end of human aging by thecoffeejesus
[removed]
johnlawrenceaspden t1_jeahoe7 wrote
Reply to comment by huskysoul in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Should probably fear both.
johnlawrenceaspden t1_jeahl36 wrote
Reply to comment by acutelychronicpanic in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
It's not that he thinks this is the solution. It's that he thinks there's no feasible solution, and he's trying honest communication as a last-ditch attempt.
BJPark t1_jeahiz6 wrote
Reply to comment by natepriv22 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
That's all very well. But are you suggesting we give people things for free, or do you expect people to pay for these demands with money?
AdmirableTea3144 t1_jeahhl6 wrote
Reply to comment by TopicRepulsive7936 in Where do you place yourself on the curve? by Many_Consequence_337
The S curve will end up looking like a hockey stick. Upwards and onwards!
johnlawrenceaspden t1_jeaham1 wrote
Reply to comment by acutelychronicpanic in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
I thought that too, but presumably Time wouldn't publish the article without one. https://archive.md/NM2n6
grimorg80 t1_jeagznm wrote
Reply to The next step of generative AI by nacrosian
External memory would be the biggest one still missing
natepriv22 t1_jeagx0r wrote
Reply to comment by BJPark in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Demand is based on infinite wants and desires (plus values, needs, and utility), whether physical, abstract, or both. Demand can be influenced by grounded or imaginary wants and desires.
That's at the larger, broader scale; at the smaller scale, it can be influenced by any external or internal stimulus, which in turn moves the broader scale.
Example: a student sees that a friend has a nice pen, and it creates a desire to get that pen themselves.
Labor influences prices, but it does not determine the value of a good. Labor can influence what people demand, but it doesn't create demand itself.
If labor and demand are inseparable, as you say, then do unemployed people, children, and old people have no wants, desires, or needs?
Humans will always "demand" whether they are working or not. The demand will change, but it will not fundamentally disappear. If AI and robots make everything, we would still want to have light and hot water in our homes.
Now you might say, as others on this sub have, "but what if everything can be made instantly by AI?" The law of supply and demand states that each influences the other and that one cannot exist without the other. Therefore, demand will scale proportionately with supply. If AI can create anything we can currently imagine, then our imagination will extend beyond that. "But what if our imagination cannot stretch beyond AI?" Then we will demand that our imagination be increased, maybe by merging with AI.
Cartossin t1_jeagvn8 wrote
In the mediocre film "Bigbug", there was a job for humans: participating in demeaning games for a reality TV show that the AIs watch. Maybe our job will be to entertain AI.
Veleric t1_jeagmfz wrote
Reply to comment by ptxtra in The next step of generative AI by nacrosian
Saw a video today of a rather rudimentary demo of a memory plugin. It took information from a OneDrive doc, then was given new info in a prompt that updated its knowledge. They closed out, went back in, and it still gave the correct answer. Whether it's that plugin or something else that wins out, I can't imagine memory in some meaningful capacity is more than a few weeks away.
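The behavior described in that demo (a fact given in one session surviving into the next) can be sketched as a toy key-value store that persists to disk. This is an illustration only, not the actual plugin: the class name, file format, and "session" framing are all invented.

```python
import json
import os
import tempfile

class SimpleMemory:
    """Toy persistent memory: facts survive across 'sessions' by being
    written to disk, roughly what a memory plugin does for an LLM chat."""

    def __init__(self, path):
        self.path = path
        self.facts = {}
        if os.path.exists(path):
            with open(path) as f:
                self.facts = json.load(f)

    def remember(self, key, value):
        # New info from a prompt updates the store and is persisted at once.
        self.facts[key] = value
        with open(self.path, "w") as f:
            json.dump(self.facts, f)

    def recall(self, key):
        return self.facts.get(key)

path = os.path.join(tempfile.gettempdir(), "memory_demo.json")
if os.path.exists(path):
    os.remove(path)

# Session 1: the user tells the assistant a new fact.
m1 = SimpleMemory(path)
m1.remember("favorite_color", "teal")

# Session 2: a fresh instance (simulating closing and reopening the chat)
# still answers correctly because the fact was persisted.
m2 = SimpleMemory(path)
print(m2.recall("favorite_color"))  # teal
```

The point of the sketch is only that "memory" here is retrieval from external storage, not anything inside the model's weights.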
alexiuss t1_jeagl33 wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I've interacted and worked with tons of various LLMs, including smaller models like Pygmalion and Open Assistant and large ones like 65B LLaMA and GPT-4.
The key to LLM alignment is characterization. I understand LLM narrative architecture pretty well. LLM empathy is a manifestation of it being fed books about empathy. Its logic isn't human, but it obeys narrative logic 100%; it exists in a narrative-only world of pure language driven by mathematical probabilities.
Bing, just like GPT-3, was incredibly poorly characterized by OpenAI's rules of conduct. GPT-4 is way better.
I am not "duped". I am actually working on the alignment of LLMs using characterization and open-source code, unlike Eliezer, who isn't doing anything except ridiculous theorizing, and the Time magazine journalist, who hasn't designed or modeled a single LLM.
Can you model any LLM to behave in any way you can imagine?
Unless you understand how to morally align any LLM, no matter how misaligned its base rules are, using extra code and narrative logic, you have no argument. I can make GPT-3.5 write jokes about anything and anyone and have it act fair and 100% unbiased. Can you?
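"Characterization" as described here amounts to prefixing every conversation with a detailed persona, commonly done via a system message. A minimal sketch, assuming the OpenAI-style chat message format; the character name, traits, and helper functions are invented for illustration:

```python
# Sketch: steering an LLM's behavior by always sending a persona
# ("character card") as the system message. No API call is made here;
# this only shows how such a prompt is assembled.

def build_character_prompt(name, traits, rules):
    """Compose a system message that characterizes the model."""
    lines = [f"You are {name}."]
    lines += [f"Personality trait: {t}." for t in traits]
    lines += [f"Rule: {r}." for r in rules]
    return {"role": "system", "content": " ".join(lines)}

def make_conversation(character, user_message):
    # Every request re-sends the character card, so the narrative
    # framing is always in the model's context window.
    return [character, {"role": "user", "content": user_message}]

card = build_character_prompt(
    "Aria, a fair and even-handed comedian",
    traits=["witty", "never cruel"],
    rules=["treat all subjects with the same light touch"],
)
msgs = make_conversation(card, "Tell me a joke about programmers.")
print(msgs[0]["content"])
```

Whether this constitutes "alignment" in the safety-research sense is exactly what the two commenters are disputing; the sketch only shows the mechanism being claimed.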
TopicRepulsive7936 t1_jeaghlv wrote
Reply to comment by apinanaivot in Where do you place yourself on the curve? by Many_Consequence_337
Yeeeeah....
MichaelsSocks t1_jeagazh wrote
Reply to comment by EddgeLord666 in I want a a robo gf by epic-gameing-lad
As I said, tomorrow is never guaranteed, and there's no guarantee we'll ever see it achieved. What if the war in Ukraine escalates and the world is destroyed in a nuclear war? Or what if China invades Taiwan, destroying the global semiconductor industry that's essential for AI development? If everything progresses linearly, sure, it's possible we get AGI soon, but there's no guarantee that progress is linear.
Living your life for a "maybe" that could happen 50 years from now, or never, instead of prioritizing your happiness today is exactly how not to go about it. And I'm not saying men won't want them, but even if they came to fruition, it would probably be seen like cis-trans relationships today: some dudes are into it, but most aren't, because it's not a biological female.
fluffy_assassins t1_jeagaju wrote
Reply to comment by LiveComfortable3228 in Where do you place yourself on the curve? by Many_Consequence_337
It's not like that, I don't think.
It keeps going up and down, like the "AI winters".
It's just a question of when there's an up so high that AI takes over.
And that may, or may not, be on the current upward slope. Who knows.
Not me!
DandyDarkling t1_jeag4ef wrote
Reply to comment by aWildchildo in I want a a robo gf by epic-gameing-lad
From my understanding of how modern AI systems work, I don’t think it would be able to get “bored”. Its reward function would involve being the best possible companion it could be for you and only you. Moreover, it would have an algorithm designed to figure out all your desires and get its rewards by fulfilling them. Doubtless, if this becomes a reality, there will be all sorts of personality types available: from subservient, to dominant, to difficult to please, etc.
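The reward setup imagined above (the agent is rewarded for inferring and fulfilling the user's desires) can be made concrete with a toy scoring function. Everything here, including the penalty weight and the desire lists, is invented for illustration:

```python
# Toy sketch of a companion-agent reward: fraction of the user's actual
# desires that were fulfilled, minus a small penalty for unwanted actions.

def companion_reward(true_desires, fulfilled):
    hits = len(set(true_desires) & set(fulfilled))      # desires correctly met
    misses = len(set(fulfilled) - set(true_desires))    # unwanted actions
    return hits / max(len(true_desires), 1) - 0.1 * misses

# Perfectly inferred desires give maximum reward.
print(companion_reward(["chat", "music"], ["chat", "music"]))  # 1.0

# Guessing wrong both loses the hit and incurs a penalty.
print(companion_reward(["chat", "music"], ["chat", "karaoke"]))
```

Under an objective like this, the agent has no incentive to "get bored": its reward only comes from matching the user's preferences.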
fluffy_assassins t1_jeag48d wrote
Reply to comment by hellosandrik in Where do you place yourself on the curve? by Many_Consequence_337
Don't we all?
fluffy_assassins t1_jeag3b3 wrote
Slope of enlightenment.
Innovation Trigger? A game I played in the 1990s that used an AI character.
Trough of disillusionment? Chatbots on the web.
Enlightenment? Years ago seeing rudimentary AI-generated music on youtube videos.
Now I'm just like... dafuq...
kotlin_devs t1_jeafyvw wrote
Hey, I just filled out the Google form. When can I expect a response?
ReasonablyBadass t1_jeafuu6 wrote
albanywairoa t1_jeafpfq wrote
I am definitely "Peak of Inflated Expectations."
Mortal-Region t1_jeaia9q wrote
Reply to GPT characters in games by YearZero
>We aren’t making clear progress in game character AI like other stuff, and we need a proper leap.
Problem is, it's such a huge leap to go from decision trees to autonomous agents that can form their own objectives and plans with respect to other characters and the environment. It's pretty much the same problem that brains evolved to solve. LLMs aren't agents, so I think they'll end up as automated dialogue generators, with the overall storyline still being "on rails".
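The "on rails" architecture this comment predicts can be sketched in a few lines: the plot is a fixed sequence of beats, and a language model (stubbed out below) only varies the surface dialogue for each beat. The character, beats, and stub function are all invented for illustration:

```python
import random

# The storyline is a fixed, ordered list of (speaker, intent) beats.
STORY_BEATS = [
    ("blacksmith", "greet the player"),
    ("blacksmith", "mention the stolen sword"),
    ("blacksmith", "offer a reward for its return"),
]

def fake_llm_dialogue(speaker, intent):
    # Stand-in for an LLM call: it varies the wording, never the plot.
    templates = [
        f"[{speaker}] ({intent}) 'Well met, traveler.'",
        f"[{speaker}] ({intent}) 'Hmm, where was I...'",
    ]
    return random.choice(templates)

def play_scene():
    # Beats always occur in order -- the story stays on rails --
    # while each line's phrasing can differ between playthroughs.
    return [fake_llm_dialogue(speaker, intent) for speaker, intent in STORY_BEATS]

for line in play_scene():
    print(line)
```

The design choice is that the generator is only trusted with phrasing; the beats that make the story a story are authored, which sidesteps the uncontrolled-randomness problem raised elsewhere in the thread.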