turnip_burrito t1_j9xpekn wrote
Reply to comment by sweatierorc in Does the concept of consent apply to chatbot like chatgpt ? by sweatierorc
Yes, that's right.
turnip_burrito t1_j9xnc5o wrote
Reply to comment by sweatierorc in Does the concept of consent apply to chatbot like chatgpt ? by sweatierorc
No, you can design it to act very emotional.
There's a big difference.
For example, an actor can act sad, but in their heart they're happy or apathetic.
For an AI, it could just plain feel nothing.
turnip_burrito t1_j9xknyh wrote
Depends on whether they feel anything or not.
turnip_burrito t1_j9x0trz wrote
Reply to comment by blueSGL in Open AI officially talking about the coming AGI and superintelligence. by alfredo70000
When AI builds better AI:
"It's not AI, it's just a representative state simulation transfo-network that predicts the next set of letters recursively using combined multi-modal training data".
turnip_burrito t1_j9wx75x wrote
Reply to comment by bist12 in People lack imagination and it’s really bothering me by thecoffeejesus
Yeah I know. At least both of us know better and stand above the crowd with our obvious credentials.
It's hard being so knowledgeable and wise on a daily basis, especially surrounded by these plebeians. 🧙♂️
turnip_burrito t1_j9wwo5t wrote
Exponential growth of AI capability isn't a law of nature. It only looks obvious in hindsight, and it depends on a lot of little things and a conducive R&D environment. We're not guaranteed to follow any exponentials.
Some people on this sub are going to be disappointed when we don't have AGI in 5 or 10 years. Or maybe they'll have forgotten that they predicted AGI by 2030 by the time 2030 actually rolls around.
turnip_burrito t1_j9wwcvy wrote
Reply to comment by helpskinissues in People lack imagination and it’s really bothering me by thecoffeejesus
Specifically Andorra, Vatican City, Liechtenstein, and like a couple others which are all tiny.
turnip_burrito t1_j9vv9m1 wrote
Reply to comment by [deleted] in Fading qualia thought experiment and what it implies by [deleted]
It may even be that we are also different second to second. 🤔
turnip_burrito t1_j9vuzm2 wrote
Reply to comment by LambdaAU in Fading qualia thought experiment and what it implies by [deleted]
Only after AGI.
turnip_burrito t1_j9vcj3u wrote
Reply to comment by 7734128 in What are the big flaws with LLMs right now? by fangfried
That's good. My example was from back in December, so maybe they changed it.
turnip_burrito t1_j9v5wje wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Read the rest of the discussion. "Art" has several different definitions, and we were using two of those different definitions. This led to disagreement.
>English better, or stop bickering with people when you can't even write coherently.
Was that necessary? I see now that you're either a troll, or if not, a strange person. My written English is fine, and I'm sorry if you have trouble reading it.
turnip_burrito t1_j9v5jro wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
You're clearly missing the point.
turnip_burrito t1_j9v5eum wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
You have a particular definition of art that gives you this view. There are other definitions of art that will provide a different view.
turnip_burrito t1_j9v56hc wrote
Reply to comment by Terminator857 in What do you expect the most out of AGI? by Envoy34
>I can explain everything with standing EM waves
Bullshit.
Explain the existence of electrically neutral particles like neutrinos and why they're able to interact at all with other particles.
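To make that concrete (a rough textbook-level sketch, not something from your comment): the electromagnetic coupling is proportional to electric charge, so it vanishes identically for a neutral particle, while neutrinos still couple to the W and Z bosons of the weak force.

```latex
% Schematic, textbook-level sketch (normalization factors omitted).
% QED: the photon couples to electric charge q, so the coupling
% vanishes for a neutral particle such as the neutrino.
\[
  \mathcal{L}_{\mathrm{QED}} = -\,q\,\bar{\psi}\gamma^{\mu}\psi\,A_{\mu}
  \;\xrightarrow{\;q\,=\,0\;}\; 0
\]
% Weak interaction: neutrinos still couple to the W boson via the
% charged current, which is why they interact at all.
\[
  \mathcal{L}_{\mathrm{CC}} \;\sim\; \frac{g}{\sqrt{2}}\,
  \bar{\nu}_{L}\,\gamma^{\mu}\,e_{L}\,W^{+}_{\mu} \;+\; \mathrm{h.c.}
\]
```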
> Ugly = wrong, just look at history of bad / wrong theories.
No, ugly = ugly and wrong = wrong. Physics has no reason to be elegant to humans. The Standard Model is incomplete (it doesn't explain dark matter/energy, quantum gravity, or the antimatter imbalance) and inelegant, but its predictions have held up for the rest of particle physics so far. In the sense of incompleteness, it could be considered "wrong". However, it is effective at predicting everything we are able to test here on Earth, so in that sense it is "right".
In fact, scientists at the LHC have been trying very hard, to no avail, to find deviations from the Standard Model.
> Hasn't produced anything useful, another hallmark of something very wrong.
Bad predictions and inconsistency with reality are the hallmarks of something wrong. Subatomic physics isn't really that useful (we have no real use for gluons, neutrinos, etc.), but we still test theories of it.
turnip_burrito t1_j9v2vnp wrote
Reply to comment by MysteryInc152 in What are the big flaws with LLMs right now? by fangfried
That's good! I wonder if it consistently answers it, and if so what the difference between ChatGPT and Bing Chat is that accounts for this.
turnip_burrito t1_j9t9qi0 wrote
Reply to comment by LiveComfortable3228 in What do you expect the most out of AGI? by Envoy34
"This world is imperfect.. If only I could wipe away the impurities, and make it as beautiful as me."
-OP
turnip_burrito t1_j9t9fo3 wrote
Reply to comment by 3xplo in What do you expect the most out of AGI? by Envoy34
Yes please.
If I'm going to be always at the mercy of a "higher power", then I want it to be consistently fair, considerate, and actually doing its damn job, as opposed to whatever humans in authority do.
turnip_burrito t1_j9t90d1 wrote
Reply to comment by Terminator857 in What do you expect the most out of AGI? by Envoy34
>. An end to absurdities like climate change disaster and standard model particle physics (there are only standing EM waves).
You're joking I hope lol
The Standard Model is ugly, but at least it works. You can't explain much of anything using standing EM waves.
turnip_burrito t1_j9t8awh wrote
Reply to comment by wfF1K9YoHB in What do you expect the most out of AGI? by Envoy34
Well, it sure beats the current hell of people slaving away at a job and losing their health just to get barely enough money to support their family. Shortages of medical care, single parents raising their children alone, lack of sleep...
You think a world without required work would be bad? Well, a world with required work is worse. Way worse.
I'd take well-fed, work-free "purposelessness" over that any day of the week.
turnip_burrito t1_j9srav1 wrote
Reply to comment by [deleted] in What are the big flaws with LLMs right now? by fangfried
Bad bot.
turnip_burrito t1_j9sr922 wrote
Reply to comment by [deleted] in What are the big flaws with LLMs right now? by fangfried
Bad bot.
turnip_burrito t1_j9sr58c wrote
Reply to comment by nul9090 in What are the big flaws with LLMs right now? by fangfried
That's a really good point. The Hungry Hungry Hippos and RWVST (still can't remember the acronym :'( ) papers are two good examples of the things you mentioned. Transformers now give the impression of being "cumbersome".
turnip_burrito t1_j9sr0vp wrote
Reply to comment by fangfried in What are the big flaws with LLMs right now? by fangfried
I'll give you a poor example, off the top of my head, since I'm too lazy to look up concrete examples. I've asked it a version of this question (not exact but you'll get the idea):
"Say that hypothetically, we have this situation. There is a bus driven by a bus driver. The bus driver's name is Michael. The bus driver is a dog. What is the name of the dog?"
This is just a simple application of transitivity, which people intuitively understand:
Michael <-> Bus driver <-> Dog
So when I ask ChatGPT what the name of the dog is, ChatGPT should say "Michael".
Instead, ChatGPT answers with "The bus driver cannot be a dog. The name of the bus driver is given, but not the name of the dog. So there's not enough information to tell the dog's name."
It just gets hung up on certain things and doesn't acknowledge the clear path from A to B to C.
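If it helps, here's the chain spelled out as a toy Python sketch (purely illustrative; the names and keys are mine, not anything from the model):

```python
# Toy illustration of the transitive chain a reader applies here.
# "Michael" <-> "the bus driver" <-> "the dog" all refer to one entity,
# so asking for the dog's name should just return the driver's name.

facts = {
    "name of the bus driver": "Michael",   # Michael <-> bus driver
    "the bus driver is": "a dog",          # bus driver <-> dog
}

# Transitivity: the dog IS the bus driver, so the dog's name is
# whatever the bus driver's name is.
dog_name = facts["name of the bus driver"]
print(dog_name)  # Michael
```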
turnip_burrito t1_j9spyp3 wrote
Reply to What are the big flaws with LLMs right now? by fangfried
Like you said: truthfulness/hallucination
But also: training costs (hardware, time, energy, data)
Inability to update in real time
Flawed reasoning ability
Costs to run
turnip_burrito t1_j9xsnps wrote
Reply to comment by helpskinissues in People lack imagination and it’s really bothering me by thecoffeejesus
There are a lot of people. So what?
Those cities are, for the most part, better marked and mapped than almost anywhere else, and their weather is better too (clear skies most of the time, almost no snow to speak of).