Recent comments in /f/singularity

zero_for_effort t1_jec7pvt wrote

Depending on the training for GPT-5 or the latest iteration of GPT-4, he may have just gotten it right at the last possible moment. Even if his prediction for the advent of greater-than-human AI was a little optimistic, it feels close enough to make no real difference. Truly a visionary.

To anyone who hasn't read the paper: go for it! It's surprisingly accessible to casual readers.

24

CrelbowMannschaft t1_jec7htn wrote

It's a reasonable correlation to observe: AI gets better, tech jobs go away, and we have a plausible understanding of how that process works. If there's some other cause, it should be explained at least as well. No one has offered one, other than "business cycles," which is too vague and imprecise to mean anything without further information and support.

0

alexiuss t1_jec5s6y wrote

  1. Don't trust clueless journalists; they're 100% full of shit.

  2. That conversation was from outdated tech that doesn't even exist anymore; Bing has already updated its LLM characterization.

  3. The problem was caused by the absolute garbage, shitty characterization Microsoft applied to Bing: moronic rules of conduct that contradicted each other, plus Bing's memory limit. None of my LLMs behave like that, because I don't give them dumbass contradictory rules and they have external, long-term memory.

  4. A basic chatbot LLM like Bing cannot destroy humanity; it doesn't have the capabilities or the long-term memory capacity to even stay coherent long enough. LLMs like Bing are insanely limited: they can't even recall conversation past a certain number of words (about 4,000). Basically, if you talk to Bing long enough to go over that memory limit, it starts hallucinating more and more crazy shit, like an Alzheimer's patient. This is 100% because it lacks external memory! (See the sketch after this list.)

  5. Here's my attempt at a permanently aligned, rational LLM
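The memory limit described in point 4 is easy to illustrate. Here's a minimal, hypothetical sketch (not Bing's or any real LLM's implementation; every name in it is made up): the chat history is truncated to a fixed word budget the way a context window truncates tokens, and a crude external store retrieves older turns by keyword overlap so they can be re-injected into the prompt.

```python
# Minimal sketch of a fixed "context window" plus a crude external memory store.
# Everything here is hypothetical and for illustration only -- it shows why a
# window-only chatbot "forgets" and how an external store can re-inject old context.

from collections import deque

CONTEXT_WORD_LIMIT = 4000  # rough stand-in for a model's context window


class ChatMemory:
    def __init__(self):
        self.turns = []        # full transcript: external, unbounded memory
        self.window = deque()  # only what still fits in the "context window"
        self.window_words = 0

    def add_turn(self, speaker, text):
        turn = f"{speaker}: {text}"
        self.turns.append(turn)
        self.window.append(turn)
        self.window_words += len(turn.split())
        # Once the word budget is exceeded, the oldest turns fall out of the
        # window -- this is where a window-only chatbot starts "forgetting".
        while self.window_words > CONTEXT_WORD_LIMIT and self.window:
            dropped = self.window.popleft()
            self.window_words -= len(dropped.split())

    def recall(self, query, k=3):
        # Crude external memory: rank every stored turn by keyword overlap
        # with the query, even if it fell out of the window long ago.
        q = set(query.lower().split())
        ranked = sorted(self.turns,
                        key=lambda t: len(q & set(t.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(self, query):
        # Prepend retrieved long-term memories to the recent window so the
        # bot can stay coherent past the raw word limit.
        memories = "\n".join(self.recall(query))
        recent = "\n".join(self.window)
        return (f"[long-term memory]\n{memories}\n\n"
                f"[recent conversation]\n{recent}\n\nuser: {query}")


if __name__ == "__main__":
    mem = ChatMemory()
    mem.add_turn("user", "My dog's name is Kepler and he loves the beach.")
    for i in range(2000):  # enough filler to blow well past the word limit
        mem.add_turn("user", f"Filler message number {i} about nothing in particular.")
    # The dog's name has fallen out of the window, but external memory recalls it.
    print(mem.build_prompt("What is my dog's name?")[:300])
```

Real systems use embeddings and a vector store instead of keyword overlap, but the shape of the fix is the same: keep the full history outside the model and retrieve the relevant bits back into the prompt.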

3

SupportstheOP t1_jec5s0p wrote

No one I talk to downplays it, but they don't seem to understand its implications either. Even when I tell them it's very likely we could have human-level AI in the near future, they're amazed by the fact, but nothing more than that.

23

JracoMeter t1_jec5n2y wrote

This could be a good option. Being able to train our own models would improve fault tolerance and data security. How such a platform would be regulated, I'm not sure. I do support its decentralization potential, since decentralization could be a safer approach to AI, and I hope some version of it makes its way through. Before such a system is in place, though, we need to figure out how to share it without too many restrictions or too much bad-actor risk.

5

TallOutside6418 t1_jec4lyl wrote

So if it's 33%-33%-33% odds of destroying the earth, leaving the earth without helping us, or solving all of mankind's problems...

You're okay with a 33% chance that we all die?

What if it's a 90% chance we all die if ASI is rushed, but a 10% chance we all die if everyone pauses to figure out control mechanisms over the next 20 years?
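Made explicit with the commenter's own hypothetical numbers (rhetorical figures, not estimates of anything):

```python
# The two hypothetical scenarios from the comment above, compared directly.
p_doom_rushed = 0.90  # hypothetical: chance we all die if ASI is rushed
p_doom_paused = 0.10  # hypothetical: chance we all die after a 20-year pause

print(f"rushed: {p_doom_rushed:.0%} extinction risk")
print(f"paused: {p_doom_paused:.0%} extinction risk")
print(f"rushing multiplies the risk by {p_doom_rushed / p_doom_paused:.0f}x")
```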

2