Recent comments in /f/singularity

ozten t1_je762va wrote

Unlike many futuristic topics... fusion has been demonstrated to work, just inefficiently. You put in a dollar's worth of energy to create 5 cents' worth of energy (not an actual ratio). If we can make existing fusion tech 100x more efficient, then we could largely solve the energy crisis. So there is an engineering path forward that is compatible with real-world physics.
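
A quick back-of-envelope version of that ratio, using the comment's illustrative numbers rather than real reactor figures (in fusion terms this is the gain factor Q, where Q > 1 means net energy out):

```python
# Sketch of the energy-gain arithmetic above; the 5% figure is the comment's
# placeholder, not a measured reactor ratio.
energy_in = 1.00    # arbitrary units of energy put into the reaction
energy_out = 0.05   # energy produced (the comment's ~5-cents-per-dollar figure)

q = energy_out / energy_in          # gain factor Q; Q > 1 means net energy out
print(f"Illustrative Q today: {q:.2f}")

improvement = 100                   # the hypothetical 100x efficiency gain
print(f"After a 100x improvement: Q = {q * improvement:.0f}")  # Q = 5, net-positive
```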

3

nobodyisonething OP t1_je751vl wrote

I'm expecting a predictable scenario like this:

  1. The growth of freely available information on the internet slows down as proprietary AIs become the go-to for answers.
  2. Proprietary AIs start actively trying to hide information behind paywalls to gain an advantage over their rivals.
  3. The golden age of all-you-can-eat information is lost, and nobody realizes it's happening.
12

BigMemeKing t1_je74m5d wrote

Not really. Why does 2+2=4? The first question I would ask is: what are we trying to solve for? I have 2 pennies, I get 2 more pennies, now I have 4 pennies. Now, we could add variables to this. One of the pennies has a big hole in it, making it invalid currency. So while yes, you do technically have 4 pennies, in our current dimension you only have 3, since one is, in all form and function, garbage.

Now, let's say one of those pennies has special attributes that could make it worth more. While you may now have 4 pennies, one of these pennies is worth 25 pennies. So, while technically you only have four pennies, the net result in our current dimension is that you now have a total of 28 pennies. 2+2 only equals 4 in a 1-dimensional space. The more dimensions you add to an equation, the more complicated the formula/format becomes.
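
Read literally, the comment is separating the count of coins from their usable value. A toy sketch of the two scenarios, using only the comment's own numbers:

```python
# Toy model of the two penny scenarios above: the raw count is always 2 + 2 = 4,
# but the usable total depends on the attributes attached to each coin.

def total_value(pennies):
    """Sum the worth of the pennies that still count as currency."""
    return sum(value for value in pennies if value > 0)

# Scenario 1: one penny has a hole in it and is worthless as currency.
scenario_1 = [1, 1, 1, 0]
print(len(scenario_1), total_value(scenario_1))   # 4 coins, worth 3

# Scenario 2: one penny has special attributes and is worth 25.
scenario_2 = [1, 1, 1, 25]
print(len(scenario_2), total_value(scenario_2))   # 4 coins, worth 28
```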

−1

drekmonger t1_je74aq3 wrote

Also noteworthy: we "train" and "infer" with a fraction of the energy cost of running an LLM, and that's while also powering the necessary life support and locomotive systems. With transformer models, we're obviously brute-forcing something that evolutionary biology has developed more economical solutions for.

There will come a day when GPT 5.0 or 6.0 can run on a banana peel.
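
As a rough comparison of the power budgets involved (the ~20 W brain figure is a commonly cited estimate; the accelerator wattage and server size below are assumed round numbers, not measurements of any particular LLM deployment):

```python
# Back-of-envelope power comparison; all hardware numbers are assumptions.
brain_watts = 20        # commonly cited estimate for the human brain
gpu_watts = 700         # assumed draw of one modern datacenter accelerator
gpus_per_server = 8     # assumed size of a single inference server

server_watts = gpu_watts * gpus_per_server
print(f"Assumed inference server: {server_watts} W, "
      f"~{server_watts / brain_watts:.0f}x a brain's power budget")
```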

1

xott t1_je747l7 wrote

New Zealand has had no big conversations about AI since the introduction of ChatGPT.

Previously it looked like we were moving well, with a Digital Strategy and an Algorithm Charter.

They weren't great initiatives, but they were well intentioned, aimed at XAI/accountability and at preventing harm or bias against our citizens.

The biggest citizen group is called NZ AI forum. I don't like them very much as they come across as real pearl-clutchers, but at least they're promoting conversation.

There's been such a great advance in the last 6 months that the AI landscape has entirely changed. Like most governments, ours looks like it will end up being reactive instead of proactive.

6

drekmonger t1_je73xjv wrote

While the statement that "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" is a possibility, it is not a required qualification of AGI.

AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.

Recursive self-improvement, also known as the concept of an intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.

--GPT4

11