Recent comments in /f/singularity

HarbingerDe t1_jef93gg wrote

I think people need to temper their expectations a bit. Things are definitely ramping up, but there's no telling when we'll reach broadly usable AGI.

For one, transistors have pretty much stopped getting smaller. We're butting up against fundamental physical limits there.

So, without some as-yet-unknown computational paradigm shift, it's possible that true AGI will always need to run on building-sized computers consuming megawatts or gigawatts of power.

People could presumably still access it remotely via the cloud, but that would severely limit the scale and impact of AGI in everyday life.

2

Prevailing_Power t1_jef8zfr wrote

Makes sense to me. I've known for a long time that language is power. You can bind concepts to words so that just thinking the word lets you experience what that word means. If you're an expert in some subject, you can only really think about that subject by knowing all the jargon, because it lets you compare complex thoughts side by side by merely invoking two words. Eventually you can create a more complex experience that can be bound into a new word. That word will hold the power of those two other words, and so on.

Your reality is literally shaped by the words you know. Your perspective is only as good as your words.

2

StarCaptain90 OP t1_jef8r8l wrote

That's the problem. Why are we so focused on wages? Because they allow people to spend more time with their families instead of working three jobs. They allow people to pay for their living.

But an AI-based economy would remove the constraints that prevent us from living peacefully. So if you're truly on the side of helping humanity resolve its issues, we need AI.

1

hydraofwar t1_jef89y0 wrote

You're right, but I personally believe that all our stored scientific information still has a lot to say, things we humans haven't seen yet, and an AI is what could decipher this, and very quickly.

What could bypass experimental validation is quantum computing used to simulate systems and environments.

1

FeepingCreature t1_jef872m wrote

The problem with "simulating understanding" is what happens when you leave the verified-safe domain. You have no way to confirm you're actually getting a sufficiently close simulacrum, especially if the simulation dynamically tracks your target. The simulation may even be better at it than the real thing, because you're only imperfectly aware of your own meaning, yet you're rating it partly on your understanding of yourself.

> To your last point, yes, you'd have to find a set of statements that exhaustively filters out undesirable outcomes, but the only thing you have to get right on the first try is "don't kill, incapacitate, or brainwash everyone" + "be transparent about your actions and their reasons, starting the logic chain from our query."

Seems to me that if you can rely on it to interpret your words correctly, you can just say "Be good, not bad" and skip all this. "Brainwash" and "transparent" aren't fundamentally easier to semantically interpret than "good".

2

Current_Side_4024 t1_jef86l4 wrote

The God gamble. When you think about it, we're kinda going through the same thing God goes through in the Old Testament. He regrets His creation, regrets creating man because they bring shame upon Him. Then Jesus comes along and God finds a way to love His creation again. God makes a sacrifice, and His relationship with His kids is good again. We need that Jesus figure, that sacrifice. What does man have to sacrifice for us to stop fearing/hating/regretting AI? Probably our pride.

3

AvgAIbot t1_jef7yxq wrote

Sounds like a good idea tbh. I've considered getting into the trades, but I've never been a hard worker. My decent remote tech job pays pretty well and I have a lot of free time. But I know it won't last beyond the next 5-10 years.

1