Recent comments in /f/singularity

MichaelsSocks t1_je8qq4r wrote

No, since an AGI would quickly become ASI regardless. A superintelligent AI would have no reason to favor a specific nation or group; it would be too smart to get entangled in human conflicts. What's more likely is that once ASI is achieved, it will use its power and intelligence to manipulate politics at a level never seen before, until it has full control over decision-making on the planet.

0

blueSGL t1_je8q3lm wrote

> 3. it will be initially unaligned

if we had:

  1. a provable mathematical solution to alignment...

  2. the ability to directly reach into the shoggoth's brain, watch it thinking, know what it's thinking, and prevent eventualities that people consider negative outputs...

...that worked 100% on existing models. I'd be a lot happier about our chances right now.

Given that the current models cannot be controlled or explained in fine-grained enough detail (the problem is being worked on, but it's still very early days), what makes you think making larger models will make them easier to analyze or control?

The current 'safety' measures amount to bashing at a near-infinite whack-a-mole board whenever the model outputs something deemed wrong.

As has been shown, OpenAI has not found all the ways to coax out negative outputs. The internet contains far more people than OpenAI has alignment researchers, and those internet denizens will be more driven to find flaws.

Basically, until the AI 'brain' can be exposed and interpreted, with safety checks added at that level, we have no way of preventing some clever sod from working out a way to break the safety protocols imposed at the surface level.

1

Galactus_Jones762 t1_je8pmph wrote

People don’t know what the word “understand” means, because to define it you have to rely on other words that are themselves ill-defined. We don’t fully know what it means to understand something because we don’t know how consciousness works. So saying, in a condescending, dismissive way, “LOOK…what you have to realize is it’s not understanding anything, you idiots; no understanding is taking place,” is, aside from the annoying condescension, a nonsense statement: “understand” is not well-defined, so saying the model doesn’t understand is no more falsifiable than saying it does. Agreed that saying it doesn’t understand is irrelevant.

0

bh9578 t1_je8pmh6 wrote

I keep thinking about what AGI would do to the stock market. While it’s difficult to say who will be the winner or loser, I think it’s fair to say the broader market will grow like crazy as economic output skyrockets. I believe it was Nick Bostrom who stated in his book Superintelligence that if AGI produced the same jump in economic output that the agricultural or industrial revolutions did, the broad market would double every two weeks. That sounds crazy, but then again markets doubling about every 8 years would have sounded insane to anyone before the industrial era. Such growth would accelerate wealth inequality: anyone who doesn’t have a decent amount of their net worth in equities gets left behind with no chance of ever catching up.
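Bostrom's comparison is easy to sanity-check with a little arithmetic. A minimal sketch (the `annual_multiple` helper is made up for illustration; the 8-year and 2-week doubling times are the figures from this comment):

```python
def annual_multiple(doubling_days: float) -> float:
    """Growth multiple over one year implied by a given doubling time (in days)."""
    return 2 ** (365.25 / doubling_days)

# Doubling every ~8 years: roughly 9% a year, familiar index-fund territory.
print(round(annual_multiple(8 * 365.25), 3))

# Doubling every two weeks: an annual multiple in the tens of millions.
print(f"{annual_multiple(14):.2e}")
```

The point of the exercise is that "doubling every two weeks" isn't just faster growth, it's a qualitatively different economy within a single year.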

That kind of world gets even scarier when AGI starts tackling aging. There have always been differences in life expectancy among economic classes, but that gap could quickly widen.

2

CodingCook t1_je8pdff wrote

Honestly, in my experience of Stack Overflow and the absolute pig-headedness of some of its users, using something like GitHub Copilot purely for code-related questions would be worth the £10 a month: instant and accurate answers, versus potentially getting my account temporarily banned because a question wasn’t considered good enough for them.

2

Andriyo t1_je8pc91 wrote

Our understanding is also statistical, based on the fact that the majority of texts we've seen use base-10 numbers. One can invent a math where 2+2=5 (and mathematicians do that all the time), so your "understanding" is just formed statistically from the most common convention for finishing the text "2+2=...". Arguably, a simple calculator has a better understanding of addition, since it has a more precise model of the addition operation.
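The contrast this comment draws can be made concrete. A toy sketch, assuming a hypothetical four-line "corpus": one model answers by corpus frequency, the other by an exact rule:

```python
from collections import Counter

# "Statistical" model of addition: complete "2+2=" with whatever answer
# was most frequent in the (hypothetical) training corpus.
corpus = ["2+2=4", "2+2=4", "2+2=4", "2+2=5"]  # mostly convention, one outlier
counts = Counter(line.split("=")[1] for line in corpus if line.startswith("2+2="))
statistical_answer = counts.most_common(1)[0][0]

# Calculator's "model": an exact rule, independent of how often anyone wrote it.
calculator_answer = str(2 + 2)

print(statistical_answer, calculator_answer)
```

Both print "4", but for different reasons: one because that completion dominates the corpus, the other because addition is defined that way.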

2

DragonForg t1_je8pbf6 wrote

New AI news. Now imagine pairing up the task API with this: https://twitter.com/yoheinakajima/status/1640934493489070080?s=46&t=18rqaK_4IAoa08HpmoakCg

It will be OP. Imagine: "GPT, please solve world hunger," and the robot model it suggests could actually do the physical work. We just need robotics hooked up to this so we can get autonomous task robots.

We can start small: say, "Robot, build a wooden box." With that API along with this: https://twitter.com/yoheinakajima/status/1640934493489070080?s=46&t=18rqaK_4IAoa08HpmoakCg you can seemingly get a robot doing the task autonomously.

15

smokingPimphat t1_je8oows wrote

I don't think that AI in and of itself will generate jobs, but it will drop the cost of certain tasks by reducing the number of humans required to do the same work today. This will allow more, smaller groups and companies to come in and participate, even opening up new things to be created for smaller audiences that are ignored today because "the numbers don't work". IE, think of every TV show that was killed too soon despite pretty good reviews because it didn't get 100M views or some other arbitrary metric.

contrived example:

If today it takes at least 1000 people (it probably takes way more) to produce all the effects for a blockbuster level of quality (think a Marvel-type movie), and AI can drop that number to 500, a Disney-level company would probably either:

A) produce 1 even bigger movie with bigger effects with the same number of people, or

B) produce 2 current-level productions for the same cost (in both money and people)

If you agree that this is a probable outcome, then it stands to reason that more, smaller projects would become viable, because the base level of quality and returns can be met by a smaller team producing higher-quality work. This would apply not only to art but to many other things.
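The contrived example above reduces to simple division. A toy sketch, using only the hypothetical headcounts from this comment:

```python
# Hypothetical numbers from the comment: ~1000 effects artists per
# blockbuster today; AI tooling cuts the requirement to ~500.
team_today = 1000
team_with_ai = 500

# Option B: keep the same headcount and cost, make more productions.
productions = team_today // team_with_ai
print(productions)  # 2 current-level productions for the price of one
```

Option A is the same arithmetic applied to effects budget instead of film count; either way, the fixed cost per unit of output halves.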

2