Recent comments in /f/singularity

mbcoalson t1_jeff57x wrote

My two cents. The moment AI is coding itself, it stops being controlled. GPT-4 is already testing better than the majority of humans on an incredibly broad range of subjects. In some number of iterations, 2 or 50, I don't know, it will be smarter and more capable than any group of humans. Nobody's controlling that, and power structures will change accordingly. The belief that AIs will decide to be our nannies and just take care of us seems optimistic. Indifference towards us from an advanced AI seems likely. God forbid it decides we are a hindrance to its goals.

But it will be built off the datasets we feed it. Carefully curating those datasets will introduce bias, but it is also our best bet, IMO.

2

FoniksMunkee t1_jeff2h4 wrote

Short answer: no. It is still useful to learn another language. Language isn't just about direct translations. It actually rewires your brain - you think differently in another language. It gives you insight into the culture. It's also super annoying when you are stuck in a local government office and your damn phone / AR headset / whatever runs out of battery.

If you are just travelling to another country for a holiday, AI translation is probably going to be the best bet. If you are going to move to another country, or get into a relationship with someone from another country, learn their language.

3

Artanthos t1_jefezc5 wrote

Where is everyone going to live when the world has a population of 20 billion? Housing prices are already rising faster than inflation.

Where are you going to get the food? We are already draining the aquifers, rivers, and lakes.

What are you going to do about population? Supporting 20 billion people is going to consume far more energy, require increased manufacturing, and necessitate more mining. The oceans are already being depleted; this would only accelerate.

More crowded living conditions present a breeding ground for both crime and disease.

How are young people supposed to advance in careers where their seniors never move on?

0

Iffykindofguy t1_jeferli wrote

Reply to comment by Zer0D0wn83 in 1X's AI robot 'NEO' by Rhaegar003

I agree to a degree, but even if jobs aren't replaced outright, the amount of manpower needed per job will go down. If you're already a small-scale operation or you run a bunch of individual contractors, that may not impact you, but on larger-scale jobs it will displace a lot of people who already have hours, and they will then be taking up any open slots.

1

Sure_Cicada_4459 OP t1_jefepok wrote

With a sufficiently good world model, it will be aware of my level of precision of understanding given the context, and it will be arbitrarily good at inferring intent. It might actually warn me, because it is context-aware enough to say that this action will yield a net negative outcome if I were to assess the future state. That might even be the most likely scenario if its forecasting ability and intent reading are vastly superior, so we don't even have to live through the negative outcome to debug future states. You can't really have such a vastly superior world model without also using the limitations of the user's understanding of the query as a basis for your action calculation. In the end, there is a part that is unverifiable, as I mentioned above, but it is not relevant to forecasting behaviour - kind of like how you can't confirm that anyone but yourself is conscious (and the implications of yes or no are irrelevant to human behaviour).

And that is usually the limit I hit with AI safety people: you can build arbitrary deceiving abstractions on a sub-level that have no predictive influence on the upper one and are unfalsifiable until they again arbitrarily hit a failure mode in the undeterminable future. You can append to general relativity a term that would make the universe collapse into a black hole in exactly 1 trillion years - there is no way to confirm it either, and that's not how we do science, yet technically you can't validate that this is not in fact how the universe happens to work. There is an irreducible risk to this, and the level of attention paid to it is likely directly correlated to how neurotic one is. And since the stakes are infinite and the risk is non-zero, you do the math; that's enough fuel to build a lifetime of fantasies and justify any actions, really. I believe the least talked-about topic is that the criteria of trust depend just as much on the observer as on the observed.

By the way, yeah, I think so, but we will likely be ultra-precise on the first tries because of the stakes.

2

burnt_umber_ciera t1_jefej76 wrote

I guess we just disagree then. There are so many examples of intelligence not correlating with ethics that I could go on ad infinitum. Wall Street has some of the most intelligent actors, yet they have been involved in multiple scams over the years.

Enron is what I meant and I don’t agree with your characterization.

2