Recent comments in /f/singularity

throwaway12131214121 t1_je8bnv9 wrote

Yeah, but no other system would have colonized the entire planet through countless, continuous genocides and centuries of exploitation either.

AGI is probably the last trick capitalism will pull before it dies. Either that or climate change. I’m hoping it’s AGI, because that has the potential to actually be positive.

1

throwaway12131214121 t1_je8bf6d wrote

There are a lot of similarities. The profit motive of capitalism, and more recently (roughly the past 200 years) the requirement that companies grow to appease shareholders, is what has caused capitalism to spread and become the dominant global system. Now that it can't grow geographically anymore, it has been growing in other ways, and it will continue to do so until there are no resources left and the system, along with everything else on Earth, collapses. AI gives me some hope because it offers an alternative way for capitalism to collapse that doesn't ruin everything for everyone forever.

5

FoniksMunkee t1_je8ba4w wrote

You may be missing the point of the statement (or perhaps people are using it wrong?), but let me give you an example.

Midjourney doesn't understand what a hand is. It knows what one looks like, so it can draw one in most cases. But it has no understanding of its use in any real sense. That means it will quite happily draw a hand in a position that would break the bones and tendons of a real human. That's not an issue when you're just doing a drawing, but there are plenty of cases where that lack of context can be a problem. And it may not just be a matter of feeding it more data.

That is the kind of understanding that is entirely relevant and not stupid to point out. Yes, people get input data to learn from, but they also have other senses, like pain. They also learn by trying things out, i.e. through direct experience.

A problem for AI in some tasks is its lack of understanding of the implications of its choices.

4

aintnonpc t1_je8b2xq wrote

Agree. At some point, machines and humans will reach parity in quality of judgement. Beyond that point, in theory we'd be left with some human oligarchs using AI to make money. But pretty soon after, there will be machine oligarchs using fellow machines (and humans) to create value (which could be fuel for electricity, minerals needed to spawn new bots, etc.).

1

MassiveWasabi t1_je8atls wrote

This is really big: it's basically a multimodal AI assistant that can be used for image, text, audio, etc. I'm really underselling it, so at least skim the paper.

In terms of gaming, it can even control AI teammates individually so you can give different orders to each of your teammates to carry out complex strategies, which they say will let you feel like a team leader and increase the fun factor.

Most importantly:

All these cases have been implemented in practice and will be supported by the online system of TaskMatrix.AI, which will be released soon.

Sounds like this is something we will be able to play with sometime soon. Microsoft definitely wants to get these products into the hands of customers.

TL;DR: use ChatGPT

49

DreamWatcher_ t1_je8aqtb wrote

I'll take the words of engineers and the people who work with these models over the words of some pseud who appeals to wannabe intellectuals with his use of philosophical buzzwords.

The reason you can't really argue against his points is that he presents scenarios that haven't been proven. It kind of reminds me of the alarmism about how the research over at CERN could end the universe. A lot of things can happen; I remember a couple of years back there was a lot of talk about how AI was going to replace blue-collar jobs first, and now it's the opposite.

The future is unpredictable and there's no point in trying to prevent scenarios that haven't happened.

If you want a good expert on the more non-technical side, you should start with David Deutsch, who actually has good credentials.

5

SnooWalruses8636 t1_je8ap4s wrote

Here's Ilya Sutskever, during a conversation with Jensen Huang, on the claim that LLMs are just simple statistical correlation.

>The way to think about it is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is that we are learning a world model.
>
>It may look on the surface that we are just learning statistical correlations in text, but it turns out that to just learn the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text.
>
>This text is actually a projection of the world. There is a world out there, and it has a projection on this text, and so what the neural network is learning is more and more aspects of the world, of people, of the human condition, their hopes and dreams, their interactions, and the situations that we are in, and the neural network learns a compressed, abstract, usable representation of that. This is what's being learned from accurately predicting the next word.
>
>And furthermore, the more accurate you are in predicting the next word, the higher fidelity, the more resolution you get in this process.

The chat is available to watch officially on the Nvidia site if you're registered for GTC. If not, there's an unofficial lower-quality YouTube upload as well.

The reductive framing is still technically correct, but it leaves the understanding of emergent properties unexplored. It's "mitochondria are a collection of atoms" vs. "mitochondria are the powerhouse of the cell."
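For the curious: the "just predicting the next word" objective really is that simple at the code level, and everything Sutskever describes has to emerge from minimizing it. A minimal, illustrative PyTorch sketch (toy stand-in model and made-up sizes, not anything from the actual talk):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 1000, 64          # made-up toy sizes
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),  # token ids -> vectors
    nn.Linear(d_model, vocab_size),     # stand-in for a real transformer stack
)

tokens = torch.randint(0, vocab_size, (8, 33))   # toy batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each position predicts the next token

logits = model(inputs)                                  # (batch, seq, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size),  # the "statistical correlation" objective
                       targets.reshape(-1))
loss.backward()   # the entire training signal is lowering this number
```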

5

Mrkvitko t1_je8ajsn wrote

Most people mention air attacks on the datacenters as the most controversial point, and miss the paragraph just below:

> Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that's what it takes to reduce the risk of large AI training runs.

That is downright insane. The ASI might kill billions, assuming:

  1. it is possible for us to create it
  2. we will actually create it
  3. it will be initially unaligned
  4. it will want to kill us all (either by choice or by accident)
  5. it will be able to gain resources to do so
  6. we won't be able to stop it

Failure at any of these steps means nobody dies. And we don't know the probability of each step succeeding or failing.

We do, however, know that a nuclear exchange would certainly kill billions. We know the weapon stockpiles and yields, and we know their effects on human bodies.

If you argue that it's better to certainly kill billions and (likely permanently) destroy human civilization than to accept the hypothetical risk that an ASI might do the same, you're a deranged lunatic at best and an evil psychopath at worst.

19

MichaelsSocks t1_je89ji1 wrote

> That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.

Even without AI, we probably face a greater than 90% chance of extinction within the next 100 years. Climate change is an existential threat to humanity; add in the wildcard of nuclear war, and I see no reason to be optimistic about a future without AI.

> I don't see why narrow AI couldn't be trained to solve specific issues.

Because humans are leading this planet to destruction for profit, and corporations wield too much power for governments to actually do anything about it. Narrow AI in the current state of the world would just be used as a tool for more and more destruction. I'm of the mindset that we need to be governed by a higher intelligence in order to address the threats facing Earth.

15

Mindrust t1_je89g09 wrote

> but too stupid to understand the intent and rationale behind its creation

This is a common mistake people make when talking about AI alignment, not understanding the difference between intelligence and goals. It's the is-vs-ought problem.

Intelligence is good at answering "is" questions, but goals are about "ought" questions. It's not that the AI is stupid or doesn't understand, it just doesn't care because your goal wasn't specified well enough.

Intelligence and stupidity: the orthogonality thesis

4

zeychelles t1_je88jjh wrote

Kinda? Our government is discussing it, primarily as a way to help citizens sort out their taxes more easily, but also in terms of broader policy around it.

https://www.actuia.com/english/morocco-towards-the-implementation-of-a-national-policy-dedicated-to-ai-to-accelerate-digital-transformation/

They’re also trying to follow UNESCO’s suggestions about the implementation of AI.

https://www.moroccoworldnews.com/2022/03/347908/morocco-unesco-pledge-to-strengthen-artificial-intelligence-ethics

It’s important to note that we’re an African country, so it’s interesting to see that they’re talking about it at all.

1

Spire_Citron t1_je88dfm wrote

Yup. Everyone seems to think the rich will just hoard all the wealth and be happy while the rest of us die in the streets, but everyone benefits from living in a functional and stable society. The rich may be reluctant to contribute their fair share to achieve that, but that doesn't mean it makes no difference to them.

5