Recent comments in /f/philosophy

Otto_von_Boismarck t1_j9yedln wrote

Why do you people keep talking about language? That has nothing to do with the topic. Things are simply constructs of the human mind. A chair is only so much a chair as we all agree it is one. The universe, as far as we know, is made up of a bunch of intrinsic items (fields or what have you) that create new emergent properties through a bunch of interactions we call extrinsic properties. A collection of those extrinsic properties is then constructed together in our brains to give the illusion that we are looking at a singular "thing", when such a thing doesn't exist in reality. It's a mind construct, a useful one at that, but a construct nonetheless.

I also don't deny reality exists. Quite the opposite. I'm a physicalist after all.

−2

Jay_Louis t1_j9ye76k wrote

Or the foundational Jewish belief about God. Jews write G-d as a reminder that even the name isn't comprehensible, that language itself is incomplete. Early Derridean theory, 3000 years before Derrida.

1

Otto_von_Boismarck t1_j9ydgib wrote

You're missing the point. There simply isn't an actual "thing" called a bottle; it's just a category for a collection of extrinsic properties. The bottle only exists insofar as it exists in the construct of your mind.

You keep trying to talk about this "thing" called a bottle, but it doesn't exist. I could use a flamethrower on the bottle and it would change into something entirely different, yet it would still broadly consist of the same subatomic particles. Just with different extrinsic properties, and therefore looking like something different to us.

Yes, of course our minds don't actually build an accurate, realistic model of the universe. That's why we see "things" and not just clumps of fields interacting with each other. We can't actually see subatomic particles, after all. But plenty of evidence suggests they exist.

−1

Rayqson t1_j9yd09o wrote

For real. People don't understand how alarmingly quickly AI is going to grow, and quote me on this because it IS going to happen: people are going to lose their jobs to AI robots, because the robots can learn much faster and can be kept running 24/7. CEOs WILL choose robots over humans, all in the name of profit. It's already happening as we speak. For example, an estimated 20 million jobs are projected to be lost to automation by 2030.

Nvidia said that in the next 10 years, AI is going to be a million times more advanced than it is now, and with supercomputers, this is going to be even worse.

AI needs regulation, and human livelihood is in serious danger. And I don't mean in the way of rebelling AI robots, no. This is going to be a slow, structural decline of the society we've built so far. First, it's the manual labor folks. Then, once we can teach AI to manage data entry and office work, it's the white collar folks.

And they're not gonna compensate these folks. They don't care. During earlier waves of automation nobody got anything either. You just get fired and that's it.

You can "nah, AI isn't growing that quick, besides it's not usable right now, it's so inefficient" me all you want, but go tell that to computers. Tell that to the internet. Tell that to mobile phones. They ALL got the same comments in the beginning, and look at where we are now.

Even Stephen Hawking warned us about it before he died. We need to regulate this, because it is structurally endangering humanity to the point where only the elite who own companies will be left. (Though I won't be surprised if this causes a serious civil war against the rich once they've claimed all the wealth for themselves. Think full-on raids to kill people like Elon or Bezos.)

Stephen Hawking also specifically said it will be either the best thing or the worst thing that ever happens to us. But if we keep valuing money over people like we do now, it IS going to be the worst thing.

35

bbreaddit t1_j9yc6x3 wrote

It's kind of scary to think what one entity could do with AI all on its own. It shouldn't be hard to make the data used to create an AI accessible, so that anyone could recreate it. Then we wouldn't need to worry about what it would say and why, or whether it was trained on data it shouldn't have been (hmm, looking at current models).

1

siliconecookies t1_j9ya0kj wrote

Your choice of values should be based entirely on your own opinion: the ones that make you feel warm and fuzzy inside, not the ones society expects you to choose. It's not about what is correct or incorrect, but about what you, personally, find important.

0

GepardenK t1_j9y9tth wrote

> Did we just pass the Turing Test?

Well, no. Even if everyone thought this article was written by a human that would not pass the Turing Test.

The Turing test requires two participants to engage in back-and-forth conversation, one human and the other an AI, while a third party watches the conversation in real time and tries, unsuccessfully, to distinguish which is which.

It is a significantly higher standard than simply confusing some algorithmic text for having been written by a person.

5

AllanfromWales1 t1_j9y8p35 wrote

Sorry, by 'apocalypse' I meant the loss of the workplace as the social norm, not some sci-fi nonsense. In my judgement, the timescale for this exceeds my life expectancy quite significantly. That is partly because I'm in my late sixties with various health conditions, but even setting that aside, I think the doomsday predictions are premature.

2