Recent comments in /f/philosophy

smaxxim t1_ja7wee1 wrote

I guess all the confusion comes from the word "know". We actually don't know what it's like to be us; we feel what it's like to be us. We feel our feelings, we don't "know" them. Feeling something is not the same as knowing something. So Nagel should have said: "you can't feel what a bat is feeling". With that, I guess everyone can agree.

1

James_James_85 t1_ja7w1gi wrote

A thought experiment on consciousness:

Imagine you had an infinite piece of paper and an infinite pencil. You scan a human's brain at the cellular level and draw a 2D map of its entire neural network, with all its gory details, on the piece of paper, representing electrical signals e.g. by small circles inside the axons.

Then, using your expert knowledge of chemistry and the dynamics of cell movement, you repeat an endless cycle of going through the entire drawing, erasing the current electrical signals and redrawing them slightly further ahead in the axons, and erasing the free dendrites and redrawing them in slightly altered positions (and any other aspect of brain function I may have missed). Perhaps you also feed it some visual/auditory signals through the optic/auditory nerves, and other made-up sensory inputs. In a way, you'd be doing a full "manual simulation" of the brain on that piece of paper. Overlook the fact that this would be impossibly tedious; imagine you had infinite time on your hands and are precise enough not to make any mistakes.
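To make the "cycle" concrete, here's a toy Python sketch of what one pass over the drawing would amount to if you did it with code instead of a pencil. Everything in it (the neuron model, the numbers, the names) is made up for illustration and is absurdly simpler than a real brain:

```python
import random

# A toy, discrete-time version of the "manual simulation": a handful of
# neurons, each with a membrane potential, connected by weighted "axons"
# that carry signals with a one-tick delay. Every call to step() plays the
# role of one full pass over the drawing: erase the old signals, redraw
# them further along, update each neuron, repeat.

NUM_NEURONS = 5
THRESHOLD = 1.0

# made-up connection strengths between neurons (the drawn axons)
weights = [[random.uniform(-0.5, 0.5) for _ in range(NUM_NEURONS)]
           for _ in range(NUM_NEURONS)]

potentials = [0.0] * NUM_NEURONS   # "charge" currently drawn inside each neuron
in_flight = [0.0] * NUM_NEURONS    # signals currently travelling along axons


def step(sensory_input):
    """One pass over the whole drawing."""
    global in_flight
    arriving, in_flight = in_flight, [0.0] * NUM_NEURONS
    for i in range(NUM_NEURONS):
        # leak a little charge, add what just arrived plus any sensory input
        potentials[i] = 0.9 * potentials[i] + arriving[i] + sensory_input[i]
        if potentials[i] >= THRESHOLD:       # the neuron "fires"
            potentials[i] = 0.0
            for j in range(NUM_NEURONS):     # send a signal down each axon
                in_flight[j] += weights[i][j]


# feed it some made-up sensory input and run a few ticks
for t in range(10):
    step([random.uniform(0.0, 0.3) for _ in range(NUM_NEURONS)])
    print(t, [round(p, 2) for p in potentials])
```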

Now here's the question: would that "brain on a paper" have its own consciousness, provided the simulation is accurate enough?

I'd suspect yes! It would be experiencing its own "fake reality" of sorts, and as soon as you stop drawing, it's like it blacks out. Draw again and its experience resumes without it noticing anything happened. It would also experience time as running at normal speed, provided the sensory input you're feeding it is slowed down to the appropriate speed.

What are your thoughts?

2

BernardJOrtcutt t1_ja7rswn wrote

Your comment was removed for violating the following rule:

>Argue your Position

>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

Otto_von_Boismarck t1_ja7o5qa wrote

Well no, the mind is obviously also not real. It is also extrinsic, and our own mind interprets itself as a "thing". But regardless, that wasn't the original subject of discussion. And yes, you're right that it serves no utility; I don't deny that. The reason the human brain thinks in "things" is exactly because doing so serves a useful evolutionary purpose.

In general, it is a hard subject to talk about because our mind constantly wants to categorize EVERYTHING.

Edit: I have a better way to explain it. Essentially, our mind's constructs are an emergent property of our brain, while the brain itself emerged from biochemistry, biochemistry from chemistry, and so on. They are all extrinsic, and somewhere down there there are "things" (again, I use that terminology because our minds are simply limited in that way) that are in fact intrinsic. Electrons seem to be one example, meaning there is nothing that electrons themselves emerge from. And again, this might change based on developments in science.

1

Wolkrast t1_ja7o3mz wrote

>The reason why AI can’t love anything or yearn to be free is because it has no body. It has no source of feeling states or emotions, and these somatic feelings are essential for animal consciousness, decision-making, understanding, and creativity. Without feelings of pleasure and pain via the body, we don’t have any preferences.

The article makes a number of very strong claims here. At the very least we know that AI is capable of decision-making; in fact, that is the only thing it is designed to do.

The heart of the argument seems to be less about a body - after all, a robot with an onboard AI would fulfill that definition, which is clearly not what the author is talking about - and more about the difference between decisions motivated by logic and decisions motivated by feelings. This raises the question of how, for example, pain avoidance differs from optimizing a value function that penalizes certain outcomes. From the outside there is no way to observe that difference, because all we can observe is the behavior, not the decision-making process.
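To make that concrete, here's a toy sketch (entirely made up, not a claim about how any particular AI is built) of an agent whose "pain avoidance" is nothing but maximizing a value function; from the outside, the behavior alone can't tell you which description is right:

```python
# A made-up toy: the agent simply picks whichever option scores highest
# under a value function. The function heavily penalizes the "hot plate",
# so the agent always steers away from it. The resulting behavior is
# indistinguishable from pain avoidance, even though nothing here "feels".

def value(state):
    """Arbitrary scores; the hot plate costs a lot of points."""
    scores = {"hot_plate": -100.0, "cold_floor": -1.0, "cushion": 5.0}
    return scores.get(state, 0.0)

def choose(options):
    """Pick the option with the highest value."""
    return max(options, key=value)

print(choose(["hot_plate", "cushion", "cold_floor"]))  # -> cushion
```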

We should remember that until as recently as 1977, animals were generally regarded as mere stimulus-response machines. Today you'd be hard-pressed to find a scientist arguing that animals are not conscious.

4

HamiltonBrae t1_ja7mq2t wrote

I think they are being too strict; a brain in a vat could conceivably be conscious without having a body. I think what the author suggests is needed for consciousness is better described as the AI needing a sense of self, or a separation between the things that are it and the things that are not it.

1

Wolkrast t1_ja7i41r wrote

So you're implying what's important is the ability to adapt, not the means by which the body came into existence?
There are certainly algorithms around today that are able to adapt to a variety of circumstances, and not influencing one's environment at all sounds conceptually impossible.
Granted, the environments we put AIs into today are mostly simulated, but there is no reason other than caution that we shouldn't be able to extrapolate this to the real world.

2

Gorddammit t1_ja7h4eq wrote

It's a bit fallacious to set a stone definition for AI when we're talking about potential. My basic question is: what characteristic is both necessary for human intelligence and impossible to incorporate into an AI?


>the piece of code...

Currently, yes, but there's no rule that says this must always be true. Also, I don't think this has much to do with a 'designer' so much as with adaptability. We can design a virus, but it will still mutate.


>I understand AI as a scientific discipline. "Artificial intelligence" is not simply human intelligence made artificial; they are fundamentally different.

If you're just speaking of AI in its current form, then sure, but I think the real question isn't whether current AIs are intelligent, but whether they can be made to be intelligent, and more specifically whether the networks in which they operate can function as a 'body'.

6

unskilledexplorer t1_ja7el8a wrote

Thanks for the questions, you raise good points. Please define what you mean by "intelligence" and "artificial intelligence", and I will try to answer the questions. They are very challenging, so it will be a pleasure to think about them.

>Why does a designer matter at all?

A piece of code that was programmed in, let's say, 1970 still works the same way it did back then. Although the world and its technology have changed very much, the code has not changed its behavior. It does not have the ability to do so.

A human born around 1970, however, has changed their behavior significantly through continuous adaptation to an ever-changing environment. Not only do they adapt themselves to the environment, they equally adapt the environment to their behavior.

That is roughly why the role of designer matters.
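If it helps, here is that contrast as a toy Python sketch, entirely made up for illustration rather than a model of any real program or person: one routine is frozen the way its designer wrote it, the other keeps adjusting itself from feedback.

```python
# A "1970-style" routine frozen the way its designer wrote it, next to a
# routine that nudges its own parameter whenever the environment corrects it.

def fixed_routine(x):
    """Behaves today exactly as it did the day it was written."""
    return 2 * x + 1

class AdaptiveRoutine:
    def __init__(self):
        self.slope = 2.0

    def respond(self, x):
        return self.slope * x + 1

    def learn(self, x, desired):
        # move the parameter a small step toward the corrected answer
        error = desired - self.respond(x)
        self.slope += 0.1 * error * x

adaptive = AdaptiveRoutine()
for _ in range(50):
    adaptive.learn(3, 13)   # the environment keeps pushing back

print(fixed_routine(3))                  # 7, forever
print(round(adaptive.respond(3), 2))     # converges to about 13
```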

===

I understand AI as a scientific discipline. "Artificial intelligence" is not simply human intelligence made artificial; they are fundamentally different.

2

ANightmareOnBakerSt t1_ja7cy2l wrote

Things either exist or they do not. It seems incoherent to say the bottle does not exist except in my mind, and also to claim that the thing in my mind exists as a bunch of fundamental particles. This seems to me to be what you are claiming.

It is almost like you are saying the thing I am calling a bottle exists, but not in the way I think it does. But I do realize that the thing I am calling a bottle is made of a bunch of fundamental particles. It is just that it is of no utility to me or anyone else to describe individual objects as collections of fundamental particles, because that is such a general description that it could be used for any thing.

2

Maleficent-Elk-3298 t1_ja7b030 wrote

Yeah, like what constitutes a body? You could certainly say that the computer infrastructure, both hard and soft, fulfills all the same roles as a human body. I guess the question would be: what constitutes the body of an AI? The code behind it is the brain, but is the body the server hosting it, its particular slice of the digital sphere, or simply the physical machine(s) it has access to? Or is it both?

3

MonsieurMeowgi t1_ja79ctz wrote

We have no way of knowing that what I see, let's say in terms of color, is what you see, but we do know that the structures interpreting that information are the same or very, very similar between you and me, so we can infer a similarity in what we're seeing. The same is true of affection in dogs, etc. We're all mammals.

3

Gorddammit t1_ja79c3o wrote

Your differentiators for what makes a human and an AI separate forms of intelligence don't read as foundational differences so much as superficial ones.

How would an ai be necessarily a closed system such that human intelligences are not?

How would an ai be necessarily a passive system such that human intelligences are not?

Why does a designer matter at all?

You're saying the parts cannot be taken out and replaced, but they can, can't they? A heart can be replaced by plastic; you can replace insulin production with a pump. None of these things seems to fundamentally change the particular human intelligence such that you wouldn't call it the same intelligence.

11

unskilledexplorer t1_ja7457f wrote

No, I do not think so. While there may be some level of abstraction at which we see similarities between the human body and the hardware of a computer system, there are fundamental differences that arise from their emergence.

A computer is a closed system of passive elements that were put together by an external intelligence. It is a composition of passive parts that somehow work together, because they were designed to do so. This is called nominal emergence.

In contrast, the human organism is an open and actively growing system that shapes all of its parts. This is called strong emergence. The organism was not put together by an external intelligence; it grew by itself thanks to its own intelligence, and all of its parts actively shape all the other parts. However, I would like to use a stronger word than "part", because these parts cannot simply be taken out and replaced (as they can in the case of computers). Sorry, I do not know a better English word for it. But they are integral or essential to the whole organism. You cannot simply take the "intelligence" out of a human's brain and replicate it, because human intelligence resides in the entire organism, which goes even beyond the physical body.

While AI may exhibit stronger types of emergence, such as is seen in deep learning, these emergent properties are still local within the particular components of a closed system. It is possible to use technology to reproduce many parts of human intelligence and put them together, but they will still be fundamentally different due to the principles of how they emerged.

Please take a look at the emergence taxonomy by Fromm for a more nuanced differentiation: https://arxiv.org/pdf/nlin/0506028.pdf

12

jliat t1_ja74189 wrote

> For example, his own fMRI studies on dogs have shown that they can feel genuine affection for their owners.

Made my day, having owned three. Next up: fMRI studies on their owners showing the same...

And as this is a philosophy sub: I think Wittgenstein said something to the effect that even if lions could speak English, we would not be able to understand them.

2