Recent comments in /f/philosophy

unskilledexplorer t1_jacaw57 wrote

>If it turns out the religious folks are right and humanity was a result of some grand cosmic designer

I am afraid you misunderstood. The designer is not some supreme being. In the context of my comment, the designer is a regular human. The term "designer" is not an absolute; it is a role. The designer is the human who devised the machine, algorithm, etc.

>We have adaptive code today

I am very well aware of that, because I develop such algorithms. So I also know that while they are adaptive, their adaptability is limited to a closed system. The boundaries are implicitly set by the designer (i.e. a programmer).
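To make that concrete, here is a toy sketch (mine, not from any real system): an "adaptive" optimizer that tunes its own parameter by trial and error, but only within bounds hard-coded by its designer.

```python
import random

# Toy "adaptive" system: it improves its own parameter by trial and error,
# but the search space [LOWER, UPPER] is fixed by the designer in advance.
LOWER, UPPER = 0.0, 10.0  # designer-chosen boundaries of the closed system

def adapt(target, steps=1000):
    """Random hill-climbing toward `target`, clipped to the designed bounds."""
    x = 5.0
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        candidate = max(LOWER, min(UPPER, candidate))  # can never leave the box
        if abs(candidate - target) < abs(x - target):
            x = candidate  # keep the move only if it improves
    return x

print(adapt(7.0))   # adapts fine inside the designed bounds
print(adapt(25.0))  # target outside the bounds: stalls at the boundary
```

However "adaptive" the loop looks, it only ever explores the space the programmer chose; a goal outside that box is structurally unreachable.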

1

Xavion251 t1_jac7mgs wrote

>You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon.

Putting that assumption aside (problematic IMO, as I'm a dualist) and simply granting that it is true: human brains use different kinds of signals generated in different ways. Does that difference matter? Neither you nor I can prove it either way.

>As for your arguments, it seems that you argue that "since other humans look like you they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious".

This is reductive. I'm not talking about superficial appearance. I wouldn't conclude that a picture of a human is conscious, for example.

But I would conclude that something that by all measures works, behaves, and looks (both inside and out, on every scale) like me probably is also conscious like me.

It would be rather contrived to suggest that in a world of 7 billion creatures like me (and billions more that are more roughly like me - animals), all of them except for me in particular just look and act conscious while I am truly conscious.

>I may agree with the first, but that does not entail the opposite direction, and hence it cannot be used here. It's like saying "if it rains, the street is wet" and then concluding "if the street is wet, it rains".

No, because we can observe the street being wet for other reasons. We can't observe consciousness at all (aside from our own).

1

TKAAZ t1_jac5w7z wrote

You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon. Nothing (so far) precludes consciousness from existing in other kinds of signals, other than our assumptions.

As for your arguments, it seems that you argue that "since other humans look like you they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious".

I may agree with the first, but that does not entail the opposite direction, and hence it cannot be used here. It's like saying "if it rains, the street is wet" and then concluding "if the street is wet, it rains".

2

Sluggy_Stardust t1_jac49o1 wrote

They’re not superficial at all. They are fundamental. u/unskilledexplorer compares and contrasts nominal emergence and strong emergence, and he is correct. Way back when, Aristotle coined a three-ring circus of a word: entelechy, or entelecheia. Its meaning is often illustrated with an acorn. Where does the acorn come from? The oak tree. Where did the oak tree come from? The acorn. Hmmm. But it’s not circular so much as iterative, because each successive generation introduces genetic variation, thereby strengthening native intelligence. Intelligence for what? For becoming an oak tree.

You can talk about “programming” as though computer programming and the phenotypic expression of genetic arrangements are somehow commensurate, but doing so is both category slippage of the highest order and an example of the limitations inherent in symbolic communication systems. Carbon-based life forms are far more complex and fundamentally mysterious than computers.

If you take apart a car, you have a bunch of parts on the ground. If you put them back together in the right order, you get a car. You can do the same thing to a computer. You can’t do it to organic beings; they will die. That’s the crux. The intelligence inherent to organic beings is simultaneously contained within, experienced by, and expressed from the entirety of the being, but not in that order. There is no order; it all happens at the same time. AI can’t do that. AI can describe intuition and interpretation, but it can’t do either. Conversely, we are constantly interpreting and intuiting, but can’t describe either experience very well. In fact, many of us are bad at expressing ourselves but have interior lives of deep richness. Human babies will die if no one touches them. AI doesn’t need to be touched at all.

1

1doubleganger t1_jac3lnb wrote

Genuine question, because I’m conflicted: all of us (relatively) miss out on a lot of experiences, situations, feelings, opportunities, etc. due to things we apply (principles, for example, or fear of X). But since you acknowledge that’s the reason for it, if you dismissed it just once, you would (let’s assume this) be able to experience those things, so why don’t you? Is it because you feel it’s passed now? A case of “I already lived this far without it, so why now?”, etc.? I’m genuinely intrigued to know, because I can’t even come up with an answer myself.

3

Sluggy_Stardust t1_jac0kmh wrote

No, they definitely do not. Organic cellular communication occurs by way of the transmission of receptor-mediated signaling between and within cells. Signaling cells produce ligands, small, usually volatile molecules that interact with receptors, which are proteins. Once a ligand binds to its receptor, the signal is transmitted through the membrane into the cytoplasm. Signal transduction is the continuation of a signal across surfaces of receptor cells. Within the cell, receptors are able to interact directly with DNA in the nucleus to initiate protein synthesis. When a ligand binds to its receptor, conformational changes occur that affect the receptor’s intracellular domain.

And that’s just the tip of the iceberg. And I left out synaptic signaling in your brain, which beyond things like information retrieval and synthesis also corresponds to more complex events such as your emotions, affective states and phenomena such as intuition, empathy, altruism, etc.

0

BernardJOrtcutt t1_jabu66y wrote

Your comment was removed for violating the following rule:

>Be Respectful

>Comments which consist of personal attacks will be removed. Users with a history of such comments may be banned. Slurs, racism, and bigotry are absolutely not permitted.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

Xavion251 t1_jabq7pa wrote

Well, also I share most of my DNA with other humans. They look roughly like me, act roughly like me, and biologically work the same as me.

So it's a far more reasonable, simple explanation that they are conscious just like I am. To a somewhat lesser degree, this can extend to higher animals as well.

But an AI that acts conscious still has some clear differences with me in how it works (and how it came to be). So I would place the odds significantly lower that they are really conscious and aren't just acting that way.

That said, I would still treat them as conscious to be on the safe side.

1

Xavion251 t1_jabpx2s wrote

Actually, bacteria only "make up most of our bodies" if you look at the raw cell-count.

Most bacteria cells are smaller than most human cells, so actually bacteria only make up about 1-3% of us by weight (a few pounds).

...But the "bacteria cells outnumber human cells" is a more provocative statement - so that's the one that gets spread around.
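The arithmetic checks out; here is a quick sketch, assuming a 70 kg adult (my assumed figure, not the commenter's):

```python
BODY_KG = 70.0      # assumed adult body mass
KG_TO_LB = 2.20462  # kilograms to pounds

# 1-3% of body weight by mass, converted to pounds
low, high = (BODY_KG * f * KG_TO_LB for f in (0.01, 0.03))
print(f"{low:.1f}-{high:.1f} lb")  # -> 1.5-4.6 lb, i.e. "a few pounds"
```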

2

Xavion251 t1_jabpkv0 wrote

Except consciousness (fundamentally) cannot be measured externally, so how would you know if a machine is conscious?

You seem to be making a false equivalence between "conscious" and "acts conscious", which needn't be the same thing.

You cannot know if something that doesn't act conscious is actually experiencing things. Nor can you know that something that acts conscious (even if it says it is conscious) really is.

1

Xavion251 t1_jaboy0k wrote

Since my original comment was removed for not having enough arguments (fair enough, to be honest), I'll remix it with the comment I followed up with.

In short, this article is making a lot of completely unjustified assumptions.

Pretty much every proposition seems like a random, unjustifiable leap with no real logical flow.

"Pleasure/pain is required for consciousness"

"Only a biological nervous system could produce these feelings"

"AI does not have intent driving it"

"An AI has nothing to produce these feelings"

These are all just assumptions that can't be verified. Nor can they be logically deduced from any premises.

You could re-arrange the question "Is X conscious?" into "Does X have any subjective experience of anything?".

You cannot possibly know what an AI is or isn't experiencing (up to and including nothing at all i.e. no consciousness). Just as an AI could not possibly know that humans are conscious by studying our brains. To it, our nervous system would just be another "mechanism" for information processing.

How would you know whether a self-learning AI experiences pleasure when it does what it's trained to do? How would you know whether it perceives its programming to do XYZ as an "intention" the same way we do?

1

Ok_Tip5082 t1_jabo06f wrote

100%. I was a pure math major who sucked at algebra and arithmetic. Those are more brutish skills than are often needed to do math, and definitely than are needed to understand it.

1

NotObviouslyARobot t1_jabe551 wrote

The inverted spectra problem isn't a problem for the example I gave.

The hypothetical humans in the problem do not exist.

Real, flesh and blood humans exist.

Even if two real humans have isomorphic relationships with color, and try to paint the same thing, they'll make choices in how they use color. When creating art, not making choices is not an option. Their subjective experiences mean they won't make the same choices.

They'll choose colors in different orders. They'll mix paints differently. There will be minute motor differences. They'll perceive something, translate it to their own inner world, and then transport it out again via fine motor skills & paint.

In the final product, they -won't- have produced the same work of art, because their subjective humanity ensured that their processes would not be isomorphic. At the same time, they will have communicated details of their inner subjective experience, in an objective fashion--using a known medium. Even if you train the artists, or the elephant artists, this process is going to happen.

We've defined objective reality via consensus--and the sheer body of evidence surrounding the average experience of what redness is, is well-established. It can -feel- different from person to person & this difference can be readily communicated.

Nagel's Bat is a hypothesis designed to be untestable & immune to evidence.

2

James_James_85 t1_jaaw0vj wrote

>wouldn’t there technically be some other function that initiates the spark in reaction to the stimulus?

Our sensory organs (retina, skin, ...) are what convert the different stimuli into electrical messages. These travel up the sensory nerves into the brain, where they induce an endless train of activity, including the reactivation of memories, the complex neural process of decision making, and so on. Even something as simple as a feeling of anxiety, hunger, or feeling your heartbeat is considered sensory input, so it would be extremely hard to completely isolate the brain from it. These serve as cues to spark a certain thought or memory, which in turn sparks other memories, and so on in a continuous chain.

>And what about a thought not stimulated by external factors?

There are many types of neurons, some of which periodically fire spontaneously due to certain chemical processes (e.g., pacemaker neurons). Even if you were to perfectly isolate the brain, it would still have a baseline activity, and would still think. Though in this case I'd imagine you'd be drawing blanks most of the time; the activity would translate to random flashes of thoughts/memories here and there, until one of them induces a new chain of coherent thoughts.

You could look at it as if the very initial spark was the first neuron that fired in your developing brain when you were a fetus, and your brain has been following an endless causal chain of neural activity, altered by incoming messages from the sensory nerves and noise from the spontaneously firing neurons.
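The spontaneously firing "pacemaker" neurons mentioned above can be sketched with a toy leaky integrate-and-fire model (all parameters invented for illustration): a cell with an intrinsic bias current keeps spiking even with zero external input.

```python
def simulate(bias, steps=200, dt=1.0, tau=20.0, threshold=1.0):
    """Leaky integrate-and-fire: count spikes driven only by an intrinsic bias."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + bias)  # membrane leaks toward 0, bias pushes up
        if v >= threshold:           # fire and reset
            spikes += 1
            v = 0.0
    return spikes

print(simulate(bias=0.0))   # no intrinsic drive, no input: stays silent
print(simulate(bias=0.08))  # pacemaker-like cell: fires periodically on its own
```

The point is only qualitative: with no input at all, a cell with intrinsic drive still produces baseline activity, which is why an isolated brain wouldn't simply go dark.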

Consciousness, whatever it is, seems to be "just along for the ride": whatever activity is taking place in the brain, that is what you are conscious of, yet it has no influence on that activity. Hopefully science reaches the real answer soon; brain simulation is what I'm really excited about.

−1

LeykisMinion007 t1_jaao5oo wrote

That’s funny you mention split brain. I was going to dive into that with Iain McGilchrist’s work, but was trying to keep it somewhat short haha.

Yeah, it’s odd to think of our normally conscious mind as a balance between two. I like your thought on this. However, though outside stimulus can appear to be the cause of brain activity, wouldn’t there technically be some other function that initiates the spark in reaction to the stimulus? And what about a thought not stimulated by external factors?

−2

Turokr t1_jaaj8iz wrote

I could argue that an AI's "decision making" is no different from a water molecule's "decision making" to go down once it reaches a waterfall.

Since it's only acting following complex external inputs.

But then we would go into determinism and how technically the same could be said about humans, so let's not do that.

2