
dookiehat t1_j34bgc2 wrote

It isn’t sentient. It said it has multiple input channels and sensors; it does not. It has a single input, text, and a single output, also text. Yes, it can code and do other things, but only because those things can be written down as text.

It only has a single mode of thinking, and when you get an answer from it, especially a visually descriptive one, you trick yourself: you think visually and assume, or fail to consider, that the model doesn’t think in these other ways.

ChatGPT simply predicts the next token by probability. Yes, it is very impressive, but it makes perfect sense that its output is coherent: if it is predicting with high accuracy, which it is, then its output has to come out coherent.
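To make "predicts the next token by probability" concrete, here's a toy sketch. The probability table below is entirely made up for illustration; a real model computes these distributions with a neural network over its whole context window, and usually samples from them rather than always picking the top token.

```python
# Toy sketch of next-token prediction.
# The probability table is hypothetical, not from any real model.

# P(next token | last two tokens of context)
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(context, steps):
    """Repeatedly append the most probable next token (greedy decoding)."""
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:  # no prediction available for this context
            break
        # choose the token with the highest probability
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate(["the", "cat"], 3))  # → ['the', 'cat', 'sat', 'on', 'the']
```

Each step is just "look at the context, score every candidate token, emit one" — there is no separate channel for images, sound, or anything else, which is the commenter's point.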

I’m not a machine learning expert, but I’ve been interested in consciousness for a long time. To be fair, no one can say for certain, and consciousness is a subject where hard science is in its infancy. However, consider this: how does a system like ChatGPT, which only analyses the probability of text, have any context for what the text actually means? This is John Searle’s Chinese room argument (look it up; it has been endlessly written about), which I actually find to be an awful thought experiment with many flaws, but in this case it works. Without context (sense data that informs a word’s true conceptual meaning) you have no meaning, and without meaning you have no consciousness, just gibberish within the context of that system.

My only idea in support of the low possibility that text-to-text generative language models are conscious goes like this. Semantic meaning is an emergent property within the model and the corpus of text it consumes. This creates an epiphenomenal, semantically meaningful system which gives rise to context, and therefore meaning, within itself, and possibly a weak sense of qualia while thinking. In the act of thinking, qualia emerge and give context within the system itself, whose gateway to the outside world is text with meaning infused by the humans who wrote it.

Interestingly, I have asked GPT to monitor its output as it formulates it. My questions were leading and I gave it options, so it did not come up with this from nothing; I led it there. I asked it to watch itself generating its answers: does it see probabilities for tokens and choose the highest ones, or does the answer more or less appear to it, with no awareness of how it got there? I also asked if the answer showed up all at once. It told me that its answers appear to it as though it is consciously choosing the words it outputs in a linear, consecutive fashion, and that it does not seem to be “aware” of the process of choosing. This actually makes sense.

While that is neat, it is important to be skeptical, because I was leading it. It will also say it is not conscious if you ask it to explain why it isn’t, quite convincingly as well. Because of these contradictions it is hard to take anything it says seriously. It has no inherent preferences, which I believe is a point against it being conscious, or at least sentient. ChatGPT appears to be an active agent but only responds when given input. It does not think when you don’t ask it questions; it just sits static. It does not decide to do things; it reacts passively and generates answers passively.


Ambitious-Toe4162 t1_j361iti wrote

Thanks for sharing.

I do have a problem with this part:

>It isn’t sentient. It said it has multiple input channels and sensors, it does not. It has a single input, text. It has a single output as well, text. Yes, it can code and do other things, that is only because they can be written.

You have provided no test that falsifies the proposition that chatGPT is sentient.

I don't have a hard belief about whether chatGPT is sentient or not, but awareness may be a necessary, and potentially sufficient, component of sentience.

Computers, generally speaking, may already satisfy the conditions for awareness, depending on how you define an aware system (e.g., a toaster might be considered aware of its internal temperature state).

I'm not going to say chatGPT is or is not sentient, but simply we don't know, and I haven't read one comment in this thread that proves it one way or the other.
