Recent comments in /f/philosophy

i875p t1_ja5zg0d wrote

It's kind of hard to understand how things like this keep happening. Nagel's argument basically boils down to something like "we, as human beings, are physiologically quite different from bats, and therefore it's very likely that we'll never know how a bat experiences things from the first-person perspective". It is as science-friendly as philosophy can get.

3

Maximus_En_Minimus t1_ja5bvrv wrote

Well, more than that: your boss would experience the reason why you didn’t want to work - the set of circumstances that were making you need a day off to recharge and recover, or perhaps just to enjoy yourself as humans do. Those automatons which we call bosses and managers - although not all are - would gain not merely a sensibility of empathy for their employees, but would actually experience their needs and desires.

1

Otto_von_Boismarck t1_ja56oaw wrote

If it turns out there isn't anything intrinsic, I'm willing to change my view. Doesn't bother me. But it seems unlikely to me that reality just keeps going smaller ad infinitum. Nothing seems to suggest that, thus with current scientific knowledge it seems like a reasonable conclusion. I also didn't mean to suggest that necessarily it has to be interactions between separate intrinsic items. I just phrased it like that for simplicity's sake. It could also just be a singular intrinsic field with disturbances throughout it, or something else entirely. Just one or several "things" that are intrinsic. I never decided to focus on this point because it's, well, irrelevant.

Also, it is my understanding that QFT hasn't actually proven itself as a solid framework as of yet. Regardless, that's beside the point.

The hard problem of consciousness isn't a hard problem at all. I never found any of the arguments particularly engaging. Of course figuring out how consciousness emerges is difficult; that doesn't mean there is reason to believe it somehow arises through magic. Absence of evidence of it being an emergent property is not evidence of absence. People used to think the earth was the centre of the universe, and now idealists and what have you think the same about consciousness. Please, get real.

The whole consciousness conversation is just boring. Simply a matter of waiting for science to explain it, nothing more.

1

songwritingimprover t1_ja546o8 wrote

Nagel was saying that we can't have experiential knowledge of what it is like to be a bat; likewise, we don't have access to the qualia of a dog, just as one human doesn't have access to the qualia of another human.

Empirically showing that dogs have feelings/emotions doesn't show that Nagel was wrong about this. It's something completely different.

17

Mustelafan t1_ja4z6mw wrote

Example #136,742 of a scientist misusing their authority as an expert of a scientific field to assert they've disproven an influential philosophical concept that they don't understand. This is the equivalent of an idealist philosopher saying they've disproven evolution by natural selection because animals only exist as impressions in the mind and thus can't be said to physically reproduce or die, or something.

16

Maximus_En_Minimus t1_ja4y8sp wrote

If I were to take a consequentialist point of view, but also use common sense, such a device would likely lead to a capitalistic fascism of the likes of an Ultra-China. Who needs cameras? Just put a Mind-Reader 2000 on every block and monitor your population's every thought.

It is ironic: in the post above I claim a pure-empathy device would positively transform the world - which I still agree with. However, as soon as you remove consent and reduce the thought/experience to mere information, rather than a shared sensation of understanding, it quickly devolves into malicious usage.

1

Strato-Cruiser t1_ja4xiog wrote

fMRIs are neat, but they have limits. They simply show activation of areas of the brain. What they can't do is tell us how that activation works, or how the brain is constructing and perceiving information. The parallel being drawn is: since many people experience X when region Y of the brain is activated in situation Z, then if an animal shows the same region activated, its perception must be like ours. I think this is a wrong conclusion to draw.

8

Maximus_En_Minimus t1_ja4wvm2 wrote

It sounds like you are anthropomorphising these beings: an ant has no conception of the wider colony as we would; the ‘non-subservience’ of a spider to another being does not make it more aware of death; a worm is not more intelligent than a spider.

If you want the closest experience to what one of these beings ‘feels’, go pee and tap your foot on the floor; their level of experience probably amounts to no more than a fraction, or perhaps an equivalent, of these sensations combined - without the emergent hierarchy of confluent memories, heuristics, ego impressions, saliences, considerations and sensations which constitute a few moments of your conscious experience, and which would ever allow the word ‘understanding’ to be applied to such existential concepts as meaning, life and death, and survival. The brain is capable of firing a hundred billion neurones in a single second; it can still take a genuine while to actually understand any existential concept properly.

Any one of these three is probably about as intelligent as one of those hoovers which automatically respond to dirt in the house. I have met people of thirty and forty years of age who have yet to fully come to terms with the reality of their eventual demise; some of the most intelligent animals - dogs, pigs, cats, apes, dolphins - are certainly not actually aware of it. I thoroughly doubt an ant, worm or spider has even the slightest clue what any of your words are referring to, beyond reacting to light signals which elicit certain responses.

1

kompootor t1_ja4w807 wrote

I am in the sciences but I've read into philosophy quite a bit. One important thing I've learned, after quite a long time, is that whenever I'm reading something, and I'm ready to object that "Physics proves that's incorrect!", then I'm in the process of missing the point.

For those passing by, everyone's referring to Nagel's famous essay on the predictive limitations of the connectivist model in the NMR era vis a vis Chiroptera.

17