Recent comments in /f/philosophy

BernardJOrtcutt t1_jd7p4po wrote

Please keep in mind our first commenting rule:

> Read the Post Before You Reply

> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.

This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

VioletKate99 t1_jd7nvgb wrote

Just pointing out a fallacy is not enough; you also have to be able to show how that fallacy discredits the argument as it is used. People committing fallacies are just doing a quick patch job on the structure that is their argument. And like any patch job, it can be just fine, ugly, or a life hazard.

2

Whetfarts69 t1_jd7liw4 wrote

It really depends on where the fallacy is, and how many forks exist within the argument - that is, how complex it is and how dependent it is on arguments vs. evidence - and how many credible arguments are necessary to consider. It also depends on the magnitude and scope of the argument, and how important or influential it may potentially be: something bearing on, say, socioeconomic quality of life, morality, or immediate physical threat, vs. something like which artwork is better or whether pineapple belongs on pizza 😂 (it's been firmly regarded as irrefutable philosophical truth that it doesn't, FYI).

I mean, yeah, technically containing a fallacy doesn't make the whole argument fallacious or worthless... but it still often does. So I don't think we need to do away with Fallacy Theory; we need to use it more appropriately and proportionally. Conversely, we also should make better arguments, with fewer, less extreme, or no logical fallacies.

I can readily disregard Fallacy Theory, then watch my above argument become stronger, or at least more true than not, partly because Fallacy Theory is no longer here to thwart me! 😂

2

kilkil t1_jd7b1mj wrote

> They expose their opinion almost as if they really weighed the alternatives, selected and then chose (!) the best thesis.

Well, of course they chose. The determinist might simply reply that the fact that they ended up choosing that option is the result of ancient chains of cause-and-effect, stretching back far into the distant past, theoretically traceable to the Big Bang.

Those chains of causality, the determinist might continue, led them to have the childhood they did, to develop the thoughts they did, and ultimately to their own interest in philosophy and to their own careful reasoning and conclusions on the subject of free will: that it is nothing but an occasionally useful fiction.

1

ViolinistDrummer t1_jd78v74 wrote

>Punishment can act as a deterrent to some

Yes, and notably this neither implies nor requires free will. Even punishment for the sake of revenge can be valid* without free will. Fear and catharsis are just biological responses to stimuli... ¯\_(ツ)_/¯

* I do not advocate for this, but it is a consideration

2

kilkil t1_jd78ej9 wrote

I've pondered this question as well. What I've concluded is that, instead of assigning "blame", "fault", or "responsibility", it's better to simply take a more consequentialist view, and ask: what are the likely outcomes of this person's actions? Should I convince them to do otherwise? Would it lead to an overall better outcome if something were done to stop them from doing it (again)? What should that something be?

By focusing on these questions, we can sidestep the question of who to hold accountable and instead look at what would be the best thing to do overall.

However, what's interesting is that answering that first question, "what are the outcomes", can be very complicated given the chaotic nature of human behaviour ("chaotic" here means "deterministic, but unpredictable in practice"). We have to use rule-of-thumb approximations for this sort of thing, instead of precise calculations. And it turns out that concepts like "accountability", "blame", "fault", and "personal responsibility" are very useful rules of thumb; in effect, when you blame someone for something, you are asserting that their behaviour requires some internal changes, or they'll just do it again. Even if the underlying causes are far outside that person's control, the logic works out the same.

To put it in maybe a more whimsical/poetic way: if we are but the fingers of the hands of Fate, then we cannot be judged for our sins, for they belong to Fate just as we do. But, since Fate doesn't have a mailing address, we'll have to settle for cutting off its fingers as necessary.

2

HugoJP t1_jd78b1t wrote

>then how would one endeavor to live life as if he has free will?

You can't.

The more interesting question is: what does it look like to endeavor to live life as if you have free will? And the answer is going to be different from person to person, so the question you asked is completely hopeless ;)

1

HugoJP t1_jd784mu wrote

>Here's the problem - this lack of free will implies none of us have true moral responsibility for our actions

This is true, but you can connect consequences to certain actions regardless of whether or not one is a free-will agent.

>and operating according to this assumption is detrimental to both individuals and society.

And therefore this is also false, because there are consequences to actions regardless of free will - in nature as well as in the man-made world.

And this is essentially what we have done with our laws. None of us have true moral responsibility for our actions, but we hold people responsible nonetheless, because the alternative would be worse. Most people just don't realize this.

Where this gets more complicated is if I created a self-conscious killer AI. Does it deserve the consequences for its killings, or do I? And if we loop back to what I explained above, we get into a problem, because the only difference between a killer AI and a human murderer is that in the first case I engineered the robot, while in the second the person was engineered by 'circumstances'. Of course, so was the engineer of the AI...

2

Sveitsilainen t1_jd7657q wrote

> It is not wholly detrimental. It grants us the liberating power to forgive anyone, for anything. Why hate anyone for their actions, when they are just an automaton, like I am? And you do not need hatred to take pragmatic actions, to protect yourself from bad people.

A rifle is way closer to an automaton, and I hated having to hold and use one. Automatons can be badly automated.

1