Recent comments in /f/philosophy

FrozenDelta3 t1_jch4jg0 wrote

Allow me to try a different approach.

The incompleteness theorem states that in any consistent formal system rich enough to express arithmetic, there will always be true statements that cannot be proved within the system. Responses to this theorem have varied. Some people have proposed that if we demand mathematical certainty as the standard of proof in the sciences, and mathematics itself is not entirely provable, then absolutely nothing is certain. But while the incompleteness theorem is a problem for those who want math to be entirely provable, it only bites in cases of negative self-reference (a statement that, in effect, asserts its own unprovability). So, as of now, math is provable except in this specific paradoxical self-referencing scenario, yet people still claim that all math is suspect despite its accepted provability.
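For concreteness, the "negative self-reference" at issue is standardly rendered as a sentence asserting its own unprovability. A textbook-style schematic, offered here as a gloss rather than as part of the original comment:

```latex
% Schematic Goedel sentence G for a consistent formal system F that can
% express arithmetic: G is provably equivalent to its own unprovability.
G \;\longleftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
% If F is consistent, F proves neither G nor (via Rosser's refinement)
% its negation; yet G is true of the standard naturals, since it "says"
% exactly that F cannot prove it.
```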

I would rather judge a situation's provability first, before weighing in on its likelihood of occurring or being real. It's unprovable whether we are or are not brains in jars, and that is my ultimate position, but if I were forced to choose I would lean unlikely. Do I believe it's unlikely? No, I think it's unlikely. First and foremost, it's unprovable either way.

Would you happen to have access to this journal?

https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/knowledge-before-belief/B434EF04A3EA77018384EABEB4973994

While many philosophers may agree that knowledge depends on true belief, I see that not everyone does. It seems to be a semantics game, each side clamoring for its specific word choice to become primary.

> A philosophy professor of mine once asked me if I knew that George Washington crossed the Delaware.

My response would be: "It's been mentioned in history books, so there may be truth to this story." If the professor pushed me to choose belief or disbelief in the story, I would push back against participating in either. I would much rather report on the origin or state of communicated beliefs than choose to believe or disbelieve them myself.

In your second example you speak as if you had read that Washington's crossing was propaganda or intentional misinformation, and then ask how you could believe what you read. Again, I would state what is written, or even what is accepted by others, without taking the next step of believing (or disbelieving) it myself.

Edit: I can speak about things without participating in believing or disbelieving them.

1

Base_Six OP t1_jcgtaf8 wrote

I think there's space for an everyday sort of knowledge if we define it as "beliefs in which I'm highly certain, and for which I'm highly certain I will not encounter contrary evidence." That feels like it falls far short of the general philosophical constructions of knowledge, though. For instance, under that sort of construct I can "know" things that are false, or know things that are contrary to my other beliefs. It's a useful shorthand, but not the same thing as the Knowledge of Descartes, Russell, or Goldman. It's a far cry from JTB, in any case.

There are people who subjectively have no doubt that the world is flat. Does that mean they have knowledge that the world is flat? Similarly, I have had dreams in which I felt zero subjective doubt that what I was experiencing was reality. Does that mean I know my dreams to be reality? I don't think these sorts of edge cases are a problem for a colloquial knowledge-as-strong-belief sort of a construction, but I think they speak to its frailty as a philosophical construct.

I would define "reasonable" as the conclusions you come to that you subjectively feel to be most logical. These may not actually be logically sound, but we have to make do with the best we're capable of. If there's better logic out there that I don't have access to, it's irrelevant to me when I ask the question of what I ought to believe.

The caveat here is that I'm premising that statement on the notion that said logic is inaccessible. If I gain access to new logic, it would be unreasonable for me to discard it out of hand because it disagrees with my conclusions. This applies to most conspiracy theorists: they aren't unreasonable because they've come to false conclusions; they're unreasonable because they've supported their false conclusions with cherry-picked and/or fabricated evidence that's extensively contradicted. Ignoring those contradictions and ignoring the baseless construction of those beliefs is what renders them unreasonable.

If someone believes the Earth is flat because they're a child in an isolated community who has been told by trusted teachers and parents that the Earth is flat, they're reasonable in holding that belief. If someone insists on believing the Earth is flat when confronted with mountains of counter-evidence and thousand-year-old proofs of its roundness, those same beliefs are no longer reasonable.

1

aaclavijo t1_jcgt2h2 wrote

"Well that's what makes it so exhausting to be human in a highly capitalist society, your whole value is based on how well you can adapt to situation and you constantly adapting to them to meet your basic needs. It's basically like being a wild creature out there in the world. We just don't realize how stressful it is."

-Stacey Higginbotham

The context of the quote is a discussion about an advanced AI being defeated at the game of Go. Players finally defeated the AI with a tactic based on grouping: the computer couldn't understand or recognize the pattern of grouped stones. To most players this seems very obvious; however, the AI will learn and adapt, and so will humans.

What I found striking about the quote was that here we are in 2023 thinking the role of hunter-gatherer is behind us, and yet Stacey points out that we're still performing that age-old role, and she accurately defines value in our society.

You can hear the full conversation on This Week in Google, episode 707.

2

Ohgodgethelp t1_jcghkf4 wrote

>So maybe the Mugger should up the price -- why ask for a measly 10£ if it only can be done once? And now we are in familiar, but arguably unavoidable, "icky" territory of assigning cash value to the physical well-being of individual humans

From the post above mine. This raises another interesting question: there is a threshold variable. From 0 to X dollars, paying is the cheaper option; at X dollars, the cumulative cost to self overtakes the danger to self, and the ice pick becomes an attractive option (see the sketch below). So the mugger does in fact start the process of assigning a value to a life. Then the individual (or, more realistically, the community that looks away) decides at what point the danger to self and the value of the mugger's life cross. So it really wanders into the territory of the most utilitarian of pursuits: the judicial system.
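A minimal sketch of that crossover in code; every number below is invented purely for illustration, nothing comes from the thread:

```python
# Toy model: the victim complies while total expected payments stay below
# the expected cost of violent resistance (danger to self, legal risk).

def preferred_option(payment, expected_demands, resistance_cost):
    """Return 'pay' below the threshold X, 'ice pick' at or above it."""
    cumulative = payment * expected_demands
    return "pay" if cumulative < resistance_cost else "ice pick"

print(preferred_option(10, 3, 500))   # pay       (30 < 500)
print(preferred_option(10, 60, 500))  # ice pick  (600 >= 500)
```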

3

anon19895 t1_jcgfg41 wrote

Can someone please explain this part? It doesn't make any sense to me:

> I wouldn't take the course. If you gave me the money, I would in fact keep it for myself... I don't see how it would be relevant for a Rule Utilitarian. Your giving me the money would be part of the best possible combination of everyone's acts no matter whether I would take the course. What should matter to you is that I could do so – not whether I would.

Rule Utilitarianism says to act in such a way that utility would be maximized if everyone acted the same way.

"Give $10 to someone if it will make them an effective altruist" makes for a utility-maximizing rule.

"Give $10 to someone even if there is a negligible chance of them becoming an effective altruist" does not make for a utility-maximizing rule.

What rule exactly is the mugger proposing that requires Bentham to give him $10? "Give $10 to anyone who could become an effective altruist, even if they just told you they won't"? That hardly sounds like a utility-maximizing rule.
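To make the point concrete, here is a quick expected-utility check on that proposed rule. The probability and payoff figures are invented assumptions; the dialogue gives none:

```python
# Expected utility, per encounter, of "give $10 to anyone who *could*
# become an effective altruist." All numbers are illustrative only.

def rule_utility(p_altruist, benefit_if_altruist, cost=10):
    """Expected utility of following the rule in a single encounter."""
    return p_altruist * benefit_if_altruist - cost

# A negligible probability sinks the rule even with a large benefit:
print(rule_utility(p_altruist=1e-6, benefit_if_altruist=100_000))  # about -9.9
# The rule only maximizes utility if compliance is actually likely:
print(rule_utility(p_altruist=0.5, benefit_if_altruist=100_000))   # 49990.0
```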

7

HamiltonBrae t1_jcgawei wrote

What do you mean by beliefs here? If a belief is "a subjective attitude that some proposition is true," then I feel like a reasonable, justified belief that something is true isn't really that different from knowledge. Obviously, the thing you believe has to be true to count as knowledge, but then you believe it is true by the definition of belief. If your evidence is strong enough, or reasonable enough, that you subjectively have no doubt, then it seems you would logically believe that you have knowledge of it, so is there much practical difference? In cases where you have less confidence or certainty in the evidence, then yes, you may not believe you have knowledge, because you are obviously not sure; but then again, I don't think someone engaging in the "folly of knowledge" you are arguing against would claim knowledge either, because they are unsure. The stances are hard to distinguish.

So even if knowledge here is defined by JTB, I may not practically be able to get rid of the belief in knowledge; I believe I have knowledge in circumstances where subjective uncertainty approaches zero (e.g., where my house is). Your article's view ends up with something like a Moorean paradox: it claims to be "discarding knowledge" but still logically ends up believing in it in the same cases anyone normally would. Surely, then, the problems of skepticism about knowledge remain when using the term belief as defined above, if you believe that you have knowledge (regardless of whether you actually have it under JTB)?

 

Regarding your skeptical hypotheses: you say we shouldn't believe the strongest skeptical hypotheses because they are "unactionable." I will give you that one, though I think it's conceivable for someone to have weird or incoherent beliefs like that and still function. The unactionability point doesn't really seem to affect most of the weaker skeptical hypotheses at all, though; merely believing (or even just being unsure whether) you live in a simulation, or that an evil demon is deceiving your senses, doesn't seem to contradict "actionable" beliefs at all. It's still possible to live a normal life in a simulation.

 

Also, it seems that what counts as reasonable evidence is subjective. Your examples kind of preach to the choir of someone with relatively normal beliefs, but could you actually convince someone who holds one of these skeptical hypotheses to change their beliefs? Probably not, if their beliefs seem reasonable to them. Their beliefs and standards of evidence may seem arbitrary and weird to you, but so might yours to them. They might ask about your "falsifiable hypotheses": how can you be so sure there are no bees in the suitcase, or how do you know your test to check the broken watch is reliable? I feel like you would ultimately resort to things like "because it happened before" or "because I remember these things tend to happen"; then they might ask how you can show that this memory or knowledge is reliable, and that opens the door for them to say that your beliefs come out of nowhere, that you haven't shown or justified that they are definitely true, and to ask why the skeptic should believe them. I think if you cannot convince the skeptic, then you haven't truly solved the problem, unless the article implies the skeptic should believe their skeptical hypotheses based on their own "reasonable beliefs." I guess that's fine, but it's unintuitive to me to pit these different hypotheses against each other if the message is essentially just to believe whatever you think is reasonable. Neither would there seem to be much consequence to someone simply entertaining their uncertainty about an evil demon, or even crossing the threshold to belief, if doing so had no effect on their "actionable" living.

 

I think an interesting point, too, is that these types of skeptical hypotheses are held by real people in some sense. Some people genuinely believe we are in a simulation, some people believe that the universe is purely mental (or purely physical), and many, many people believe in some kind of God. Is God that much different from a (non)evil demon? Especially something like a creationist God, where all the evidence that evolution happened and that the universe is billions of years old is just wrong.

 

Edit: Following from the last paragraph, it's also interesting to think about how a Christian crisis of faith is kind of analogous to the skeptical problems raised by Descartes, but inverted. Christians face the possibility that their world could have been created without the (non)evil demon existing, in which case everything that follows from that belief is also false.

1

Traveevart t1_jcg9otw wrote

The mugger's failures didn't discourage him, I would argue, because he had previously succeeded with Bentham. If you do something and it works the first time, even if it fails in several subsequent attempts, you'll keep trying because you know it can work. If you try something and it doesn't work the first time, you have no reason to believe it is capable of working at all, and you are therefore less likely to continue.

2

qj-_-tp t1_jcg8y93 wrote

While those conjectures are fascinating to debate, at some point an ice pick to the temple obviates the need to make any further payments, or to have tedious ongoing non-consensual encounters with violent criminals.
I get that's a different debate. But I think variety in debates leads to better outcomes, so we should consider it.
"How many times has it been? Three? Here's the situation: I don't have cash handy, but I have something of equivalent value that I can give you instead." "Well, don't just stand there, gimme!" And thus the infinite regress is averted and, as a bonus, after cleaning, the ice pick can be used again if needed elsewhere.

6

MordunnDregath t1_jcg5n6m wrote

But that's the point, isn't it? A dialogue like this hinges on a few assumptions about the characters involved, including a contradictory demand the author makes of the audience: that we treat these characters both as caricatures and as accurate representations of the philosophies under discussion.

Yet it all falls apart when we ask, "Why wouldn't the Utilitarian simply respond to the Deontologist with 'I don't believe you'?" There's no point in continuing the conversation past that realization.

5

mirh t1_jcg3oer wrote

There's absolutely nothing wrong with thought experiments, and even with spherical cows (to the extent that the approximation is still usable).

The problem comes up when you try to focus monolithically on just a single facet of a topic (as this article does), forgetting not just the common ground and results of a discipline... but even omitting the most basic common sense that a random Joe would have.

7

mirh t1_jcg2v55 wrote

Damn, there's so much going on I don't even know where to start...

https://en.wikipedia.org/wiki/Moral_hazard

https://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoner's_dilemma

And even granting that deontology has to be opposed to consequentialism (it doesn't, really, unless you conveniently cherry-pick particular frames of reference or time spans), how does that say an iota about lying? You could even be a saint, but if Nazis were to knock at your door, you wouldn't reveal the people hidden in the basement.

No shit ethics and morality end up completely different from whatever real rational expectations you'd have of them, if you somehow introduce "ontologically indissoluble and unavoidable" pacts/contracts/bindings that can assure you of a certain behavior no matter what.

2

2ndmost t1_jcg1grl wrote

I've always thought about this, too! I knew a lot of would-be Stoics in college and, setting aside how Stoicism in general gets misinterpreted, the Meditations has always been particularly thorny for me.

Like no one ever seems to talk about Aurelius' philosophy from the point of view that he was the most privileged person (arguably) in the known world.

OF COURSE he would want people to do what's good for the state without emotion or worry about strife - he needs them to in order to justify his rule.

2

Traveevart t1_jcg1dlh wrote

Well-written! However, I have two primary disagreements with the author here.

  1. Bentham begins as an act utilitarian, requiring that he maximize utility, such that none of the potential alternative actions could be more effective. I would argue that, in the first scenario, the maximally effective course of action for Bentham would be to make his best effort to convince the mugger to abandon deontology--and ideally all his scheming efforts.
     
  2. Partially related: I'm not even convinced that giving the mugger the money is the maximally effective action in the first case. I would assume, in most circumstances like this, that if one were to agree to the mugger's demands, he would continue his scheme into the future. The author sort of engages with this point when Bentham asks, "Won't this encourage copycats?" and the man responds, "No, it'll be our secret." However, we aren't concerned with other copycats; we're concerned with that specific mugger continuing his scheme. Reasonably so, since that's exactly what happens when Bentham gives him the money. In reality, both choices Bentham can make eventually end with the mugger cutting off at least one finger. Either he refuses outright and the mugger cuts off one finger immediately, or he hands over the money and loses several fingers later, as the mugger is emboldened by his previous success. Therefore, Bentham should not hand over the money: even if the mugger cuts off one finger in the moment, the failure of the scheme could discourage him from ever trying again, which ultimately saves more fingers (a rough expected-value sketch follows below).
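Here is that sketch: a toy expected-value comparison of the two choices. All probabilities and demand counts are invented assumptions; the dialogue supplies none of them.

```python
# Rough expected-finger-loss comparison for refusing vs. paying the mugger.
# Every figure below is illustrative only.

def expected_fingers_lost(refuse, p_cut_on_refusal=0.9,
                          future_demands=5, p_cut_per_future_demand=0.9):
    if refuse:
        # One confrontation, one finger at stake.
        return p_cut_on_refusal * 1
    # Paying emboldens the mugger: several future demands, each of which
    # Bentham must eventually refuse or keep paying forever.
    return future_demands * p_cut_per_future_demand * 1

print(expected_fingers_lost(refuse=True))   # 0.9
print(expected_fingers_lost(refuse=False))  # 4.5 -- refusing dominates
```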
11

Ohgodgethelp t1_jcg1biz wrote

>What matters to Bentham is the future, so his moral calculus would be the same. That is, on the second iteration of the threat, Bentham must hand over another 10£. And so on...

I feel like I should point out that this is literally how the mafia works. You add a few layers, such as the money originally being given as a loan and the 10£ being an "interest payment," meaning it comes on a regular schedule and isn't a surprising or crippling amount.

9