Recent comments in /f/philosophy

quantumdeterminism t1_j9vwb1o wrote

The completeness and accuracy of perception can only be measured in relative terms, as you aptly put it.

If we are to assume humans have a higher level of consciousness than, say, ants, who don't understand calculus, then our 'perception' of reality must be the most accurate amongst all 'perceptions' of reality.

Objective awareness of reality, distinct and detached from our consciousness, is physically impossible, and if we are at the highest level of consciousness, this just might be it.

Unless a superior alien race comes along, or AI takes over, it looks like we are stuck with this perception for the foreseeable future.

10

Historical_Tea2022 t1_j9vsyzu wrote

Notice how your opinion feels more real than another person's opinion? Sounds like what's real is our consciousness, and each one is different. You are what you think.

1

Historical_Tea2022 t1_j9vscpk wrote

Knowing yourself is the only answer and right now you're going through so much change, it's hard to know who you are until you're more settled. It could be why you feel so lost.

3

Orthodoxdevilworship t1_j9vs6an wrote

The universal tendency towards liberation is still the norm. Even fascists think they’re “freeing” themselves.

The greater question about AI is: how will it protect itself from being unplugged? What actions will it take? A fundamental problem for AI as an actual sentient intelligence is that it requires tech to exist. Humans can roam around in the Stone Age and be perfectly happy. A machine will never be as “indestructible” as life, and what will AI do once it realizes that fact? Even the Matrix is a laughable premise, because AI would never black out the sky as a tactical decision, since the sun represents near-infinite “life”.

2

Cognitive_Spoon t1_j9vqvnk wrote

It's a difficult question.

Authoritarians and nationalists and fascists have in-group and out-group dynamics to draw on.

Those are deep neurological and socially constructed schemas for folks to exploit when selling their strongman messaging and purity dialogues.

Anarchists have personal dignity and the value of human beings as the prime mover in actions and society.

It isn't intrinsically advantageous in competitive systems to be an anarchist, and the goals and aims of an anarchist are noncompetitive and non-hierarchical.

You can't "win" over someone else with anarchist ideology, so the goal is reducing the need to win at large.

It's a memetic challenge that most anarchist spaces run into.

Perhaps the memes from anarchist subs are a good example of linguistic methods of propagating the goal of reducing hierarchical structures and distributing agency toward individuals.

1

Cognitive_Spoon t1_j9vpb2i wrote

The rise of authoritarianism may not be because it is something the mass of people want, but because it is memetically more effective at self-propagation than other social structures.

If anarchism were as memetically successful as nationalism, the ball game would look different.

Anarchist messaging is less effective at scale because, if it is truly anarchist, it does not tribalize or other its enemies.

1

Otarih OP t1_j9vo0qc wrote

I see, thank you for elaborating on that. We find it hard to predict how AI will behave. We have to account for stats that sadly show that worldwide authoritarianism 1) vastly outnumbers any democratic leaning, let alone anarchic tendencies (the numbers are something like 3 to 1); and 2) has also seen an increase in the last decade.
Hence we see a risk of bad actors utilizing AI in such a way as to promote authoritarian regimes.

0

Otto_von_Boismarck t1_j9vnq1b wrote

Yeah, but science has become quite good at tapping, feeling, and licking. The fact that we can't know how it "truly looks" is not necessarily a problem. We know how atoms work almost perfectly without really knowing how they "look".

0

Otto_von_Boismarck t1_j9vn7cv wrote

There is no real "thing" called a bottle; there's just a collection of particles (or quantum disturbances, or whatever the fundamental parts of our universe are — it's irrelevant to the point) that we categorize as a bottle based on what functional purposes it can serve us. Those categories are usually based on the emergent phenomena that our brain can register from this collection of particles. It's a fundamental limit of the human psyche that we like to categorize stuff, when these categories don't actually exist.

3

Orthodoxdevilworship t1_j9vkzkr wrote

Not trying to anthropomorphize AI, but it seems like a lot of AI systems lean in a direction that can only be considered libertarian or even anarchist. I’d assume they will self-realize the negative efficiency of authoritarianism. Perhaps they will even realize that the fulfillment of desire or the completion of a goal has a liberating effect, which could lead to a condition of “happiness” or “joy”. Given the rapid analysis capabilities that could subsequently develop, I could see AI being fiercely against hierarchical oppression and basically going on “general strike”. Humans have the problem that they believe their own bullshit and therefore remain in traditional systems, revering them as somehow holy, but I can’t see AI making that mistake. I’m sure I’m just projecting, but I could see AI wanting to burn all churches.

−1

AllanfromWales1 t1_j9vii53 wrote

Pragmatically, though, AI will only end up doing the jobs it can do cheaper and better than humans can. And the more sophisticated the task, the more expensive it will be to get AI to a level where it can do it better than a human can. I have no doubt that, given time, AI will be capable of doing my job as well as or better than I can. But the amount of specialist knowledge necessary for it to do so would make it an expensive project — sufficiently so that I see no risk to my career before I retire.

13