Recent comments in /f/philosophy

Digital_Utopia t1_j9wfnpy wrote

However, keep in mind that sophistication for a human and sophistication for a computer are two different things. Chances are, the more sophistication required on the part of a human, the easier the job is for a computer; the latter struggles mostly with what we consider easy, namely sight and the ability to hold a conversation.

While it's true that computers would struggle with creativity out of the blue, the type of creativity involved in actual jobs is much less than that of artists creating their own, independent art.

I mean, someone working on game art, or designing advertisements or websites, is getting parameters from supervisors/clients much like the prompts given to Dall-E.

6

lordtrickster t1_j9wd15z wrote

I function off a "working model for the situation" approach. You'll never know the entirety of everything, but you can know enough to be useful in a given situation.

2

JohnLawsCarriage t1_j9wcxns wrote

A big NVIDIA card? You'll need at the very least 8, and even then you're not coming close to something like ChatGPT. The computational power required is eye-watering. Check out this open-source GPT-2 bot that uses a decentralized network of many people's GPUs. I don't know exactly how many GPUs are on the network, but it's more than 8, and look how slow it is. Remember, this is only GPT-2, not GPT-3 like ChatGPT.

http://chat.petals.ml
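
To put rough numbers on that, here's a back-of-the-envelope sketch (mine, not from the comment above): using the public parameter counts for GPT-2 (1.5B) and GPT-3 (175B), fp16 weights at 2 bytes per parameter, and a 24 GB consumer card.

```python
# Back-of-the-envelope: how many 24 GB consumer cards you'd need just to
# hold the model weights in fp16 (2 bytes per parameter). Activations,
# KV cache, and framework overhead would push the real number higher.
import math

def min_gpus(n_params: float, bytes_per_param: int = 2, vram_gb: int = 24) -> int:
    weight_gb = n_params * bytes_per_param / 1e9
    return math.ceil(weight_gb / vram_gb)

for name, n in [("GPT-2 (1.5B params)", 1.5e9), ("GPT-3 (175B params)", 175e9)]:
    print(f"{name}: at least {min_gpus(n)} x 24 GB cards for the weights alone")
```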

4

edstatue t1_j9wcotn wrote

I don't agree that a higher level of consciousness necessarily correlates with a more accurate or complete perception of reality.

Perception doesn't require consciousness, and I think ants are probably not conscious, but we know they can perceive the world around them. Not exactly as humans do, but with various sensory organs.

Why, then, would consciousness bring us closer to reality? How often does our conscious mind lie to us? We know that each time we recall a memory, it gets modified before we move on... We know that our subconscious mind can even perceive stimuli better than our conscious mind (there's a reason you pull your hand off a hot burner without even thinking about it).

I posit that consciousness is a beautiful lie our bodies tell us, and that if we are to look for a living being on earth that experiences "reality" as closely as possible, it's going to be something that doesn't have sentience as a misleading bottleneck.

15

edstatue t1_j9wblzd wrote

What Lawson gets at with "openness" being all the possible ways in which a thing can be perceived reminds me of the quantum cosmological idea that reality is inherently probabilistic, and that all the different "options" available in a wave function never truly collapse, but collapse for each reference point in potentially different ways.

But where there's a difference is the idea that we're "always infinitely distant from the true open nature of things." The school of thought in quantum mechanics that I'm thinking of suggests that there is NO "God's-eye view" of reality, and thus every reference frame is equally legitimate, since no one and no thing can experience multiple reference points simultaneously.

So when Lawson says that reality is a bunch of homogeneous stuff that only appears differentiated when we use a closing tool like language, that's not far off from what quantum theorists have to say about reference frames and observation (or interaction).
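
For anyone who wants the formalism being gestured at, a minimal sketch in generic textbook notation (not Lawson's, and not tied to any particular interpretation): a state is a superposition over possible outcomes, and the Born rule gives the probability that a given reference frame records each one.

```latex
% A state as a superposition over possible outcomes i
|\psi\rangle = \sum_i c_i \, |i\rangle, \qquad \sum_i |c_i|^2 = 1
% Born rule: probability that a given reference frame records outcome i
P(i) = |c_i|^2
```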

Edit: a word

2

TheRoadsMustRoll t1_j9w9fdr wrote

>This article is kind of nonsense.

yep. here are some highlights:

>AI Creativity is Real

despite the author's wordy arguments, AI requires input, and only that input (however jumbled) will be returned on a query. if it were really creative it could dream up something on its own and populate a blank sheet of paper with something novel. AI isn't creative. the people who program it might be.

>3. Comparison: Human Brains vs. AI

despite the title of this section the author never actually makes any comparison. we only get this:

>The present analysis posits that the human brain, in terms of artistic creation, is lacking in two conditions that AI is capable of fulfilling.
>
>AI decomposes high-dimensional data into lower-dimensional features, known as latent space. [AI is more compact]
>
>AI can process massive amounts of data in a short time, enabling efficient learning and creation of new data. [AI is more comprehensive]

ftr: the human brain processes a massive amount of data and succeeds in keeping living beings alive while driving/painting/writing code. the list of things human brains can do that AI can't is very long.
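
(As a toy illustration of what the quoted "decomposes high-dimensional data into lower-dimensional features, known as latent space" amounts to in practice, here's a sketch of my own, using PCA as a stand-in for whatever learned encoder the author has in mind.)

```python
# Toy illustration of a "latent space": compress 784-dimensional vectors
# (e.g. flattened 28x28 images) into 16 features, then reconstruct.
# PCA stands in for the learned encoders (VAEs, diffusion models, etc.).
import numpy as np
from sklearn.decomposition import PCA

x = np.random.rand(1000, 784)       # 1000 samples of high-dimensional data
pca = PCA(n_components=16)          # choose a 16-dimensional latent space
z = pca.fit_transform(x)            # encode: shape (1000, 16)
x_hat = pca.inverse_transform(z)    # decode: approximate reconstruction, (1000, 784)
print(z.shape, x_hat.shape)
```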

17

Alexander556 t1_j9w96c7 wrote

If we had a device which could reliably (and painlessly) read people's minds without their consent, and could be used to pull specific information from someone's brain, should we restrict its use*?
Should we use this device at all?

- Would it be unethical to search the brain of a criminal to find information on a crime?
- Should such information be used in court, or would that count as being compelled to witness against oneself?
- Would it be unethical to search the mind of a convicted murderer to find the locations where s/he hid some of the victims, to give the families closure?
- Would it be a war crime to extract vital knowledge from a POW?

*For example, restricted to psychiatrists and the like, to help consenting patients, for research purposes, etc.

1

Quantum-Bot t1_j9w8s44 wrote

Whatever we want AI to be like, it will most likely turn out much like computers and the internet, accessible to all who are tech savvy, but dominated by the elite. Everyone can benefit from AI in their daily lives and companies like Amazon and Google are happy to provide that service, as long as they can skim data and ad revenue off of every interaction.

10

22HitchSlaps t1_j9w4onh wrote

With how narrow AI is now, I tend to agree with you. No one is going to pay for that. But the thing is, we just don't know when AGI is coming; it may be a long way off. Even so, I see the continued prevalence of narrow AI as so destabilising that it'll affect every sector and job, even if it doesn't specifically take them over. Whether AGI or this kind of paradigm-shifting destabilisation happens in the next 10 years, who knows, but I do see it as inevitable. We need an entirely new approach to society, jobs, and capitalism.

2

22HitchSlaps t1_j9w2gdl wrote

Well, that's actually fairly reasonable then. I'd say, though, that the idea that AI will do "some stuff, sure, but not MY STUFF" is shortsighted. Sufficiently advanced AI will do everything better than humans, and the tech is like an avalanche that has already started. I'm 40 years younger than you; there's no job I could ever do that won't be better done by AI by the time I'm finished learning it to the level you could reach in your lifetime. Such is life... now.

6

norbertus t1_j9w05jq wrote

This article has some problems. The biggest one -- beyond some of the more basic conceptual problems with what these machine learning systems actually do -- is the vague demand that AI be "democratized."

They never define what they mean by "democratize," though they caution that "Big corporations are doing everything in their power to stop the democratization of AI."

We have AI because of big corporations. And nobody is going to "democratize" AI by giving every poor kid in the hood a big NVIDIA card and the skills to work with Python, Bash, Linux, Anaconda, CUDA, PyTorch, and the whole slew of technologies needed to make this stuff work. You can't just "give" people knowledge and skills.
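
For a sense of what even the "easy" path looks like: running a small open model locally already assumes a working Python install, PyTorch, and Hugging Face's transformers library. A minimal sketch (my example, not something the article proposes):

```python
# Minimal local text generation with a small open model (GPT-2).
# Assumes a working Python + PyTorch install and `pip install transformers`.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads ~500 MB of weights
result = generator("Democratizing AI means", max_new_tokens=30)
print(result[0]["generated_text"])
```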

This article is kind of nonsense.

103

rottentomatopi t1_j9vy0o4 wrote

Yes, but there are very high-skill, well-paid jobs that AI is capable of doing more cheaply than humans, primarily in the arts. It's already tough getting into those fields, so this is pretty troubling, because it takes away opportunities that many people make a living from and feel fulfilled by.

2

DriftMantis t1_j9vxk30 wrote

I think the nature of reality is always a self-constructed byproduct of consciousness itself. In other words, reality, or ultimate truth, is not a fixed thing that exists outside our grasp. It is only a truth insofar as we can perceive it as such.

In a sense this makes sense, because you can assume humans have conjured some version of consensus reality, which is an abstracted version of reality as it appears to us. Therefore, we can assume that some extraterrestrial alien out there may have a different version of reality that they perceive, which may overlap with ours in some way, or perhaps not at all.

If you could gain the perspective of every perspective, forever, simultaneously, I would assume you would become god at that point, the experience of which is ultimate reality itself.

Your ego as a conscious waking human cannot perceive ultimate reality, because your ego will block it from happening and only allow understanding of the limited, everyday constructed reality.

5

rekdt t1_j9vwz7u wrote

Why not investigate the depression and anxiety? You don't have to think to know reality. Forget about philosophy for a bit: what do you see, here and now? That's all there is; thoughts about what reality is are just obstructions to the truth.

4