Recent comments in /f/philosophy

Tolbek t1_j9z0kkw wrote

Thank you! So few people appreciate, or even recognize, the actual roots of what Marx was getting at with his theories; they've rather been overshadowed by the parts the Bolsheviks would go on to cherry-pick for their own agenda.

Communism isn't something you can just make happen; it's a theoretical societal evolution. Violently forcing communism into being is like undergoing chemotherapy because it'd be really cool to have a third arm.

8

TheOtherMe8675309 t1_j9yy87b wrote

I put a memo in there that I recently made at work. I started with a bullet point outline I wrote and then had ChatGPT flesh out an actual document. I went back and forth with it for a couple of drafts, then I copy/pasted it into Word, rewrote a few sentences, changed a few words, and reorganized the paragraphs.

GPTZero says it was likely written entirely by a human. Curious what it would make of the original version, I copy-pasted that straight out of ChatGPT. According to the site, that draft was also likely written entirely by a human.

So, all in all, I give it a zero.

11

AllanfromWales1 t1_j9yvqsi wrote

And yet, we live in a capitalist society. Like it or not (and I don't), profit decides which scientific developments get implemented. The only way around that is for the government to incentivise 'progress'. But the government isn't going to incentivise something which causes mass redundancies. At least, not until AIs get the vote.

0

lllorrr t1_j9ytdn0 wrote

Yeah, to a tech person like me it's obvious. Take ChatGPT, for example. All it is doing is tossing a many-sided die to choose the next word that continues a text. Yes, the user prompt is just treated as text to be extended by the neural network.

I am not downplaying the role of OpenAI's engineers. They did a really amazing job building a language model that assigns probabilities to words in a given context. But in the end, it is just a random number generator that chooses one word at a time from a weighted list.
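The weighted-dice picture above can be sketched in a few lines. This is a toy illustration only: the vocabulary and probabilities are made up, standing in for what a trained language model would actually supply for a given context.

```python
import random

# Toy next-word sampler: a language model assigns a probability to each
# candidate word given the context, then a weighted random draw picks one.
def sample_next_word(candidates, weights):
    # random.choices does a weighted draw; k=1 returns a one-element list
    return random.choices(candidates, weights=weights, k=1)[0]

context = "The cat sat on the"
candidates = ["mat", "roof", "keyboard"]   # made-up vocabulary
weights = [0.7, 0.2, 0.1]                  # what the model would supply

next_word = sample_next_word(candidates, weights)
print(context, next_word)
```

Repeat the draw one word at a time, feeding each chosen word back into the context, and you have the whole generation loop in miniature.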

5

Whiplash17488 t1_j9yt1tx wrote

We still haven’t figured out how to democratize democracy. We have an app for everything. Why can’t I tell my representatives what my opinions are on more nuanced issues? Why can’t I have an app that shows me how the city is spending money?

Most of political discourse is posturing by a political class. They should be teaching constituents about the pros and cons of an argument rather than spending money showing the lack of virtue in each other. Who cares that the other guy is divorced? I need them to do their jobs.

Ah… I'm getting too old for this. I'm going back to my hobby: making hand-crafted guillotines.

1

Mintfriction t1_j9yskdz wrote

That's actually the premise of communism

Marx saw the massive technological strides happening in his lifetime, so the question was: what will happen when machine-driven efficiency makes the worker either unnecessary or easy to replace? Who will own the means of production then, and how will people be able to survive?

People think communism was about the Soviet Union or the abolition of markets, but it's really about this point in human history.

13

ardentblossom t1_j9yrsny wrote

Not sure why people are downvoting you. I'm not a tech person, but as "AI" stands at this current time, it's basically just regurgitating knowledge someone taught it to regurgitate, bias and all. It literally just generates whatever you teach it to, exactly how you teach it to. No intelligence behind that, imo.

4

oramirite t1_j9yrncw wrote

What a cynical view. Cost-effectively making your service, product or content worse isn't better. "Better" is supposed to represent more than just monetary gain. Quality of life, effect on society.... hello? Just because investors treat the world like a game and think the only thing that makes something "better" is a higher number on a piece of paper does not make that reality.

When we chase nothing but profitability we forget that we are humans with lives.

2

cloake t1_j9yr90d wrote

Sorry for the delay; it's way harder for me to articulate my instinctual reaction in a digestible way. Yeah, I suppose I was sounding incoherent with that initial aside. It was mainly because I looked into more unrealism after this video and felt that this greater body of reference is not how I would view things. However, I do agree with a lot of his framing and argument in the piece. I too appreciate how we're targeting a modelling of the world with our "closure," and I agree with something approximate to this subject-object relationship.

It's hard to articulate the contradiction I feel. The easiest layer is that nobody shares 100% the same perspective. But deeper, and why I think the "never attaining a true conceptualization of reality" aspect isn't the right approach, is that ultimately he's letting perfection be the enemy of the good. There's an undercurrent of shared reality that, in a sense, objectifies the emergent properties humans spend their limited attention on; and not only that, the way we mold our attention patterns also reflects some objective properties. And I recognize I'm personally a hard-ass about the so-called closure perspective, because in my field, closure gets results.

2

ValyrianJedi t1_j9yr261 wrote

There are a decent number of jobs it just flat-out isn't compatible with, though... And of equal importance, AIs aren't able to take accountability. Somebody's head has to be on the chopping block for major decisions, and that can't be an AI's...

Not to mention in some jobs the human element itself is critical, and obviously can't be replaced. Like my background is in finance and sales. Sales is about as automation-proof as it gets. I have absolutely zero doubt that my job will still exist in 40 years. With finance there are some positions that are extremely suited for automation, and really have already been automated, but there are also a boatload where it would be virtually impossible for people to trust an AI with that level of responsibility and discretion...

In positions like those, whether the AI is capable of doing something well enough to pass isn't really relevant to why it wouldn't work.

1

GreenTeaBD t1_j9ynka0 wrote

The human brain, as far as we can tell, requires input to be creative too. It's just our senses. Making creativity into anything else is basically calling it magic, an ability to generate something from nothing.

This doesn't have to be a person typing prompts for the AI; it just is because that's how it's useful. I've joked before about strapping a webcam to a Roomba, running the input through CLIP, and dumping the resulting text into GPT. There's nothing that stops that from working.

2

scummos t1_j9yn8cf wrote

Well, for one, humans' jobs have been getting automated for centuries. The world hasn't ended yet. People have found new things to do.

Also, while yes, the progress of AI tech is pretty impressive, I think people are prone to over-estimating both its current and its future capabilities. ChatGPT, for example, has some pretty severe limitations if you want to use it for anything practical, mostly because it simply makes stuff up and claims it to be true with extreme confidence. This is a fundamental problem and not easily fixable. "AI"s like this will certainly be very powerful tools in competent hands, but they will not be self-reliant actors competing with humans any time soon.

6

GreenTeaBD t1_j9ymzr3 wrote

There are open-source models that are near GPT-3. The most open are EleutherAI's models, which, though not as big as GPT-3, perform very well. You can go run them right now with some very basic Python.

The problem is less that we don't have open models than that we haven't found good ways to run models that big on consumer hardware. We do have open models that are about as big as GPT-3 (the largest BLOOM model), but the minimum GPU requirements would set you back about 100,000 US dollars.
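The six-figure price tag follows from simple arithmetic. A rough back-of-envelope sketch, assuming 176B parameters for the largest BLOOM, fp16 weights (2 bytes each), 80 GB per high-end GPU, and a guessed price per card; real deployments also need headroom for activations and the KV cache, so actual counts run higher:

```python
import math

# Back-of-envelope GPU cost for serving a GPT-3-sized open model.
params = 176e9               # parameter count of the largest BLOOM
bytes_per_param = 2          # fp16 weights, ignoring activations/overhead
weight_gb = params * bytes_per_param / 1e9   # ~352 GB just for the weights

gpu_memory_gb = 80
gpus_needed = math.ceil(weight_gb / gpu_memory_gb)   # minimum card count

price_per_gpu = 15_000       # very rough street-price assumption, USD
print(f"{weight_gb:.0f} GB of weights -> {gpus_needed} GPUs minimum "
      f"-> ${gpus_needed * price_per_gpu:,}+")
```

Add the extra cards needed for inference overhead and you land in the neighborhood of the $100k figure, well outside consumer territory.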

Stable Diffusion didn't just democratize image-generation AI by being released open source, but by being released in a form that people with normal gaming computers could run.

We are maybe almost at that point with language models. FlexGen just came out, and if those improvements continue we might get an SD-like moment. But until then, it doesn't matter to the vast majority of people whether GPT-3 is open or not.

1