Recent comments in /f/philosophy

nothingexceptfor t1_j9y55iw wrote

I didn’t say apocalypse, and as I said I’m not referencing the article. I’m not even talking about AI taking over or becoming sentient, or any of that sci-fi nonsense about robots trying to kill us; I’m talking about automation and endless efficiency and the effect it will have on jobs and on our current world in general (and eventually on our own minds). I do believe this revolution will happen in our lifetime, when a lot of people lose their jobs because a fraction of the same workforce can do the same work using these tools. That fraction of people are the ones who get these “new types of jobs”, but those will also inevitably go too.

People keep dismissing the impact of this because, when the threats of AI are mentioned, images of movies and bad robots immediately come to mind, instead of tools that essentially render a large and significant portion of the population redundant in the workforce. And when that happens, the economic system itself collapses.

The cost-effectiveness part of the equation is a matter of time. It’s also not something that everyone needs to own: you just need one or two major service providers offering these tools as a service to have a huge impact. You don’t need your own server farm or AI models to make use of this; just pay for the service, which is a lot cheaper than a larger workforce.

5

Purplekeyboard t1_j9xz3ir wrote

"Democratizing" image generation, if that means giving people access to it free, would not be difficult. Imagegen is not that expensive. You can buy unlimited AI image generation now for $25/month from NovelAI (although they only have anime models, but photorealistic models are not more expensive to run).

This also comes with unlimited text generation, although using smaller, weaker models than the best ones available. ChatGPT is currently free as well, and it is the best text generation model released so far.

So, at least as long as you live in a first world country, these types of AI are easy to get access to.

2

AllanfromWales1 t1_j9xy217 wrote

"..very soon.." is an opinion. AI has been around a while already, but the signs of it taking over aren't there yet. Yes, it's improving and accelerating, but for now anything that's not repetitive and easily interfaced is not happening. There's still a huge gap between 'theoretically possible for AI' and 'cost-effective to implement for AI'. I'd be very surprised if the apocalypse you predict will happen in my lifetime.

−5

hamz_28 t1_j9xwuok wrote

I don't think it's the existence of reality that's in question. Maybe the existence of mind-independent reality, or which properties are intrinsic to reality, but the existence of reality itself is pretty tough to argue against.

−2

shirk-work t1_j9xvdju wrote

Do you know that puzzle game where you have to slide the blocks around to free a piece? I always thought of it like applying the right moves at the right time to unlock it. Once you know the possible moves you can get a feel for unlocking things. In that way it's pretty similar to algebraic manipulation. Some people are amazing at that, but it's not my favorite. Proofs, number theory, group theory, and discrete math are more my jam.

3

Magikarpeles t1_j9xudeu wrote

How long before someone makes an AI that makes a website that sells ads and uses the money to buy cloud infrastructure to make more sites and sell more ads to buy more infrastructure?

Or easier: a 4chan AI that starts a cult with little incel minions doing its bidding?

I give it months.

1

Magikarpeles t1_j9xtz9x wrote

Stability AI “democratised” Stable Diffusion by releasing their models and allowing open source platforms to use them. The open source solutions are arguably better than the corpo ones like DALL-E 2 now.

OpenAI do release older models of GPT but they are vastly less sophisticated than the current ones. Releasing the current models would “democratise” chatGPT but it would also kill their golden goose.

13

Krammn t1_j9xtvhc wrote

I noticed a distinct lack of pictures explaining the topics talked about in the article. I would have liked some pictures separating the different sections.

The article is quite wordy, and pictures would help to explain the concept in a visual format.

1

ilolvu t1_j9xtg0r wrote

>The text is from Peter Green's Alexander to Actium (California 1993), from Chapter 35, "The Garden of Epicurus" (618-630).

Thank you. I'll try to hunt that down.

>My original post simply expressed the direction I have come to lean concerning the preponderance of testimonia and scholarly debate.

The problem is that you're trying to evaluate Epicurus' personal behavior from those sources. Most of them are either vague or unreliable (like Plutarch) because they come from writers who were philosophically opposed to Epicurus, or wrote centuries later.

>You are of course free to weigh the evidence yourself, toss out whatever you wish, and thus lean in whatever direction you wish.

My direction is that we don't know, and probably can't know, because there are no sources from people who knew Epicurus personally.

>I hope you'll understand if I tend to weigh the opinion of Peter Green and my own over yours. :D

Of course. This is Reddit after all...

1

ibringthehotpockets t1_j9xs6bu wrote

It’s easy to be biased towards detecting that it’s an AI if you read the comments here first. There was very little in the article that made me think “nope, can’t be human” - it’s a post on a WordPress blog, and I wouldn’t really hold that to NYTimes-level writing. The thing that stood out most was the jumping around between topics like Oppenheimer and Nietzsche. But still, to me that just reads like a high schooler’s essay lol.

So to answer your question, yes. I read a lot of books and social media, and this passed the test for me. Nothing distinctly unhuman about 90% of this writing. Literally everyone in the comments thinks so too. Unless you’re prompted with “this article is written by AI,” I think most people are gonna go towards no.

5