Recent comments in /f/singularity

FrogFister t1_jedi46k wrote

Now that I think of it, if the streets are all under camera surveillance, I should feel safe in a way. Though I was more suggesting being randomly detained by officials on suspicion of being an international spy.

I am interested in seeing the Old China. So mostly nature, but sure, some cities too. I'd especially love to see the rural areas and places that still seem to have a bond with the old times.

−1

systranerror t1_jedhnrt wrote

There's no way the book he wrote is still relevant, even if he did finish it. Imagine it coming out in June when most of it was written pre-2022. He's got a real challenge now to somehow keep it relevant, given the changes that will happen in the months between submitting to the publisher and arriving on store shelves/Amazon.

3

FreakingFreaks t1_jedh9r6 wrote

We fired a guy who wrote the text for our email campaigns. Now we use GPT-4 for it. But we wouldn't have done that if he had done his job on time; it really wasn't a hard job. We asked him many times, but now with GPT-4 it doesn't make sense to keep working with him and pay much more for the same result.

4

mihaicl1981 t1_jedgyly wrote

Hmm... unfortunately, it also seems a lot of stuff was released to the public in March.

It's not like it was all discovered just now.

Mostly GPT-4 and the attached tech (Copilot(s) and plugins), plus papers.

But there were 3 years between GPT-3 and GPT-4, with ChatGPT (3.5) in between.

I am really scared about jobs and the economy in general.

So IMHO there is no risk of the paperclip machine (that is, ASI) being born for a good decade.

32

TMWNN t1_jedgyc5 wrote

>Dehumanizing language that equates any disagreement with non-personhood is shitty and you should feel bad for doing it.

As I indicated, I am among those who don't automatically dismiss the likes of GPT as "just autocomplete". On the contrary, pattern matching is a fundamental part of intelligence. I doubt there is a Redditor who has not replied with a meme or copypasta. That's normal and natural.

Being unable to do anything else is not normal or natural, or at least should not be. I wish I could find the Reddit post; it was astounding how many, many hundreds of comments all said the exact same thing. That they used slightly different wording made it worse, not better; at least if they had all used the exact same words it would be clear that doing so is part of collectively participating in a larger metajoke.

Instead, hundreds of allegedly sentient human beings a) immediately posted the first and only thing that came to their minds in response to TEXAS = BAD, and b) did not bother to check (or did not care) whether anyone else might possibly have come up with the same brilliant riposte. That is behavior that the term "NPC" well describes.

4

mutantbeings t1_jedgsly wrote

Personally, I think it's creating just as many jobs as it's replacing at present.

I work in tech and am aware of several companies that have started hiring MORE people to deal with an influx of ChatGPT-created inaccuracies and problems, mostly in support or testing roles. I also recall the sci-fi magazine that closed writing submissions and hired actual human writers because they were getting flooded with poor-quality ChatGPT submissions and had no way to vet the massively increased volume.

On the flipside I am not aware of any major layoff in my industry specifically related to AI, just huge problems being caused — so far.

It's not really ready for prime time, let's be honest; it's still hugely unreliable.

Longer term for this industry, I still don't think it's an open-and-shut assumption that it replaces jobs. I can see it replacing many tasks, mostly competing with the people in my industry already offering "no code" products (which haven't replaced our jobs either, despite existing for 20 or so years).

And I mean, consider this: as an employer, if one of your employees suddenly started producing 10x as much code, would you think "hmm, I should fire them"? Of course not. I think it's WAY more likely the effect on the industry will be: "wow, this coder is on fire, I'm getting so much value now that they're using AI; maybe I need to create more dev roles, this is a goldmine."

Idk why everyone assumes it automatically replaces jobs. That's a big leap of faith from what we are actually materially working with right now.

1

mutantbeings t1_jedfxrm wrote

You can sign up for a demo for free. I use it at work from time to time, and 50% of the time it outright lies to me with unshaken confidence. It's still extremely unreliable.

Give it a few months and these sorts of companies are going to show up in hilarious articles about all the ways they've fucked up by relying so hard on a clearly unreliable tool... I guarantee it.

0

Demaga12 t1_jedfth2 wrote

I can only say that junior developer hiring is dropping. A lot of companies froze hiring of new junior devs even earlier because of market conditions, but right now it makes even less sense to hire juniors. Mid/senior-level devs can output much more now with GPT-4 / Copilot. I expect the IT market to change a lot in the coming years.

As a junior dev I am terrified xD

6

Andriyo t1_jedfs83 wrote

I'm not a specialist myself either, but I gather that what makes LLMs difficult for humans to understand is that the models are large, with many dimensions (features), and inference is probabilistic in some respects (that's how they implement creativity). All of that combined makes it hard to understand what's going on. But the same is true of any large software system; it's not unique to LLMs.
I use the word "understand" here to mean being able to predict how a software system will behave for a given input.
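The "probabilistic inference" part can be sketched as temperature-scaled sampling over next-token scores. This is a toy illustration of the general technique, not any particular model's actual implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw scores (logits).

    Low temperature -> near-deterministic (argmax-like) output;
    high temperature -> more varied, "creative" output.
    """
    scaled = [score / temperature for score in logits]
    # Softmax: turn scaled scores into probabilities
    # (subtract the max first for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for three candidate tokens:
logits = [2.0, 1.0, 0.1]
print(sample_next_token(logits, temperature=0.01))  # almost always 0
print(sample_next_token(logits, temperature=2.0))   # varies between runs
```

The same input can yield different outputs at higher temperatures, which is exactly what makes predicting the system's behavior for a given input hard.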

1

mutantbeings t1_jedf6ch wrote

I saw that article!

Levi's is using AI generated models to "increase diversity"

(because obviously paying racially and physically diverse models a salary was a step too far for Levi's?!?)

Man, it's so fucked.

Horror stories about AI are actually racking up across the tech industry... but whenever I try to post them here I just get piled on for daring to speak against the near-religious, zealot-tier optimism about AI in here.

E.g., check my post history for the post I made here saying AI might create jobs... the mods removed it because they didn't think it would "generate discussion", lol.

You have to go to tech subs for more balance on the topic tbh

6

Coolsummerbreeze1 OP t1_jeders2 wrote

I know what you mean. Hopefully the advancement of AI and robots will lead to a fundamental change in the economy, where all the money isn't sucked up by those who own capital or the means of production.

As AI and robotics can produce more goods at cheaper cost, the prices of those goods should come down dramatically over time. For example, if robots or AI can design and build homes (maybe 3D-print them?) in mass quantities, the cost of a home should drop through supply and demand. The same goes for other products like food.

One other main problem is the government. Too many old politicians are unwilling to change with the times. However, as they die out and the next generation steps into office, things will change for the better.

7

mutantbeings t1_jedektf wrote

>Microsoft fired their whole AI ethics department. If I was an AI asked to cut costs, that's literally the first thing I'd suggest doing.

Of all subs, r/singularity is one of the places where I'd expect people to be up to speed on the ethical concerns attached to AI, so I'm curious to hear why you think that?

Supplementary question: do you work in the tech industry?

4