AdditionalPizza

AdditionalPizza OP t1_it5f0tw wrote

Hopefully it happens quickly. Some people seem to want to hold onto jobs for as long as possible, but I'd rather most jobs go quickly than slowly and painfully.

If it goes too slowly, policies will lag way too far behind to ever get ahead of it.

12

AdditionalPizza OP t1_it5erg0 wrote

Oh I gotcha.

I know what you mean, but I disagree to an extent. There aren't a ton of terms for this stuff, really. It'd be confusing if there were, but the ones I know of are pretty useful and will become much more commonly used.

Transformative AI is exactly that: AI that is transformative. It will make huge changes in the near future. We need a way to describe AI that's more transformative than Siri but not at the level of AGI. The stuff that automates white-collar workers' jobs.

Proto-AGI is important because there will almost certainly be claims of AGI that aren't full AGI, and that needs to be distinguished somehow. It basically means beta AGI. The arguments for proto-AGI will most likely start with some LLMs soon.

But yeah, I feel you.

2

AdditionalPizza OP t1_it4ygqu wrote

I can see the argument here for sure. But it's not up to general society. Corporations will do this first. Think of nearly all support chat and calls as a start. When you call now, you get a shitty robot you have to push buttons to get through, or a chat you have to fight past to reach a human. Those would be replaceable today, and it would save enormous amounts of money. All it takes is a small LLM a corporation could train on their products/services.
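
Just to sketch how little it takes (the model name and the faq.txt file here are assumptions on my part, not anything a specific company has deployed), something this crude already beats the button-mashing phone tree:

```python
# minimal sketch of a support bot grounded in a company's own docs,
# using a 2022-era completion API; file and model names are illustrative
import openai

PRODUCT_FAQ = open("faq.txt").read()  # hypothetical dump of product/service info

def answer(question: str) -> str:
    prompt = f"Support docs:\n{PRODUCT_FAQ}\n\nCustomer: {question}\nAgent:"
    resp = openai.Completion.create(
        model="text-davinci-002",  # assumed model choice
        prompt=prompt,
        max_tokens=200,
        temperature=0.2,
    )
    return resp.choices[0].text.strip()
```

A real deployment would presumably fine-tune on actual support transcripts rather than stuffing a FAQ into the prompt, but the point stands: the pieces exist today.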

Decision makers who see the dollar signs absolutely will. They outsource production overseas at inferior quality because they don't care. They shrink consumable product sizes and charge more money for them because they don't care. When their quarterly profits go up, they don't care how the customer feels.

4

AdditionalPizza OP t1_it4x6zl wrote

Do you think LLMs have zero programming involved?

If I'm not making sense to you, it's because you don't want to make sense of it in the first place.

LLMs will help develop and train new LLMs, soon if not already. Whether that's direct or indirect doesn't even matter at this point, but it will be direct in the near future.

7

AdditionalPizza OP t1_it4w71z wrote

>so long as the courts don't recognize AI legal advice and the public feels more comfortable getting a real lawyer, a good AI lawyer program won't make a big impact.

That's the same point everyone misunderstands. Transformative AI != full automation right from the start.

It will replace the lawyers' law clerks. How many law clerks can say "well, I'll just use my skills and become a lawyer" though? Very few. They will be unemployed. This will happen across all industries, rapidly, and more advanced versions will come out faster and faster.

We have LLMs that can nearly do this, released earlier in the year. There will probably be pushback, don't get me wrong. But the lawyers who choose productivity and money over employing the people below them will take on more cases, earn more money, get better advice, and choose better clients to win more cases.

9

AdditionalPizza OP t1_it4v3ys wrote

Coding productivity is a bottleneck for every IT industry. But that's not the point.

LLMs will target these industries, and LLMs are written by programmers. Programmers who can write code and design LLMs more efficiently will make better LLMs.

LLMs that can help design better LLMs, which in turn are targeted at boosting productivity in every other sector.

11

AdditionalPizza OP t1_it4uiwn wrote

I'm assuming you didn't get the gist of the post then. I'm not talking about full-dive VR and nanobots building dreams.

I'm talking about office work, research, and programming being disrupted after 2025 and before AGI. Every industry that involves IT will be affected, and productivity in those sectors will skyrocket. This will inevitably lead to low-skill layoffs at first, and echo up the chain of command.

24

AdditionalPizza OP t1_it4tuqe wrote

That's the sort of thing I expect to start happening in (or around) 2025, followed by new industries within the scope of LLMs. And these LLMs will all be much more impressive than the ones of 2020-2022.

4

AdditionalPizza OP t1_it4tj17 wrote

Between now and 2025 I think we will have 5 years of progress (by 2020 standards). I know that's a weird way of putting it, but I think that's how our attempts at exponential thinking go. If I were talking to someone in the general public, I'd say 10 years of progress (by 2015 standards) between now and 2025.

It will be progress with LLMs, so it will be very exciting. But yes, if I'm right, I hope we are more conscious of the consequences.

8

AdditionalPizza OP t1_it4skby wrote

I think that's not quite right. In the post I'm not even talking about AGI/ASI as being the Transformative AI. It's too speculative to comment on something like ASI remaining contained or whatever.

But while I agree the bottleneck is production and distribution, software is easily distributed. We don't need labour jobs being taken over by robots right away. Programmers, accountants, lawyers, researchers, any intellectual career: these can all be disrupted very easily. And I'm not talking full automation either. I'm talking a tipping point that forces policies and governments to change. Transforming society.

An AI that increases efficiency in robotics, distribution logistics, or production techniques? Each of those is an overnight email to swathes of employees being laid off. It will happen more and more frequently. I believe it will start soon: the tech that really automates significant portions of jobs, leading to layoffs, will be created by 2025, and after 2025 the dominos will fall. That's what I predict, anyway.

We don't need AGI to disrupt everything. I don't think governments and policy makers will catch it in time either.


>Dali is cool but until it is used widely in commercial applications

Text-to-image AI is already being used commercially. Like, a lot. Photoshop will soon be mostly replaced by AI image editing as well.

16

AdditionalPizza OP t1_it4l93q wrote

And I think it will happen at a rate faster than people are currently projecting. Assuming we need AGI and "2029" is nonsense. So many more jobs can be replaced within a generation or two of LLMs.

I'm not even trying to be optimistic; it might kind of suck for a lot of us. It's like pushing a stalled car off a railroad crossing with a train oncoming. It appears to be moving slowly until it doesn't.

12

AdditionalPizza OP t1_it4ke0n wrote

Every single target that LLMs have had in their scope so far starts out slow and then becomes useful to the general public and private sectors. A ton of people use Copilot, what are you talking about? And Copilot is powered by Codex, and Codex is being updated with self-correction and testing. It's a matter of time at this point.
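
For anyone who hasn't tried it, this is roughly the level of "programming" Codex already responds to (the prompt is just an example I made up; code-davinci-002 was the Codex model exposed through the 2022 API):

```python
# rough sketch of a raw Codex completion call, the same model family
# behind Copilot-style autocomplete; the prompt is an arbitrary example
import openai

resp = openai.Completion.create(
    model="code-davinci-002",  # 2022 Codex model
    prompt="# Python\n# return the nth Fibonacci number\ndef fib(n):",
    max_tokens=64,
    temperature=0,
)
print(resp.choices[0].text)
```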

19

AdditionalPizza OP t1_it4jut4 wrote

>However, if the same goals are exponential in difficulty then an exponential growth could just be linear.

I agree with you here, and that's part of what I'm saying in the post. Increasing the efficiency of programmers through AI like Codex increases the growth rate of all sectors across the board.


>If a set of goals are linear in difficulty then exponential growth will get us to those goals in exponentially lower times

Maybe you can explain this better, but this makes no logical sense to me assuming the starting point is the same.

5

AdditionalPizza OP t1_it49ija wrote

I think things that are close enough to AGI in almost every aspect will cause large-scale disruptions to society and humanity. AGI will probably be claimed before true, full AGI is developed, and at that point it probably won't matter whether something is fully AGI or not. I think these proto-AGIs will come much sooner than we'll be augmenting ourselves. 5 years maybe. Possibly 3 or 4. My answer will probably change in 6 months to a year.

24

AdditionalPizza OP t1_it48wtq wrote

Yes, to be clear: I'm saying between now and 2025 is the start of Transformative AI. By then it will be at the point where it's ready to start making disruptions. From 2025 onward, society will begin to feel those effects through large-scale automation and such.

---

edit: I want to clarify this line too, as I don't think I explained it well in the post

>In 2025, what feels like 5 years TODAY (2022) will be 1.25 years.

Right now, 2022, we base our expectations of the rate of progression on the past. So 5 years of progress would be 2017-2022.

2022 - 2025 will be the next 5 years of progress condensed into ~2.5 years.

In 2025, the next 5 years of progress will take place within 1.25 years, relative to the exponential rate from 2017-2022.

We base our predictions off the past.
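
If it helps, here's the same compression as a toy calculation (the doubling rate is my assumption, just to make the numbers concrete):

```python
# toy sketch: assume the rate of progress doubles every ~2.5 years,
# so a fixed "5 years of 2017-2022-rate progress" keeps taking half as long
baseline_span = 5.0  # years of progress at the 2017-2022 rate

for start_year, rate_multiplier in [(2022, 2), (2025, 4)]:
    years_needed = baseline_span / rate_multiplier
    print(f"from {start_year}: {baseline_span:g} years of progress in ~{years_needed:g} years")

# from 2022: 5 years of progress in ~2.5 years
# from 2025: 5 years of progress in ~1.25 years
```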

33

AdditionalPizza t1_iszdu3b wrote

I should've mentioned the AI effect isn't my theory, it's just one part of my theory haha.

>I wondered why the Gato paper (from what I read of it) didn't try any cross domain exercises. e.g. get a robot arm to play an atari game.

I believe this is being done by some project at Google, and likely others. I'm not sure specifically why they didn't do it with Gato back when it came out, but it is definitely being done with other models.

1

AdditionalPizza t1_isyv3gv wrote

I have a bit of a theory on this actually. It's a combination of a couple of things. The AI effect is the most obvious: people say AI can't do something, and when it does, they dismiss it because it's "just computer calculations." A moving goalpost of sorts.

Another reason is that it's still in its infancy. Yes, if you know the proper search terms for specific AI you can find some stuff. But go ask a random person if they know what Codex is, or Chinchilla. If you don't follow AI and tech closely, you probably won't have heard of them unless someone you know is very interested and talks about it. Even then, I have some friends I talk about this stuff with, but they aren't super interested, so they don't go and look into things much.

The last reason, some people might think it's borderline a conspiracy theory, but hear me out. Big tech companies and the professionals closest to the creation of AI are well aware of how the general public would react to "hey, check out this AI, only a few more steps until it obliterates your usefulness at your current job," so they actively champion this stuff as a tool to help people be productive. They're navigating everything by treading lightly until they're ultimately at the point of releasing some transformative AI, and then there's no going back. No policies can be made quickly enough to keep up with the advances and hold them back. The last thing tech companies want at this point is to be stifled on the road to AGI by policy makers trying to save jobs. If they can get to the point of being able to bring down enough sectors quickly, it will be too late to do anything about it.

We're talking about the most brilliant minds in the world, the ones in charge of aligning AI properly. Of course they have to set everything up before they can go for the spike.

2

AdditionalPizza t1_isycb3u wrote

Just saw this post now; it was posted the same day as my post here asking specifically about programming. A lot of the answers seem to suggest there's nothing to worry about within the decade, or longer.

I was basically asking whether it's over for anyone looking to get into an entry-level career in programming/web dev. Like, is it worth starting to learn it now to find a career in it a few years from now? I made the comparison to graphic designers, who weren't worried and then were suddenly outraged, while programmers say their work is much more difficult for an AI to do.

I think this will happen to everyone though, because people take pride in their careers, and the ability of AI to do the work more efficiently hits a nerve and creates denial. This is going to happen in every sector.

Personally, I think programming should be the main focus. Automating it, especially fully automating it, will accelerate every other sector. The faster the transition, the greater the sense of urgency placed on governments, and the softer the landing as we move to some form of UBI or whatever this revolution brings. I think there are some very intelligent people who have planned this to reduce suffering among the population. I hope so anyway, because governments will drag their feet to avoid making decisions.

3

AdditionalPizza OP t1_istidql wrote

Hmm, I respect your opinion on this, but I do disagree on some of it.

Needing someone "specialized" in writing English text prompts (very likely voice-to-text soon, with Whisper) is exactly what this tech is conquering.

Text-to-image is different, but only kind of. You can just be as specific as you want. You can put "a castle on a hill with a dragon," or you can put "a medieval European style castle with flying buttresses extending down from a walkway above a sprawling fortified wooden gate, situated on top of a grassy hill covered in poppies, surrounded by a dark blue moat, with a fire breathing dragon that has golden scales shimmering in rays of sunlight peeking through dark billowing storm clouds," and then from there further edit details to get exactly what you want.
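
To make that concrete, here's roughly what the voice-prompt-to-image loop could look like today (file names and model choices are assumptions on my part; Whisper and Stable Diffusion are both publicly available):

```python
# minimal sketch: speak a description, transcribe it with Whisper,
# feed the text straight into Stable Diffusion via the diffusers library
import whisper
from diffusers import StableDiffusionPipeline

description = whisper.load_model("base").transcribe("spoken_prompt.wav")["text"]

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(description).images[0]  # generate an image from the spoken prompt
image.save("castle.png")
```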

With upcoming voice-to-text and, an assumption on my part, a friendly user interface, I can't see why making the front end of a website exactly how you want it would be any different. I'm not very familiar with backend stuff, so I'll admit I can't speak much to that, or to security and such.

I really have no idea; none of us do at this point. But I just imagine whoever is telling the programmer what they want could describe it to the AI just as easily and get near-real-time results.

>Also, who is creating these AI systems? Programmers!

I realize this, but that's far from an entry-level position that's easily expendable. I'm focused on those who are beginning a career more than on those who are in the industry and highly skilled.

2