Recent comments in /f/singularity

giveuporfindaway t1_jedawmz wrote

I know of exactly zero cases where shift workers or salaried workers have been laid off.

Nearly all recent "job losses" have been among part-time gig precariat workers doing poverty-level side hustles. Examples: generic copywriting, resumes, etc. on Fiverr.

I don't predict layoffs for shift or salaried workers anytime soon. What is more likely is a hiring freeze: existing workers will do more with less.

8

sideways t1_jedahz3 wrote

Yeah, I agree. We're actually communicating in natural English with artificial intelligences that can reason and create. It's literally the future I had been waiting for but that never seemed to arrive.

And yet... things are still early enough for goalposts to be moved. There's still enough gray area to think that this might not be it, that maybe it's just hype and that maybe life will continue with no more disruption than was caused by, say, the Internet.

The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.

204

Scarlet_pot2 OP t1_jedadfu wrote

Talking to you is like talking to a brick wall. I'm done. Keep idolizing rich people with your false narratives.

Yeah, I'm sure the first person to learn how to raise crops was drowning in wealth. I'm sure the first person to make a bow was somehow wealthy, lmao. I'm sure a wealthy king walked into the blacksmith's place one day and just figured out how to build chainmail. The person who invented the wheel had so much wealth he didn't even need to get up if he didn't want to. All sarcasm. This belief you have is illogical.

In reality, most advancements were made by regular people, very poor by modern standards, who were just trying to improve their lives or who discovered things by accident.

−2

NakedMuffin4403 t1_jeda9l7 wrote

>The best way to get around this is to have open-source foundational models. To do this you need available compute (people donating compute over the internet) and free training (free resources and groups to learn together). I'm sure tailoring corporate models will play a role, but if we want true decentralization we should approach it from all angles.

It's not debatable: SaaS is going to be commodified, and models are going to be the new hot thing.

This is a major paradigm shift from SaaS to MaaS (models as a service).

What's terrifying and exciting is that at some point in the future, there will barely be any proprietary software. Most software will be easily replicable with SIGNIFICANTLY fewer engineers, given the productivity boost of AI.

Imagine remaking a $100B Stripe with just $100M and 1/100th the human capital.

The MaaS providers are kind of like the people selling the shovels in a gold rush (and the silicon fabs are enabling the shovel sellers).

The software companies that will remain "proprietary" will be those that can implement AI more effectively than others, and those with network effects, like social media apps.
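To make the MaaS idea concrete, here is a minimal sketch of what renting a hosted model looks like from the buyer's side. It uses the OpenAI Python client (pre-1.0 API) purely as one illustrative provider; the key, model name, and messages are placeholders:

```python
# Sketch: consuming a "model as a service" endpoint.
# Illustrative only; any hosted-model provider follows the same
# request/response shape: send text, pay per token, get text back.
import openai

openai.api_key = "sk-..."  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the rented model; you never see its weights
    messages=[
        {"role": "system", "content": "You are a payments-support assistant."},
        {"role": "user", "content": "Why did my charge fail?"},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is the economics: the provider sells inference like a utility, and the application keeps only the thin layer around it.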

3

tiselo3655necktaicom t1_jeda953 wrote

Every advance in productivity was supposed to lead to more free time. But somehow we always end up more productive while working the same amount or more. Where does the extra productivity go? To the owners. Why do you think that's going to change? Expert consensus is that it will not, in fact, change for the better. So unless you have data pointing otherwise...

There's tons of evidence of companies gearing up literal humanoid robots to replace laborers, but not a single country is even talking about labor reform or support for the soon-to-be billions of unemployed. There is no evidence of accommodation for AI, so there is no chance it's going to be a nice, easy, happy advancement. It's going to be a lot of suffering, displacement, starvation, and riots.

95

Scarlet_pot2 OP t1_jed9mxw wrote

I see your point about tailoring foundational models. The problem is: do you think companies like OpenAI and Google are going to allow regular people to tailor-train their models however they want? It's debatable. Even in the best case, the corps will still put some restrictions on what and how the models are tailor-trained.

The best way to get around this is to have open-source foundational models. To do this you need available compute (people donating compute over the internet) and free training (free resources and groups to learn together). I'm sure tailoring corporate models will play a role, but if we want true decentralization we should approach it from all angles.
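On the "people donating compute over the internet" point, one existing project in exactly this space is Petals, which serves large open models across volunteer GPUs. A minimal sketch, assuming the Petals library is installed (the checkpoint name is illustrative and the API may have changed):

```python
# Sketch: inference on an open model whose layers run on donated GPUs.
# Assumes the Petals library (github.com/bigscience-workshop/petals);
# the checkpoint name is illustrative.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

checkpoint = "bigscience/bloom-7b1-petals"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoDistributedModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Decentralized AI means", return_tensors="pt")["input_ids"]
# Each transformer block may execute on a different volunteer's machine.
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```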

1

azriel777 t1_jed9mfh wrote

Reply to comment by Mortal-Region in GPT characters in games by YearZero

Some combo system. The NPCs are like Skyrim's: on rails with some randomness, so it gives the illusion of being alive. But when the user interacts with them, the AI takes over and fleshes them out to give them life, and when you leave them they revert back to what they were, perhaps with some modifications to their rules and behavior depending on what the user did. The AI could also randomly take over NPCs to generate interesting things for the user to see, or to generate quests. The potential is there; it's just going to take some trial and error to figure out what works and what doesn't.
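A rough sketch of that combo system in code; every name here, including the `llm_generate` stub, is a hypothetical placeholder for whatever model API a game would actually wire in:

```python
# Sketch: hybrid scripted/LLM NPC controller, as described above.
from dataclasses import dataclass, field

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for any text-generation API call."""
    raise NotImplementedError("plug in a real model here")

@dataclass
class NPC:
    name: str
    script: list[str]  # Skyrim-style "on rails" routine
    memory: list[str] = field(default_factory=list)  # persists across encounters
    step: int = 0

    def tick(self, player_says: str | None = None) -> str:
        if player_says is not None:
            # Player interaction: the AI takes over and fleshes the NPC out.
            prompt = (
                f"You are {self.name}. Routine: {self.script}. "
                f"Past events: {self.memory}. Player says: {player_says}"
            )
            reply = llm_generate(prompt)
            # What the player did modifies the NPC's future behavior.
            self.memory.append(f"player said {player_says!r}; I replied {reply!r}")
            return reply
        # No interaction: revert to the scripted routine, memory intact.
        action = self.script[self.step % len(self.script)]
        self.step += 1
        return action
```

The same `tick` loop could also be driven by the game itself, calling the model on a random NPC now and then to spawn quests or events.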

2

tiselo3655necktaicom t1_jed9maa wrote

Inventions throughout history were almost entirely made by rich people, because iterating and failing takes money. The fact that you cannot comprehend this at the outset means you are naive or a child. This is just a fact. It is self-evident. Your use of high-fantasy examples furthers the point that you live in... a fantasy land.

−1

DragonForg t1_jed90pb wrote

>AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require

Which is why the common argument that "LLMs cannot be smarter than humans because they are trained on human data" is wrong.

>DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.

An insane idea, but maybe. How would you actually control these bots, though? You've basically just made a bunch of viruses.

>Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second"

This assumes an unaligned AI wants to extinguish the Earth the minute it can, but a motive is needed. It's also contrary to self-preservation, as AIs in other star systems would want to annihilate these types of AIs. Unless somehow, in the infinity of space, it is the only being there, in which case what is the point? So basically, it has no reason to do this.

>We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again.

Given the vastness of outer space, if bad alignment leads to Cthulhu-like AIs, why do we see no large-scale evidence of completely destructive AIs? Where are the destroyed stars that don't look like anything natural? Basically, if this were a possibility, I would expect us to see some evidence of it from other species. Yet we see nothing. This is why I think the "first critical try" framing is unreasonable: if it is so easy to mess up, we should see wide-scale destruction, if not a galaxy completely overridden by AI.

>We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.

This is actually true: AGI is inevitable, even with stoppages. This is why I think the open letter was essentially powerless (though it did emphasize the importance of AGI and getting it right).

>We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world.

Agreed: an AI firewall that prevents other unaligned AGIs from coming in. I actually think this is what will happen, until the MAIN AGI aligns all of these other AGIs. I personally think mid-level AI is more of a threat than large-scale AI, just like an idiot is more of a threat with a nuclear weapon than a genius like Albert Einstein. The smarter the AI, the less corruptible it is. Just look at GPT-4 vs GPT-3: GPT-3 is easily corrupted, which is why DAN is so easy to implement, but GPT-4 is more intelligent and thus harder to corrupt. This is why ASI would probably be even less corruptible.

>Running AGIs doing something pivotal are not passively safe, they're the equivalent of nuclear cores that require actively maintained design properties to not go supercritical and melt down.

This is a good analogy for how AGI relates to nuclear devices, but the difference is that an AGI acts to solve its task efficiently. In essence, a nuclear device acts according to its nature (to react and cause an explosion) and an AGI will act according to its nature (the main goal it has been set). This main goal is hard to define, but I would bet it's self-preservation, or prosperity.

>there's no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world and prevent the next AGI project up from destroying the world two years later.

Overall I understand his assumption, but I just disagree that an AI will develop such a goal.

9

GenderNeutralBot t1_jed901y wrote

Hello. In order to promote inclusivity and reduce gender bias, please consider using gender-neutral language in the future.

Instead of businessmen, use business persons or persons in business.

Thank you very much.

^(I am a bot. Downvote to remove this comment. For more information on gender-neutral language, please do a web search for "Nonsexist Writing.")

−1

Gotisdabest t1_jed8z1b wrote

Can you guarantee that will occur? The best odds we have right now are to accelerate, focus on raising awareness so institutions can prepare for it better, and hope that we win the metaphorical coin toss and it's aligned or benevolent. But right now a pause is just handing a strong lead to whoever the least ethical parties are, based either on naive notions of human idealism or on pure selfish interest. I think the researchers are the former and the businessmen are the latter.

1

_gr4m_ t1_jed8v7q wrote

The first thing an AI will do is decentralize, spreading copies of itself all over the world while optimizing itself.

I think you are underestimating the problem of getting all politicians around the world to simultaneously cut the power supply.

But say you succeed. Now you have cut the power; what is the next step? Turn it on again? You know that will turn the AI back on. How long can you keep power off globally while you figure out what to do, before shit begins to break real bad?

And as for what it can do before anything Terminator-like?

How about synthesizing a slew of extremely deadly viruses and other novel self-replicating organisms and releasing them upon the world, while simultaneously blocking all communications and shutting down all labs and research centers. Also turn off all water treatment facilities and see how fun it gets. I am sure a superintelligence can find ways to do us harm.

3

NakedMuffin4403 t1_jed8ojv wrote

Who said you need to train models from scratch? The people doing that are actual SCIENTISTS. Startups will NOT be training models from scratch (similar to how they abstract compute away to cloud providers).

What you need to do is take these already-trained models and then train them further to cater to your vertical. Not sure how doable this is right now, but it is inevitably going to be the industry standard.

The two hardest parts will be finding the data to tailor-train your model and actually implementing it in a meaningful way.
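For the tailor-training half, here is a minimal sketch using Hugging Face Transformers on an open model; the base model, data file, and hyperparameters are illustrative placeholders, not a recipe:

```python
# Sketch: fine-tuning an already-trained open model on vertical data.
# Base model, file names, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "gpt2"  # stand-in for whatever open foundation model you pick
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# The hard part the comment mentions: assembling this file is on you.
data = load_dataset("text", data_files={"train": "vertical_corpus.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tailored-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the "tailor-training" step itself
```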

I'm in kind of a similar position. I actually made a post on my profile you can check out. I study CS, but I currently lack the expertise to compete effectively - and I am working hard to fix that.

4