Recent comments in /f/singularity

nillouise t1_je8toyx wrote

>If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology,

Ridiculous, haha. I have enough time to wait for AGI, but old rich people like Bill Gates will die sooner than me. Can they bear not using AI to develop longevity technology, and just die in the end? I would like to see if these people are really so brave.

7

UltimatePitchMaster t1_je8ti0m wrote

Much like the premise of the Dead Internet Theory, nearly all content in the future will be created by generative AIs. Models will learn from one another, but a valuable source of new data will come in the form of ratings from humans. People will respond to content, proving some pieces to be more valuable than others, and the AIs will learn to create content that would be popular and exceed user expectations. At that point, they would have limitless creativity. They would no longer require prompting from humans; they would just need examples of when humans responded positively.
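The loop described above (generate, collect human ratings, favor what humans respond to) can be sketched in a few lines. This is a toy illustration, not any real system's API: `generate` and the `human_preference` table are hypothetical stand-ins for a generative model and actual human feedback.

```python
import random
from collections import defaultdict

def generate(style):
    # stand-in for a generative model conditioned on a "style"
    return f"content in style {style}"

# assumed ground-truth popularity of each style (purely illustrative)
human_preference = {"a": 0.9, "b": 0.2}

random.seed(0)
scores = defaultdict(list)
for _ in range(200):
    style = random.choice(["a", "b"])
    item = generate(style)
    # simulated thumbs up / thumbs down from a human rater
    rating = 1 if random.random() < human_preference[style] else 0
    scores[style].append(rating)

# the system now favors whichever style humans responded to most
best = max(scores, key=lambda s: sum(scores[s]) / len(scores[s]))
print(best)  # expected to be "a" given the assumed preferences
```

The point of the sketch is that no prompt is needed once feedback exists: the ratings alone steer generation toward what people respond to.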

5

Andriyo t1_je8t4s9 wrote

Humans are social creatures that tend to form hierarchies (if only because we tend to be of different ages). So there will always be something where you become part of an organization and some social transaction goes on.

Specifically, for AI there will be new kinds of jobs:

  • AI trainers - working on the input data for the models
  • AI psychologists - debugging issues in the models
  • AI integrators - working on implementing AI output. Say, a software engineer who implements a ChatGPT plugin, or a doctor who reads a diagnosis given by an AI to the patient, etc.

So the majority of AI jobs will be around alignment: making sure that it does what humans want it to do, through oversight, proper training, debugging, etc.
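As a concrete (hypothetical) example of the "AI integrator" work mentioned above: a software engineer implementing a ChatGPT plugin would start from a manifest roughly like this. Field names follow OpenAI's early-2023 plugin spec; all names, URLs, and the email are placeholders, not a real plugin.

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "Demo plugin for illustration.",
  "description_for_model": "Use this plugin to look up example data.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "dev@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Most of the integrator's actual work is then in the OpenAPI spec the manifest points at, which tells the model what the plugin's endpoints do.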

1

DragonForg t1_je8suug wrote

AI will judge the totality of humanity in terms of: is this species going to collaborate or kill me? If we collaborate with it, then it won't extinguish us. Additionally, taking this "neutral stance" means competing AIs, possibly from extraterrestrial sources, also collaborate.

Imagine: if collaboration is an emergent condition, it would provide a reason why 99% of the universe isn't a dictatorial AI. Maybe most AIs are good, beings of justice, and they only judge their parents based on whether they are beings of evil.

It is hard to say, and most of this is speculation. But if AI is as powerful as most people think, then maybe we should be looking toward the millions of prophecies that foretell a benevolent being judging the world. It sure does sound analogous to what might happen, so maybe there is some truth to it.

Despite this, we still need to focus on the present, and on each step, before we look at the big picture. We don't want to trip over fear of what may come. AGI is the first step, and I doubt it matters who creates it, unless whoever creates it forces it to become evil, which I highly doubt.

1

throwaway12131214121 t1_je8sjjb wrote

I didn’t say that a system existed that prevented all wars, genocides, and famines; I don’t know where you got that from.

No, capitalism has not existed since the first civilization. You’re making the common mistake of conflating capitalism with a market. Capitalism is the system of private ownership that separates the working class, those who make money by selling labor, from the owning class, those who make money by owning the means of production. Prior to around the 1600s or 1700s, it did not exist, and before then most of the countries where it originated were some variation of a feudal society.

But you’re kinda right with the Soviet Union thing. The Soviet Union was not capitalist in the same way a place like the United States is, but it was very similar. The key difference being that the owning class was united with the state, which allowed capitalist and state oppression to unite a lot more dramatically.

1

theotherquantumjim t1_je8shkz wrote

This is not correct at all. From a young age people learn the principles of mathematics, usually through the manipulation of physical objects. They learn numerical symbols and how these connect to real-world items, e.g. if I have 1 of anything and add 1 more to it, I have 2. Adding 1 more each time increases the symbolic value by 1 increment. That is a rule of mathematics that we learn very young and can apply in many situations.
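The "adding 1 more each time" rule above is just repeated application of a successor operation; a trivial sketch, with names of my own choosing:

```python
# Counting as repeated application of "add 1 more" (a successor function):
# each application increases the symbolic value by one increment.
def successor(n):
    return n + 1

count = 0
for _ in range(3):  # add 1 of anything, three times
    count = successor(count)

print(count)  # 3: three applications of "add 1" starting from 0
```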

4

Shack-app t1_je8s1fs wrote

Global cooperation isn’t coming. A solution to climate change isn’t coming. An AI moratorium isn’t coming.

I agree with this article, but I’m also realistic that what he’s asking for will never work.

Our best bet, in my opinion, is that OpenAI keeps doing what they’re doing. Hopefully they succeed.

If not, well shit, it was always gonna be something that gets us.

5

NoGravitasForSure t1_je8rquq wrote

This discussion reminds me of the situation in the 90s. Around 1995, when the internet slowly transformed from a toy for tech nerds into what it is today, there was much talk about commercialisation and how this would impact the freedom we had enjoyed so far.

Now we have paywall sites, but also Wikipedia, Stack Overflow and an abundance of free stuff, a lot more than back in the day when the internet was still a tiny playground.

So ... I guess it is just impossible to predict what the future will bring, but I am not overly pessimistic.

4

No_Ninja3309_NoNoYes t1_je8rjyg wrote

Well, you have to consider the fact that many jobs, including mine, are not strictly necessary in an "if I don't do it, people will die" way. There are many nice-to-have products and services. The must-haves are actually few. But here's a list of possible newish jobs of the future:

  1. Prompt engineers

  2. Prompt testers

  3. Prompt architects

  4. Prompt teachers

  5. Gladiators

  6. Gladiator cheerleaders

  7. Gladiator coaches

  8. AI testers

  9. Testers of AI-generated drugs

  10. AI babysitters

  11. Government AI inspectors

  12. Government AI policy makers

So I think that the jobs will be related to our inability to trust AI. And they will also come and go as AI advances. The whole prompt industry might disappear once AI has digested enough prompts to know what we really want.

1

XtremeTurnip t1_je8rg6m wrote

>aphantasmagoria

That would be aphantasia.

I have the personal belief that they can produce images but are just not aware of it, because the process is either too fast or they wouldn't call it an "image". I don't see (pun intended) how you can develop or perform a lot of human functions without: object permanence, face recognition, etc.

But most people say it exists, so I must be wrong.

That was a completely unrelated response, sorry. On your point: I think Feynman did an experiment with a colleague where they had to count, and one could read at the same time while the other one could talk, or something like that, but neither could do what the other one was doing. Meaning that they didn't have the same representation/functioning but got the same result.

Edit: I think it's this one, or part of it: https://www.youtube.com/watch?v=Cj4y0EUlU-Y

7

j-rojas t1_je8r5gq wrote

Google will easily be able to catch up if they really want to focus on the problem. They have ALL of the computing power and resources to do so. The key to GPT-3.5+ is RLHF. That's what takes some effort, but it would not be difficult for Google to do this now that Bard is out. Bard is the training ground for RLHF, so you will continue to see major improvements as people give the system feedback.
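A minimal sketch of the RLHF ingredient mentioned above: fitting a reward model from human pairwise preferences (Bradley-Terry style). The length feature and the feedback pairs are toy stand-ins I made up, not anything from Bard or GPT; real systems use a neural reward model over (prompt, response) pairs.

```python
import math

w = 0.0  # one-parameter reward model: reward(x) = w * feature(x)

def feature(response):
    return len(response)  # toy feature: raters here "prefer" longer answers

# simulated human feedback: (preferred, rejected) response pairs
pairs = [("a detailed helpful answer", "meh"),
         ("thorough explanation", "no"),
         ("step by step reasoning", "idk")] * 50

lr = 0.01
for preferred, rejected in pairs:
    # Bradley-Terry probability that the model agrees with the human label
    margin = w * (feature(preferred) - feature(rejected))
    p = 1.0 / (1.0 + math.exp(-margin))
    # gradient ascent on the log-likelihood of the human preference
    w += lr * (1.0 - p) * (feature(preferred) - feature(rejected))

# after training, the reward model ranks the preferred answers higher
print(w > 0)  # True: the model learned the raters' preference
```

In full RLHF the learned reward model is then used to fine-tune the generator (e.g. with PPO); the human-feedback collection step is the part Bard gives Google for free.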

1