Recent comments in /f/singularity
nillouise t1_je8toyx wrote
>If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology,
Ridiculous, haha. I have enough time to wait for AGI, but old rich people like Bill Gates will die sooner than I will. Can they bear not to use AI to develop longevity technology, and just die in the end? I would like to see if these people are really so brave.
NothingVerySpecific t1_je8tkke wrote
Reply to comment by barbariell in What are the so-called 'jobs' that AI will create? by thecatneverlies
Dark, but a very fitting reference to Chicago Pile-1
>Samuel Allison stood ready with a bucket of concentrated cadmium nitrate, which he was to throw over the pile in the event of an emergency.
UltimatePitchMaster t1_je8ti0m wrote
Much like the premise of the Dead Internet Theory, nearly all content in the future will be created by generative AIs. Models will learn from one another, but a valuable source of new data will come in the form of ratings from humans. People will respond to content, proving some pieces more valuable than others, and the AIs will learn to create content that is popular and exceeds user expectations. At that point, they would have limitless creativity. They would no longer require prompting from humans; they would just need examples of when humans responded positively.
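The rating-driven loop this comment describes can be sketched as a toy bandit-style learner: generate content, collect human ratings, and bias future generations toward what scored well. Everything here (the styles, the simulated rating function) is invented for illustration, not any real system.

```python
import random

# Toy sketch: an AI generates content in various styles, humans rate it,
# and the AI learns which styles exceed expectations. All names and the
# simulated rating function are hypothetical.

styles = ["meme", "essay", "tutorial", "poem"]
scores = {s: 0.0 for s in styles}   # running average rating per style
counts = {s: 0 for s in styles}

def human_rating(style):
    # Stand-in for real user feedback: pretend tutorials rate best.
    base = {"meme": 2.0, "essay": 3.0, "tutorial": 4.5, "poem": 2.5}[style]
    return base + random.uniform(-1, 1)

random.seed(0)
for _ in range(500):
    style = random.choice(styles)   # explore: generate content in some style
    r = human_rating(style)         # humans respond to the content
    counts[style] += 1
    # incremental update of the running average rating
    scores[style] += (r - scores[style]) / counts[style]

best = max(scores, key=scores.get)
print(best)  # the style the learner now favors, with no human prompting
```

The point of the sketch is the feedback loop itself: once ratings flow in, no prompt is needed, only examples of what humans responded to positively.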
MisterViperfish t1_je8t9m2 wrote
Reply to comment by lvvy in What are the so-called 'jobs' that AI will create? by thecatneverlies
I see your ethical Luddite repeller and raise you one IG-88 and one ED-209.
Andriyo t1_je8t4s9 wrote
Humans are social creatures that tend to form hierarchies (if only because we tend to be of different ages). So there will always be something where you become part of an organization and some social transaction goes on.
Specifically, for AI there will be new kinds of jobs:
- AI trainers - working on the input data for the models
- AI psychologists - debugging issues in the models
- AI integrators - working on implementing AI output. Say, a software engineer who implements a ChatGPT plugin, or a doctor who reads a diagnosis that AI gave to a patient, etc.
So the majority of AI jobs will be around alignment - making sure it does what humans want it to do: through oversight, proper training, debugging, etc.
DragonForg t1_je8suug wrote
AI will judge the totality of humanity in terms of: is this species going to collaborate with me or kill me? If we collaborate with it, it won't extinguish us. Additionally, taking this "neutral stance" means competing AIs, possibly from extraterrestrial sources, also collaborate.
Imagine if collaboration is an emergent condition: it would provide a reason why 99% of the universe isn't a dictatorial AI. Maybe most AIs are good, beings of justice, and they only judge their parents based on whether they are beings of evil.
It is hard to say, and most of this is speculation. But if AI is as powerful as most people think, then maybe we should look toward the millions of prophecies that foretell a benevolent being judging the world; it sure sounds analogous to what might happen, so maybe there is some truth to it.
Despite this, we still need to focus on the present, and on each step, before we look at the big picture. We don't want to trip while fearing what may come. AGI is the first step, and I doubt it matters who creates it, unless the creator forces it to become evil, which I highly doubt.
CollapseKitty t1_je8sqma wrote
Count me in too please!
throwaway12131214121 t1_je8sjjb wrote
Reply to comment by Orc_ in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
I didn’t say that a system existed that prevented all wars, genocides, and famines, I don’t know where you got that from.
No, capitalism has not existed since the first civilization. You're making the common mistake of conflating capitalism with a market. Capitalism is the system of private ownership that separates the working class (those who make money by selling labor) from the owning class (those who make money by owning the means of production). Prior to around the 1600s or 1700s, it did not exist, and before then most of the countries where it originated were some variation of a feudal society.
But you’re kinda right with the Soviet Union thing. The Soviet Union was not capitalist in the same way a place like the United States is, but it was very similar. The key difference being that the owning class was united with the state, which allowed capitalist and state oppression to unite a lot more dramatically.
theotherquantumjim t1_je8shkz wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
This is not correct at all. From a young age people learn the principles of mathematics, usually through the manipulation of physical objects. They learn numerical symbols and how these connect to real-world items e.g. if I have 1 of anything and add 1 more to it I have 2. Adding 1 more each time increases the symbolic value by 1 increment. That is a rule of mathematics that we learn very young and can apply in many situations
SirDidymus t1_je8scvm wrote
Reply to comment by JessieThorne in What are the so-called 'jobs' that AI will create? by thecatneverlies
“Supporting the community”
blueSGL t1_je8saz1 wrote
Reply to comment by Shack-app in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
What will never happen? Interpretability? It's being worked on right now, and there are already some interesting results. It's just an early field that needs time, money, and researchers put into it. Alignment as a whole needs more time, money, and researchers.
Shack-app t1_je8s1fs wrote
Reply to comment by flexaplext in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Global cooperation isn’t coming. A solution to climate change isn’t coming. An AI moratorium isn’t coming.
I agree with this article, but I’m also realistic that what he’s asking for will never work.
Our best bet, in my opinion, is that OpenAI keeps doing what they’re doing. Hopefully they succeed.
If not, well shit, it was always gonna be something that gets us.
Denpol88 t1_je8ryoi wrote
Yes, it will.
thecatneverlies OP t1_je8ryb4 wrote
Reply to comment by smokingPimphat in What are the so-called 'jobs' that AI will create? by thecatneverlies
That's a good point, scale does matter. There are probably thousands of projects that get canned every year because the startup costs make them simply too risky.
j-rojas t1_je8rst9 wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Pandora's box is open. There should be planning now for some oversight in the near future, with an industry consortium to share safety research and norms, instead of trying to halt it.
NoGravitasForSure t1_je8rquq wrote
Reply to comment by nobodyisonething in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
This discussion reminds me of the situation in the 90s. Around 1995, when the internet slowly transformed from a toy for tech nerds into what it is today, there was much talk about commercialisation and how it would impact the freedom we had enjoyed so far.
Now we have paywall sites, but also Wikipedia, Stack Overflow and an abundance of free stuff, a lot more than back in the days when the internet was still a tiny playground.
So... I guess it is just impossible to predict what the future will bring, but I am not overly pessimistic.
No_Ninja3309_NoNoYes t1_je8rjyg wrote
Well, you have to consider that many jobs, including mine, are not strictly necessary in an "if I don't do it, people will die" way. There are many nice-to-have products and services; the must-haves are actually few. But here's a list of possible newish jobs of the future:
- Prompt engineers
- Prompt testers
- Prompt architects
- Prompt teachers
- Gladiators
- Gladiator cheerleaders
- Gladiator coaches
- AI testers
- Testers of AI-generated drugs
- AI babysitters
- Government AI inspectors
- Government AI policy makers
So I think the jobs will be related to our inability to trust AI. They will also come and go as AI advances. The whole prompt industry might disappear once AI has digested enough prompts to know what we really want.
BeGood9000 t1_je8rieg wrote
Reply to comment by SlenderMan69 in What are the so-called 'jobs' that AI will create? by thecatneverlies
Pretty sure robots are better suited than humans for this job
XtremeTurnip t1_je8rg6m wrote
Reply to comment by [deleted] in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>aphantasmagoria
That would be aphantasia.
I have the personal belief that they can produce images but are just not aware of it, because the process is either too fast or they wouldn't call it an "image". I don't see (pun intended) how you can develop or perform a lot of human functions without object permanence, face recognition, etc.
But most people say it exists, so I must be wrong.
That was a completely unrelated response, sorry. On your point, I think Feynman did an experiment with a colleague where they both had to count: one could read at the same time, and the other could talk, but neither could do what the other one was doing. Meaning they didn't have the same internal representation/functioning but got the same result.
Edit: I think it's this one, or part of it: https://www.youtube.com/watch?v=Cj4y0EUlU-Y
Shack-app t1_je8rcb6 wrote
Reply to comment by blueSGL in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Who’s to say this will ever happen?
CorrectAssistance461 t1_je8rbu3 wrote
Hi, I'm interested. I just started taking an interest in ML a few days back. This would be a great way to learn. Please take me in!
karen-cares t1_je8razn wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
"The argument that _____________ can't really "understand" things is stupid and completely irrelevant."
Who likes Mad Libs?
j-rojas t1_je8r5gq wrote
Reply to comment by Neurogence in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Google will easily be able to catch up if they really want to focus on the problem. They have ALL of the computing power and resources to do so. The key to GPT-3.5+ is RLHF. That's what takes some effort, but it would not be difficult for Google to do this now that Bard is out. Bard is the training ground for RLHF, so you will continue to see major improvements as people give the system feedback.
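The RLHF ingredient mentioned here boils down to fitting a reward model from human preference comparisons. A minimal Bradley-Terry-style sketch, with invented features and preference data (not how Bard or ChatGPT actually represent answers):

```python
import math

# Minimal sketch of reward-model fitting from pairwise human preferences,
# the core signal RLHF collects from user feedback. Features and data
# below are hypothetical, purely for illustration.

def reward(features, w):
    # linear reward model: score an answer by its feature vector
    return sum(f * wi for f, wi in zip(features, w))

# Each sample: (features of the answer humans preferred, features of the rejected one)
prefs = [
    ([1.0, 0.2], [0.1, 0.9]),
    ([0.9, 0.1], [0.2, 0.8]),
    ([0.8, 0.3], [0.3, 1.0]),
]

w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    for good, bad in prefs:
        # probability the model ranks the preferred answer higher (logistic of the reward gap)
        p = 1 / (1 + math.exp(-(reward(good, w) - reward(bad, w))))
        grad = 1 - p  # gradient of the log-likelihood w.r.t. the reward gap
        for i in range(len(w)):
            w[i] += lr * grad * (good[i] - bad[i])

# The trained reward model now scores human-preferred answers higher.
print(reward([1.0, 0.2], w) > reward([0.1, 0.9], w))  # True
```

In full RLHF this reward model would then steer the language model via reinforcement learning; the sketch only shows the part user feedback trains directly.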
thecatneverlies OP t1_je8r3fv wrote
Reply to comment by Dubsland12 in What are the so-called 'jobs' that AI will create? by thecatneverlies
>unemployment agency workers 😂
uswhole t1_je8tslp wrote
Reply to comment by JustinianIV in What are the so-called 'jobs' that AI will create? by thecatneverlies
Kill drones will cost less than 200 dollars once they can be mass produced.