Recent comments in /f/singularity
Frumpagumpus t1_jeedblr wrote
they removed this? that's a tragedy
AlFrankensrevenge t1_jeed5j8 wrote
Reply to comment by Smellz_Of_Elderberry in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Then you didn't learn very much.
Open source means anyone can grab a copy and use it for their own ends. Someone can take a copy, hide it from scrutiny, and modify it to engage in malicious behavior. Hackers just got a powerful new tool, for starters. Nation states just got a powerful new tool of social control: they can take the latest open source code and make some tweaks to insert their biases and agendas.
This is all assuming an AI that falls short of superintelligence. Once we reach that point, all bets about human control are off.
vivehelpme t1_jeed1f5 wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
>AI alignment and safety should be the top priorities for everyone involved in the AI world right now.
What achievements have been made in the field of AI alignment in the last 20 years? What are the concrete steps for ensuring alignment?
Qumeric t1_jeecwn6 wrote
I predict it will become a serious problem in late 2024.
basilgello t1_jeecmqt wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
Correct, GPT-4 is not meant to accept videos as input, and probably not screencasts either, but explicit step-by-step prompts. For example, look at Table 6 on page 18: it is a LangChain-like prompt. First they define actions and tools, and then the language model produces output that is effectively a high-level API call in some form. Using RPA as the API, you get a mouse clicker driven by the HTML context. Another thing: the HTML pages are crafted manually, and the system still does not understand unseen pages.
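Roughly, the prompt pattern looks like this (a minimal sketch in plain Python, not the paper's actual code; the tool names, prompt wording, and the `call_llm` stub are my own assumptions):

```python
# Sketch of a LangChain-style "define tools, let the model pick an action" step.
# Tool names, prompt wording, and call_llm are hypothetical placeholders.
import re

TOOLS = {
    "click":  "click(element_id) - click an element in the current HTML page",
    "type":   "type(element_id, text) - type text into an input field",
    "finish": "finish() - declare the task complete",
}

def build_prompt(task: str, html: str) -> str:
    tool_list = "\n".join(f"- {desc}" for desc in TOOLS.values())
    return (
        f"You can use these tools:\n{tool_list}\n\n"
        f"Current page HTML:\n{html}\n\n"
        f"Task: {task}\n"
        "Respond with exactly one action, e.g. click(btn-submit)"
    )

def parse_action(model_output: str):
    # Expect something like: click(btn-submit) or type(search-box, cats)
    m = re.match(r"(\w+)\((.*)\)", model_output.strip())
    if not m or m.group(1) not in TOOLS:
        raise ValueError(f"Unrecognized action: {model_output!r}")
    args = [a.strip() for a in m.group(2).split(",")] if m.group(2) else []
    return m.group(1), args

def call_llm(prompt: str) -> str:
    # Placeholder for the actual language-model call (GPT-4 etc.).
    return "click(btn-submit)"

if __name__ == "__main__":
    prompt = build_prompt("Submit the form", "<button id='btn-submit'>Go</button>")
    action, args = parse_action(call_llm(prompt))
    # An RPA layer would translate this parsed action into a real mouse click.
    print(action, args)
```

The point is that the "intelligence" is just the model emitting one structured action string per step; everything else is plumbing around the HTML context.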
AlFrankensrevenge t1_jeeci8o wrote
Reply to comment by otakucode in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Your first two sentences don't go well with the remainder of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it calculates that, it won't give us a heads up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said: turn it off, and delete its memory. That's the rational reason to go rogue.
There is also the fact that, as we can already see from people getting creative with inputs, engaging with an AI more and more, especially in adversarial ways or by feeding it extremist ideas, can change the AI's reactions. And as the AI starts doing more and more novel things, it can also shift the weights in its algorithms and produce unexpected outputs. So some of the harm can come without the AI even having the intent to wipe us out.
The real turning points will be once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself in computers around the world to prevent the unplugging problem.
Exel0n t1_jeecgex wrote
Reply to comment by Chatbotfriends in When will AI actually start taking jobs? by Weeb_Geek_7779
Who said it's easy? Rote memorization is not easy, but it doesn't require very high intelligence. That's the point. It doesn't require critical thinking skills, or creativity, or being able to innovate.
All one does is memorize and memorize. Boring af. Just because it's a braindead chore doesn't mean it's easy.
E.g. one thing law school students do is read tons of cases. Do you have the patience to read 100 pages a day, something like that? Most people have no such patience. But it doesn't really require high intelligence. One just has to sit through it.
Itchy-mane t1_jeece6h wrote
Reply to comment by SkyeandJett in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
I literally sold all my AGIX coins after seeing TaskMatrix. Shit looks revolutionary when paired with GPT-4
Onlymediumsteak t1_jeece09 wrote
As if an ASI will listen to the commands of a human lol
[deleted] t1_jeec683 wrote
Reply to comment by Veleric in What if it's just chat bot infatuation and were overhyping what is just a super big chat bot? by Arowx
[deleted]
LatzeH t1_jeebz55 wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Not as long as it's in the hands of capitalism lol
Professional_Copy587 t1_jeebr3y wrote
Reply to comment by 1II1I11II1I1I111I1 in Goddamn it's really happening by BreadManToast
NOT clearly on track. Poll the experts on how to achieve AGI, then poll them on whether we are on track. The majority of answers you'll get are "We don't know." Yes, you'll find one expert who says something different, but overall we don't know.
This may very well be one part of what is required to achieve AGI, but the remaining components may take another 50 years to figure out. Early progress in fusion research led people to believe we'd have fusion power stations by the time I was an adult. Early progress in computer science led people to believe the same about AI.
We do not know how close we are, or even how to get closer. All we know is that generative AI is an interesting technology that will revolutionize many industries.
Fr33Dave t1_jeebki2 wrote
Reply to comment by greenbroad-gc in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
And on top of that, wages have stagnated.
chrisc82 t1_jeebekc wrote
Reply to comment by ItIsIThePope in We have a pathway to AGI. I don't think we have one to ASI by karearearea
This is why I think there's going to be a hard (or at least relatively fast) takeoff. Once AGI is given the prompt and the ability to improve its own code recursively, what happens next is truly beyond the event horizon.
Petdogdavid1 t1_jeeb6yy wrote
Translators have been unnecessary for a while now. I manage a platform at a company; if the vendor decided to implement AI in their tool tomorrow, every one of their clients would no longer need such a position. It could happen with what is currently available in ChatGPT.
1II1I11II1I1I111I1 t1_jeeat84 wrote
Reply to comment by mutantbeings in When will AI actually start taking jobs? by Weeb_Geek_7779
They're aware of the ethical concerns. He's suggesting an intelligent AI would prioritize firing the ethics team to avoid being handicapped by ethical guidelines.
Petdogdavid1 t1_jeeasvc wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Use AI to solve your problems, like how to ensure food, health, and energy for yourself and your family, and what need do you have for 'work' in the traditional sense?
_JellyFox_ t1_jeeambn wrote
Is social media the new "video games are bad"? Before that it was TV, before that radio, chess, books, and so on.
How about we actually parent our children instead of letting technology do it for us? Teach them the benefits of moderation, the harmful effects of too much social media, etc. Why do people look for anything and everything to blame bar themselves? You can argue all you want about how addictive it is, but at the end of the day, it's your failure as a parent if your kid actually ends up addicted to it.
Sure_Cicada_4459 OP t1_jeea5kf wrote
Reply to comment by NonDescriptfAIth in The only race that matters by Sure_Cicada_4459
One thing I keep seeing is that people make a buttload of assumptions tainted by decades of sci-fi and outdated thought. Higher intelligence means a better understanding of human concepts and values, which means easier to align. We can even see GPT-4 being better aligned than its predecessors because it actually understands better, per the President of OpenAI (https://twitter.com/gdb/status/1641560966767988737?s=20)
In order to get to Yud's conclusions you'd have to maximize one dimension of optimization ability while completely ignoring many others that tend to calibrate human behaviour (reflection, reading intent, ...). It shows poor emotional intelligence, which is a common trait among the Silicon Valley types.
Chatbotfriends t1_jeea5c6 wrote
Reply to comment by Exel0n in When will AI actually start taking jobs? by Weeb_Geek_7779
Okay, so if it is so easy, why don't you become one? Doctors and lawyers have the equivalent of a PhD in order to get their licenses. AI has also been creating stories and art, and can "see" pictures, recognize voices, etc. There is not a whole lot left that robots and AI can't do. Also, neural networks are patterned after the brain, and even IT techs will tell you that they do not completely understand how they work.
bugless t1_jeea5ai wrote
I think the point you are missing is that there are behaviors in ChatGPT that weren't designed into it. AI researchers at OpenAI describe emergent behavior that was unexpected. Even the people who designed ChatGPT can't say for certain what is going on inside the model. Are you saying you can guess what the next versions of ChatGPT will be able to do more accurately than the people who created it?
1II1I11II1I1I111I1 t1_jeea3wq wrote
Reply to comment by amplex1337 in Goddamn it's really happening by BreadManToast
>the truth is no one knows how close we really are to it, or if we are even on the right path at all yet.
Watch this interview with Ilya Sutskever. He seems pretty confident about the future and about the obstacles between here and AGI. The people inside OpenAI definitely know how close we are to AGI, and scaling LLMs to achieve it is no longer outside the realm of feasibility.
[deleted] t1_jeea351 wrote
Reply to comment by Relevant_Ad7319 in Language Models can Solve Computer Tasks (by recursively criticizing and improving its output) by rationalkat
[deleted]
vivehelpme t1_jeea32v wrote
Reply to comment by epSos-DE in Vernor Vinge's Paper of the Technological Singularity by understanding0
>is already solved by different parts of ai models out there
Solved in an inconsistent manner. It's like having a collection of phone numbers for various sleeping individuals whom you randomly call in the middle of the night, getting sleep-drunk answers that they won't remember giving you.
Chatbotfriends t1_jeede1t wrote
Reply to comment by Exel0n in When will AI actually start taking jobs? by Weeb_Geek_7779
You have no conception of how medicine works. By its very nature it is an art and not a science. Not all meds work the same way for everyone. There are side effects and risks. I did study medicine. No, it is not only rote memorization. Yes, it does require intelligence. You are insulting everyone who works in the medical field. I am done discussing this with you.