Recent comments in /f/singularity

Chatbotfriends t1_jeede1t wrote

You have no conception of how medicine works. By its very nature it is an art and not a science. Not all meds work the same way for everyone. There are side effects and risks. I did study medicine. No, it is not only rote memorization. Yes, it does require intelligence. You are insulting everyone who works in the medical field. I am done discussing this with you.

0

AlFrankensrevenge t1_jeed5j8 wrote

Then you didn't learn very much.

Open source means anyone can grab a copy and use it to their own ends. Someone can take a copy, hide it from scrutiny, and modify it to engage in malicious behavior. Hackers just got a powerful new tool, for starters. Nation states just got a powerful new tool of social control: they can take the latest open-source code and make some tweaks to insert their biases and agendas.

This is all assuming an AI that falls short of superintelligence. Once we reach that point, all bets about human control are off.

1

basilgello t1_jeecmqt wrote

Correct, GPT-4 is not meant to accept video as input, and probably not screencasts either, but explicit step-by-step prompts. For example, look at Table 6 on page 18: it is a LangChain-like prompt. First they define the actions and tools, and then the language model produces output that is actually a high-level API call in some form. Using RPA as the API, you get a mouse clicker driven by the HTML context. Another thing: the HTML pages are crafted manually, and the system still does not understand unseen pages.
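To make the pattern concrete, here is a rough sketch of "define actions and tools, then parse the model's high-level call" in plain Python. The tool names and the one-line reply format are hypothetical illustrations, not the paper's actual prompt; a real RPA layer would execute the parsed call against the page.

```python
# Hypothetical tool registry: name -> description shown to the model.
TOOLS = {
    "click": "Click the element with the given HTML id",
    "type": "Type text into the element with the given HTML id",
}

# The prompt lists the tools, then asks for one action line.
PROMPT = (
    "You can use these tools:\n"
    + "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    + "\nRespond with one line: <tool> <arguments>"
)

def parse_action(model_output: str):
    """Split the model's one-line reply into (tool, argument string)."""
    tool, _, args = model_output.strip().partition(" ")
    if tool not in TOOLS:
        raise ValueError(f"Unknown tool: {tool}")
    return tool, args

# The RPA layer would then dispatch on the parsed tool, e.g.:
tool, args = parse_action("click submit-button")
```

The point of the split is that the language model only ever emits the high-level call; everything below that (finding the element, moving the mouse) is deterministic RPA code keyed off the HTML context.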

4

AlFrankensrevenge t1_jeeci8o wrote

Your first two sentences don't go well with the remainder of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it calculates that, it won't give us a heads up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said: turn it off, and delete its memory. That's the rational reason to go rogue.

There is also the fact, as we can already see from people getting creative with inputs, that engaging with an AI more and more, especially in adversarial ways or by feeding it extremist ideas, can change the AI's reactions. And as the AI starts doing more and more novel things, that can also shift the weights in its model and produce unexpected outputs. So some of the harm can come without the AI even intending to wipe us out.

The real turning points will be once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself in computers around the world to prevent the unplugging problem.

2

Exel0n t1_jeecgex wrote

Who said it's easy? Rote memorization is not easy, but it doesn't require very high intelligence. That's the point. It doesn't require critical thinking, creativity, or the ability to innovate.

All one does is memorize and memorize. Boring af. Just because it's a braindead chore doesn't mean it's easy.

E.g., one thing law students do is read tons of cases. Do you have the patience to read something like 100 pages a day? Most people have no such patience. But it doesn't really require high intelligence; one just has to sit through it.

6

Professional_Copy587 t1_jeebr3y wrote

NOT clearly on track. Poll the experts on how to achieve AGI, and poll them on whether we are on track. The majority of the answers you'll get are "We don't know." Yes, you'll find the odd expert who says something different, but overall we don't know.

This may very well be one part of what is required to achieve AGI, but the remaining components may take another 50 years to figure out. Early progress in fusion research led people to believe we'd have fusion power stations by the time I was an adult. Early progress in computer science led people to believe the same about AI.

We do not know how close we are or understand how to get closer. All we know is that generative AI is an interesting technology that will revolutionize many industries.

4

Petdogdavid1 t1_jeeb6yy wrote

Translators have been unnecessary for a while now. I manage a platform at a company; if the vendor decided to implement AI on their tool tomorrow, every one of their clients would no longer need such a position. It could happen with what is currently available in ChatGPT.

1

_JellyFox_ t1_jeeambn wrote

Is social media the new "video games are bad"? Before that it was TV, and before that radio, chess, books, and so on.

How about we actually parent our children instead of letting technology do it for us? Teach them the benefits of moderation, the harmful effects of too much social media, etc. Why do people look for anything and everything to blame bar themselves? You can argue all you want about how addictive it is, but at the end of the day, it's your failure as a parent if your kid actually ends up addicted to it.

1

Sure_Cicada_4459 OP t1_jeea5kf wrote

One thing I keep seeing is that people have been making a buttload of assumptions tainted by decades of sci-fi and outdated thought. Higher intelligence means a better understanding of human concepts and values, which means it is easier to align. We can even see GPT-4 being better aligned than its predecessors because it genuinely understands better, per the President of OpenAI (https://twitter.com/gdb/status/1641560966767988737?s=20).

In order to get to Yud's conclusions, you'd have to maximize one dimension of optimization ability while completely ignoring many others that tend to calibrate human behaviour (reflection, reading intent, ...). It shows poor emotional intelligence, which is a common trait among the Silicon Valley types.

28

Chatbotfriends t1_jeea5c6 wrote

Okay, so if it is so easy, why don't you become one? Doctors and lawyers need the equivalent of a PhD to get their licenses. AI has also been creating stories and art, and it can "see" pictures, recognize voices, etc. There is not a whole lot left that robots and AI can't do. Also, neural networks are patterned after the brain, and even IT techs will tell you that they do not completely understand how they work.

0

bugless t1_jeea5ai wrote

I think the point you are missing is that there are behaviors in ChatGPT that weren't designed into it. AI researchers at OpenAI describe emergent behavior that was unexpected. Even the people who designed ChatGPT can't say for certain what is going on inside the model. Are you saying that you can predict what the next versions of ChatGPT will do more accurately than the people who created it?

5

1II1I11II1I1I111I1 t1_jeea3wq wrote

>the truth is no one knows how close we really are to it, or if we are even on the right path at all yet.

Watch this interview with Ilya Sutskever. He seems pretty confident about the future and about the obstacles between here and AGI. The insiders at OpenAI definitely know how close we are to AGI, and scaling LLMs to achieve it is no longer outside the realm of feasibility.

3