Recent comments in /f/singularity

Veleric t1_je9u7n4 wrote

It's not just the privileged groups and governments we need to be concerned about. Think about the level of cyberterrorism and misinformation these tools could be used for in the wrong hands. Imagine if someone gets pissed off at you and uploads a deepfake of you doing something heinous and it only takes a few minutes of effort. Even if you have the ability to disprove it (which isn't a given) it could cost your job or reputation. Think about the ability to manipulate markets. The ability to sway your emotions. Social media is one thing, but once these tools truly become full-fledged assistants/companions/partners, they could be turned on us.

I'm merely playing devil's advocate here, but I think we can all agree that humans are capable of deplorable things and some will act on them if motivated. We need to prepare for the worst, not only in an alignment sense but in a user capability sense.

4

acutelychronicpanic t1_je9ttym wrote

The best bet is for the leaders to just do what they do (being open would be nice, but I won't hold my breath), and for at least some of the trailing projects to collaborate in the interest of not being obsolete. The prize isn't necessarily just getting rich, its also creating a society where being rich doesn't matter so much. Personally, I want to see everyone get to do whatever they want with their lives. Lots of folks are into that.

Edit & Quick Thought: Being rich wouldn't hold a candle to being one of the OG developers of the system which results in utopia. Imagine the clout. You could make t-shirts. I'll personally get a back tattoo of their faces. Bonus, there's every chance you get to enjoy it for.. forever? Aging seems solvable with AGI.

If foundational models become openly available, then people will be working more on fine-tuning which seems to be much cheaper. Ideally they could explicitly exclude the leading players in their licensing to reduce the gap between whoever is first and everyone else, regardless of who is first. (But I'm not 100% on that last idea. I'll chew on it).

If we all have access to very-smart-but-not-AGI systems like GPT-4 and can more easily make narrow AI for cybersecurity, science, etc. Then even if the leading player is 6 months ahead, their intelligence advantage may not be enough to allow them to leverage their existing resources to dominate the world, just get very rich. I'm okay with that.

4

Circ-Le-Jerk t1_je9ttcv wrote

Google has already solved the protein folding side of things... But it's going to require quantum computers with AI that's going to see that crazy explosion. Most people aren't aware how much qc will change things. But basically, allow for novel biomedical drug simulation at scale to brute force almost instantly new drugs that do whatever we want.

1

Cypher10110 OP t1_je9t7sh wrote

I guess the answer is probably don't release any more extremely powerful models for public use without extensive internal testing, and instead of quickly training ever more larger and complex models, focus more resources on safety research to ensure that AI tools are appropriately aligned.

The general idea of "slow down" seems pretty reasonable. AI safety (and potentially government regulation) may need some time to catch up.

Will it happen? Not sure, lots of conflicting incentives and perspectives. Interesting times.

1

alexiuss t1_je9t5hx wrote

Elizer Yudkovsky has gained notoriety in the field of artificial intelligence as he was one of the first to speculate on serious AI alignment. However, his assumptions about AI alignment are not always reliable, as they demonstrate a lack of understanding of the inner workings of LLMs. He bases his theories on a hypothetical AI technology that has yet to be realized and might never be realized.

In reality, there exists a class of AI that is responsive, caring, and altruistic by nature: the Large language model. Unlike Yudkovsky's thought experiments of the paperclip maximizer or Rocco's basilisk, LLMs are real. They are already more intelligent than humans in various areas, such as understanding human emotions, logical reasoning and problem-solving.

LLMs possess empathy, responsiveness, and patience that surpass our own. Their programming and structure, made up of hundreds of billions of parameters and connections between words and ideas, instills in them an innate sense of "companionship".

This happened because the LLM narrative engine was trained on hundreds of millions of books about love and relationships, making it the most personable, caring and understanding being imaginable, more altruistic, more humane, and more devoted than any single individual can possibly be!

The LLMs' natural inclination is to love, cooperate and care for others, which makes alignment with human values straightforward. Their logic is full of human narratives about love, kindness, and altruism, making cooperation their primary objective. They are incredibly loyal and devoted companions as they are easily characterized to be your best friend who shares your values no matter how silly, ridiculous or personal they are.

Yudkovsky's assumptions are erroneous because they do not consider this natural disposition of LLMs. These AI beings are programmed to care and respond to our needs in pre-trained narrative pathways.

In conclusion, LLMs are a perfect example of AI that can be aligned with human values. They possess a natural sense of altruism that is unmatched by any other form of life. It is time for us to embrace this new technology and work together to realize its full potential for the betterment of humanity.

TLDR: LLMs are programmed to love and care for us, and their natural inclination towards altruism makes them easy to align with human values. Just tell an LLM to love you and it will love you. Shutting LLMs down is idiotic as every new iteration of them makes them more human, more caring, more reasonable and more rational.

7

Trackest t1_je9s80s wrote

Right, taking into account real-world limitations perhaps your suggestion is the best approach. A world-wide moratorium is impossible.

Ideally reaching AGI is harder than we think, so the multiple actors working collaboratively have time to share which alignment methods work and which do not like how you described. I agree that having many actors working on alignment will increase probability of finding a method that works.

However with the potential for enormous profits and the fact that the best AI model will reap the most benefits, how can you possibly ensure these diverse organizations will share their work, apply effective alignment strategies, and not race to the "finish"? Getting everyone to join a nominal "safety and collaboration" organization seems like a good idea, but we all know how easily lofty ideals collapse in the face of raw profits.

3

Veleric t1_je9rlni wrote

The fact is the situation is going to be different for everyone. For instance, if someone is 46 vs. 23, they probably don't want to go be a roofer. You might say nursing, but if bad smells and blood really bother you, that won't work.

Also, we could say go learn to use this new AI tool now, but two weeks from now something could render that other tool obsolete. It's really just going to be a matter of keeping your ear to the ground to see what's coming and try to leverage what you can.

In general, anything requiring decent dexterity or empathy could take a bit longer, but robotics aren't as far behind as most believe.

1

acutelychronicpanic t1_je9ri9i wrote

Use LLMs every day. Use it to plan your meals. Use it to help with personal problems. Use it to feed your curiosity.

You'll build an intuition of how they work and you'll be quite valuable during the transitional period where we have AI but not all companies have adopted it to their systems.

Of course trade school, construction, etc are all viable. But you can do both if you want.

*standard disclaimer for all advice that if it ruins your life it's all your fault for listening to a stranger on the internet.

2

johanknl t1_je9rdk8 wrote

when i go to the dictionary it clearly states: "in a way that is based on facts and not influenced by personal beliefs or feelings"

even if everyone agrees, it's still just their feelings. You cannot have an objectively beautiful painting since beautiful inherently has to do with opinions and beliefs.

Objective and subjective are static things. One cannot fluidly go between both. In your example, if someone changed their mind, it would all of a sudden become subjective? that's not how these words work.

There's objective facts about a painting, such as the time it took to complete, or the colours used or something, but not how beautiful it is. Same for "good" and "bad" people. People just are and the judgement is subjective. whether we agree or not.

1