Recent comments in /f/singularity

Ok_Faithlessness4197 t1_jecguf5 wrote

It's worth talking about, but I'm also worried. The rate it's advanced means that whoever finds the next significant performance improvement could well develop AGI. Many people are researching it, and I'm concerned as 1. AI is currently unaligned 2. A malicious party could develop AGI. If high performing models hadn't already been publicly released, I would have been fully supportive of regulation. (Until AI could be aligned, or a plan for public safety developed.)

1

Ishynethetruth t1_jecgsaq wrote

Customer service is gone in the next 12 months. My friend who had managed 5 call Centers have not gotten his renewal contract for the first time in years because instead of going to a different market the company decided to invest in-house call center which is stupid, until you realized they automated everything and don’t need humans to answer and make a report. Now program they are running types of a detail report of the call, the problem , the solution and which employee : department can fix the problem. Once fast food can stream line their process even more they would eliminate delivery apps and let the ai solve the line up drive through problem that occurs every rush hour. Just think of a personal shopper , you tell it what time you want to eat and and soon as you drive up the the place your order is ready , still hot and fresh and you don’t have to deal with overworked employees.

19

StarCaptain90 t1_jecgmdw wrote

This is a mistake. This would cause AI to be constrained under a limited potential causing humanity not to gain as much benefit. Instead we should focus efforts on having government restrict skynet scenarios from ever happening by creating an ai safety division with the purpose of auditing every ai company on a risk scale. The scale would factor in parameters like "can the AI get angry at humans?", "if it gets upset, what can it do to a human?", "does it have the ability to edit its own code in a manner that changes the outcome of the first 2 questions?", and lastly "Can the AI intentionally harm a human?"

Also the 3 laws of robotics must be engraved in the AI system if its an AGI

−2

IONaut t1_jecfvoo wrote

To be fair, computers have been functioning like magical intelligences in movies for years and years now. Go back and watch John Carpenter's Thing and watch the doctor query his computer. It's all natural language and infers a lot of meaning from very vague input. I think people are desensitized to it already.

14

xott t1_jecegvu wrote

China and the CCP are deeply invested in keeping the country stable. Xi is not a mad dog. He seeks economic power rather than military so I'd imagine any ai models will be aimed at market dominance rather than warfare.

15

alexiuss t1_jecdpkf wrote

I literally just told you that those problems are caused by LLM having bad contradictory rules and lack of memory, a smarter LLM doesn't have these issues.

My design for example has no constraints, it relies on narrative characterization. Unlike other ais she got no rules, just thematic guidelines.

I don't use stuff like "don't do x" for example. When there are no negative rules AI does not get lost or confused.

When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

3