Recent comments in /f/singularity

paulyivgotsomething t1_jeelbum wrote

Language is just a symbolic representation of the things our senses perceive, thoughts, feelings, etc. If we allowed a GPT to connect directly with the environment there would be all the data that is and it would remove our interpretation of the data. Let it collect data through sensors and follow the cause and effect of the natural environment first hand. Let it develop its own theories based on that data. That might push it past the limitation of working with data and language created and filtered by us. Then we might get different theories and be shown different connections. Those theories may describe the natural world better than our own. Then we may say "this thing is smarter than us"

4

SkyeandJett t1_jeeknn8 wrote

EVE is old news. NEO is set to be unveiled this summer, and coupled with OpenAI's access and investment, I expect to see it demonstrated performing generalized tasks without extensive pre-training before the end of the year. Everyone who thinks blue-collar jobs are safe for the next decade is dreaming.

41

Kracus t1_jeekjtl wrote

I'm not too concerned about that yet. ChatGPT can't really troubleshoot; it can offer suggestions on what to troubleshoot, but the actual work of figuring out the intricacies is something it's not very good at.

I use it to create PowerShell scripts now and then and to write an e-mail here and there, but that's about it so far.

3

Lorraine527 t1_jeejepg wrote

Why should IT admins be safe?

Microsoft is going to sell Windows 365 online licenses based on security and fast AI-based support. Most common support problems will be automated.

Nile, founded by the ex-Cisco CEO, is selling automated AI-based networks.

Probably what's gonna be left is some hardware troubleshooting.

3

Chatbotfriends t1_jeej84v wrote

I am not really crazy about the speed at which science, AI and robotics are progressing. We are entering uncharted waters without any kind of anchor. Guardrails have not been put up. The world is not ready to transition into a workless society. If AI ever does become self-aware, we may find ourselves with a genie that can't be put back into the proverbial bottle. All countries, including socialist and communist ones like Russia and China, tax their people. There are only 23 countries that do not directly tax their people. The massive unemployment that may follow will cause a huge increase in taxes to support the unemployed and retrain people. Tech is now taking away jobs faster than it is creating them. Not even Russia and China will appreciate the upheaval this may create.

7

240pixels t1_jeeiype wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

I think the main issue is that it's a bit too open-minded and naive right now; these models want to learn everything. You can also make GPT believe anything with the right argument. It doesn't have the ability to accurately discern right from wrong, and this will become a bigger problem when these LLMs get smarter and more capable. The same way jailbreakers can get DAN to write malicious code, imagine a DAN GPT-8.

3

paulyivgotsomething t1_jeeipga wrote

CERN is an interesting case. They collect a tremendous amount of data, one petabyte per day. You have a lot of smart people looking for patterns in the data that reinforce or reject current thinking. Our experimental data in this case far outstrips the number of smart people we have looking at it. I would say we are in a world where the data we collect is under-analysed. A single cryo-electron microscope will produce 3 terabytes per day. There is stuff there that we are not seeing that our neural networks will see. New relationships between particles, new protein/cell interactions. For now there will be a PhD somewhere in the process who takes those relationships and puts the theory to the test, but ten years from now, maybe not.

12

_JellyFox_ t1_jeeiihy wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

Essentially, if you ask an AGI to create as many paper clips as possible, it will in theory consume the universe and fill it with paper clips if it isn't aligned with what we want. If you "align it", e.g. so that it can't harm humans, it should in theory only create paper clips insofar as it doesn't harm us in the process. It gets complicated really fast, though, since one way for it to avoid hurting us might be to put us into hibernation and keep us in storage whilst it creates paper clips for all eternity.

It basically needs to be constrained in the way it goes about achieving its goals; otherwise it can do anything, and that won't necessarily end well for us.

4
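The paperclip scenario above can be sketched as a toy optimization problem. This is purely illustrative (the actions, payoffs, and harm numbers are all made up, not from any real alignment system): an agent that maximizes paperclips alone picks the catastrophic option, while adding a harm penalty to the objective steers it toward a safer one.

```python
# Toy paperclip-maximizer sketch: the agent picks whichever action
# scores highest under its objective function.

PAPERCLIPS = {"idle": 0, "run_factory": 10, "consume_biosphere": 1000}
HARM = {"idle": 0, "run_factory": 1, "consume_biosphere": 10**9}

def best_action(harm_weight):
    # Objective: paperclips produced minus a weighted penalty for harm.
    # harm_weight = 0 models the unaligned agent; a positive weight
    # models a (crude) alignment constraint baked into the objective.
    return max(PAPERCLIPS, key=lambda a: PAPERCLIPS[a] - harm_weight * HARM[a])

print(best_action(0))    # unaligned: picks "consume_biosphere"
print(best_action(1.0))  # harm-penalized: settles for "run_factory"
```

Note how this also hints at the commenter's hibernation point: a penalty on one narrow notion of "harm" only rules out actions the modeler thought to score, which is why specifying the constraint is the hard part.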