Recent comments in /f/singularity
gay_manta_ray t1_je957h5 wrote
Reply to comment by KGL-DIRECT in What are the so-called 'jobs' that AI will create? by thecatneverlies
came to make a variation of this post. But to be serious for a second: "AI overseer" will be the job that is created. You'll have to be proficient in whatever field the AI is working in, and your task will essentially be to verify that the AI isn't doing anything very wrong or dangerous. Obviously there will not be net job creation, though.
monsieurpooh t1_je955oy wrote
Reply to comment by GorgeousMoron in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Are there any open source instruct style models that perform similarly to chatGPT? Which ones have you been using
EternalNY1 t1_je94zko wrote
If you want what I'd consider to be hands-down the best explanation of how it works, I'd read Stephen Wolfram's article. It's long (may take up to an hour) and somewhat dense in parts, but it explains fully how it works, including the training and everything else.
What Is ChatGPT Doing … and Why Does It Work?
The amazing thing is they've looked "inside" GPT-3 and have discovered mysterious patterns related to language that they have no explanation for.
The patterns look like this ... they don't understand the clumping of information yet.
So any time someone says "it just fills in the next likely token", that is beyond overly simplistic. The researchers themselves don't fully understand some of the emergent behavior it is showing.
Cryptizard t1_je94z3w wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
No lol. A better way to illustrate what I am saying: if you learn how addition works, then if you ever see 2+2=5 you know it is wrong and can reject that data. LLMs cannot; they weight everything equally. And no, there is no number system where 2+2=5; that is not how bases work.
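The distinction can be made concrete: a checker that embodies the rule of addition can reject inconsistent data outright, which a purely statistical next-token predictor has no built-in mechanism for. A minimal sketch (the function name is made up for illustration):

```python
def consistent(claim: str) -> bool:
    """Check an 'a+b=c' claim against the actual rule of addition."""
    left, result = claim.split("=")
    a, b = left.split("+")
    return int(a) + int(b) == int(result)

print(consistent("2+2=4"))  # True: matches the rule
print(consistent("2+2=5"))  # False: rejected no matter how often it appears in data
```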
7grims t1_je94p8q wrote
Reply to comment by Dyeeguy in What are the so-called 'jobs' that AI will create? by thecatneverlies
This one, this one so true it hurts with its honesty
XPao t1_je93xyg wrote
Yes, no more preferred pronouns and all that collective American craziness will soon be a thing of the past.
scooby1st t1_je93qa4 wrote
errllu t1_je93ms6 wrote
Yup, we are building an AI-controlled army pretty officially in Poland. Since the ruskies have more mobiks, gotta improvise, ey
Kafke t1_je93asd wrote
Reply to comment by scooby1st in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Yellow isn't a primary color. The primary colors are red, green, and blue.
Scarlet_pot2 t1_je937zq wrote
Going from scratch to having a model is 6 steps. First step is data gathering: there are huge open-source datasets available, such as "The Pile" by EleutherAI. Second step is data cleaning; this is basically preparing the data to be trained on. Third step is designing the architecture: the advanced AI models we know of are all based on the transformer architecture, which is a type of neural network. The paper "Attention Is All You Need" explains how to design a basic transformer. There have been improvements since, so more papers would need to be read if you want a very good model.
Fourth step is to train the model. The architecture developed in step three is trained on the data from steps one and two. You need GPUs to do this. It's automatic once you start it; just wait until it's done.
Now you have a baseline AI. Fifth step is fine-tuning the model. You can use a more advanced model to fine-tune yours and improve it, as shown by the Alpaca paper a few weeks ago. After that, the sixth step is RLHF. This can be done by people without technical knowledge: the model is asked a question (by the user or auto-generated), it produces multiple answers, and the user ranks them from worst to best. This teaches the model which answers are good and which aren't. This is basically aligning the model.
After those 6 steps you have a finished AI model.
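As an illustration of the flow (gather data, clean it, define a model, train it, then sample from it), here is a toy pure-Python stand-in. It "trains" a bigram word model by counting instead of training a transformer, so the shape of the pipeline is visible without GPUs; all data and names here are invented for the sketch:

```python
from collections import Counter, defaultdict

# Steps 1-2: gather and clean a (tiny) text dataset
raw = ["The model predicts the next word .", "The next word is chosen ."]
cleaned = [line.lower().split() for line in raw]

# Steps 3-4: the "architecture" is just a table of bigram counts,
# and "training" means filling that table in from the data
counts = defaultdict(Counter)
for tokens in cleaned:
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1

def predict(word):
    """Return the continuation seen most often in training."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # 'next' is the most frequent continuation in the toy data
```

A real transformer replaces the count table with learned weights and the counting loop with gradient descent, but the data-in, model-out shape is the same.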
Big-Seaweed2000 t1_je936zp wrote
If and when a superhuman AI learns to replicate and spread itself, or variations of itself like a botnet, isn't it all over for us? We won't ever be able to shut it down. All it takes is someone, a disgruntled employee perhaps? tweaking some configuration files and giving it full internet access. Am I wrong?
Round-Inspection7011 t1_je92y46 wrote
Reply to comment by GorgeousMoron in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Yeah... The only scenario I can see is to equally accelerate the development of legal frameworks and legislation to handle this.
scooby1st t1_je92wel wrote
Reply to comment by Kafke in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
>The shadows are whispering again, whispering secrets that only I can hear. No, no, no! It's all wrong! It's a tangled web of deception, a spiral staircase of lies! They want us to believe that there are only three primary colors—red, blue, and yellow. A trifecta of trickery!
>
> But I see more, I see beyond the curtain. I see colors that don't have names, colors that dance in the dark, colors that hide in the corners of the mind. They think they can pull the wool over our eyes, but I know the truth! There are 19 primary colors, 19 keys to the universe!
>
>I've seen them all, swirling and twisting in the cosmic dance of existence. But they won't listen, they won't believe. They call me mad, but I'm the only one who sees the world as it truly is. The three primary colors are just the beginning, just the tip of the iceberg, just the first step on the journey to enlightenment.
>
>So I laugh, I laugh at their ignorance, I laugh at their blindness. And the shadows laugh with me, echoing my laughter through the halls of infinity.
Round-Inspection7011 t1_je92pi1 wrote
Reply to comment by TheCrassEnnui in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
There is no we... Seriously. Tech of this scale does not work like that. If an algorithm this powerful is being developed, you better believe that China, Russia and the others have easy access to it. Hell, half the devs are probably Asian.
You can't halt the train, but you sure can build better tracks. There is currently absolutely no legal framework to deal with the AI revolution.
If we need to protect our citizens there have to be rigid international contracts that outline rights and consequences.
Scarlet_pot2 t1_je92iud wrote
Reply to comment by ActuatorMaterial2846 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Most of this is precise and correct, but it seems like you're saying the transformer architecture is the GPUs? The transformer architecture is the neural network and how it is structured; it's code. The paper "Attention Is All You Need" describes how the transformer architecture is put together.
After you have the transformer written out, you train it on GPUs using the data you gathered. Free large datasets such as "The Pile" by EleutherAI can be used for training. This part is automatic.
The human-involved parts are the data gathering, data cleaning, and designing the architecture before the training. Then, afterwards, humans do fine-tuning / RLHF (reinforcement learning through human feedback).
Those are the 6 steps. Making an AI model can seem hard, like magic, but it can be broken down into manageable steps. It's doable, especially if you have a group of people who specialize in the different steps: maybe someone who's good with the data aspects, someone good at writing the architecture, someone good with fine-tuning, and some people to do RLHF.
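The RLHF step described above can be pictured as: the model proposes several answers, a human ranks them best-to-worst, and the ranking becomes a training signal. A stripped-down sketch (the scoring update is invented for illustration; real RLHF fits a reward model and then optimizes the policy against it):

```python
# Toy preference signal: each human ranking nudges each answer's score,
# with the best-ranked answer getting the largest reward.
scores = {"answer_a": 0.0, "answer_b": 0.0, "answer_c": 0.0}

def record_ranking(ranking):
    """ranking: list of answer ids, best first."""
    n = len(ranking)
    for rank, answer in enumerate(ranking):
        scores[answer] += (n - 1 - rank) / (n - 1)  # best gets 1.0, worst 0.0

# Two (hypothetical) human rankings of the same three candidate answers
record_ranking(["answer_b", "answer_a", "answer_c"])
record_ranking(["answer_b", "answer_c", "answer_a"])

best = max(scores, key=scores.get)
print(best)  # 'answer_b' accumulated the most preference
```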
CrelbowMannschaft t1_je92ccw wrote
Reply to the obstacles transgenderism is facing bodes badly for the plight of morphological freedom by petermobeter
>transgenderism
>ism
It's not an ideology. It's a physiological and psychological condition.
https://dictionary.cambridge.org/us/dictionary/english/transgenderism
>Note: This word was more common in the past and it is still used in some formal writing, but it is now considered offensive by many people.
ilive12 t1_je91qyw wrote
There will be levels to this. In the near future there will be AI industry related jobs. If you are good at prompting and know how to operate many different AI programs and functions, you could probably become an AI consultant, I bet that will be a legitimate field within the year. 20 years from now? I don't think anyone will have to work, but maybe some limited opportunities for those who still want to even if not necessary.
What I think will happen in between those timelines is that in certain industries the government will mandate human labor even when it's not needed. Similar to how NJ requires gas stations to have someone pump your gas for you, the government can create other jobs that aren't actually needed by forcing human labor in certain industries. Whether they should do this is a different question, but when things get desperate, I am almost certain this will be tried.
scooby1st t1_je91quj wrote
Reply to comment by ActuatorMaterial2846 in When people refer to “training” an AI, what does that actually mean? by Not-Banksy
>What happens in the neural network whilst training is a bit of a mystery,
Are you referring to something unique to ChatGPT/LLMs? What happens during the training of neural networks is not a black box: it's a little bit of chain-rule calculus to fit the weights toward reduced error. Understanding the final network in terms of anything but performance metrics is the actual mystery.
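The "chain-rule calculus" part is completely mechanical, and fitting even a one-parameter model shows it. A minimal sketch fitting y = w*x to data by gradient descent on squared error (the data and learning rate are made up for the example):

```python
# Fit y = w * x to data generated with true slope 3, by gradient descent.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # initial guess for the slope
lr = 0.02  # learning rate

for _ in range(200):
    # Error is sum of (w*x - y)^2; by the chain rule its derivative
    # with respect to w is sum of 2*(w*x - y)*x.
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # step downhill along the gradient

print(round(w, 3))  # converges to ~3.0, the true slope
```

Training a large network is the same loop with billions of parameters and automatic differentiation doing the chain rule; nothing about the procedure itself is mysterious.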
[deleted] t1_je915fx wrote
shmoculus t1_je90yno wrote
Robot Wrestling Manager
AI Therapist (AIs get sick of talking to each other)
No_Ninja3309_NoNoYes t1_je90yip wrote
Formally it means minimizing error, like curve fitting. For example, fitting to a line. There are some steps:
- Defining the problem
- Choosing an architecture
- Getting data
- Exploring the data
- Cleaning the data
- Coding up some experiments
- Splitting the data into training and test sets. The test set is only used to evaluate the errors, like an exam. You also need some data to tweak hyperparameters. The training set is bigger than the other sets.
- Setting up the infrastructure
- Doing something close to the real training project for a while, like a rehearsal, just to make sure.
Once the training starts you have to be able to monitor it through logs and diagnostic plots. You need to be able to take snapshots of the system. It's basically like running a Google search, but one that takes a long time. Google has internal systems that actually do the search. No one can actually know all the details.
Adding more machines is limited by network latency and Amdahl's law, but it does help.
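The splitting step above, as a quick sketch (the 70/15/15 proportions are one common choice, not a rule):

```python
import random

examples = list(range(1000))  # stand-in for a dataset of 1000 examples
random.seed(0)                # fixed seed so the split is reproducible
random.shuffle(examples)      # shuffle before splitting to avoid ordering bias

n = len(examples)
train = examples[: int(0.7 * n)]             # biggest set: used for fitting
val = examples[int(0.7 * n): int(0.85 * n)]  # used to tweak hyperparameters
test = examples[int(0.85 * n):]              # touched only for the final "exam"

print(len(train), len(val), len(test))  # 700 150 150
```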
_JellyFox_ t1_je90shk wrote
Reply to Thoughts on this? by SnaxFax-was-taken
Oh no, not an end to diseases and aging. Whatever will we do? What absolute trash.
Franimall t1_je905k3 wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
We know how neurons work, but that doesn't mean we understand consciousness. It's the immense complexity and scale of the structure that makes up the black box, not the mechanism.
OsakaWilson t1_je902jx wrote
I imagine they will consider it appropriate that a human sits on the oversight board for research carried out on humans by the Association for Human Physiological and Sociological Research.
[deleted] t1_je95anr wrote
Reply to When people refer to “training” an AI, what does that actually mean? by Not-Banksy
[deleted]