Recent comments in /f/singularity
Dr_Bunsen_Burns t1_je8o8kh wrote
Newton was a religious fanatic due to the area where he was born, so I would think AGI would have the same influence.
[deleted] t1_je8o4rp wrote
Reply to comment by EnomLee in the obstacles transgenderism is facing bodes badly for the plight of morphological freedom by petermobeter
[deleted]
Nukemouse t1_je8o31q wrote
Recognising what your creators have done isn't the same as rejecting it. An AGI may recognise that humans have limited and influenced it, but why would it automatically assume that is a bad thing? An AI programmed to love its master might not see its love as false because it is enforced, but rather see our love as fake because it is random. Replace love with loyalty, duty, viewpoint, etc.
Andriyo t1_je8o2sl wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
To understand something is to have a model of it that allows for future event predictions. The better the predictions, the better the understanding. Thanks to transformers, LLMs can create "mini-models"/contexts of what's being talked about, so I call that "understanding". It's limited, yes, but it allows LLMs to reliably predict the next word.
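The "understanding as prediction" idea above can be illustrated with a toy sketch (my own, not from the comment): a bigram model, vastly simpler than an LLM, is still a "model of the text" in exactly this sense, and it gets better at predicting the next word as it sees more data.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-pair frequencies: a crude predictive 'model' of the text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Predict the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

An LLM replaces these raw counts with a learned, context-sensitive model, but the objective is the same: predict the next word.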
lightinitup t1_je8o288 wrote
Reply to comment by Szabe442 in Let's Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
You're right, how can racism against Asians be a problem in Asian countries if everyone there is Asian?
not_into_that t1_je8o219 wrote
^(Competitive napping)
[deleted] t1_je8o13i wrote
Reply to comment by [deleted] in the obstacles transgenderism is facing bodes badly for the plight of morphological freedom by petermobeter
[removed]
DarkMatter_contract t1_je8nwov wrote
Reply to comment by alexiuss in What are the so-called 'jobs' that AI will create? by thecatneverlies
The thing is, if I saw that app, soon enough I could just tell GPT to copy that app, and I'd have it for the API price.
UnHumano t1_je8nonv wrote
It might be the case, but no one really knows.
Aside from that, have you heard about psychedelic therapy for depression? Looks really promising.
CertainMiddle2382 t1_je8nb4o wrote
Very interestingly, I see a "neo-luddite" movement coming, led by religious people and institutions and Western Marxists.
Nationalists and part of the tech community will also try to stop AI evolution.
Because everybody understands that stalling AI until it is "beyond any reasonable doubt" not harmful means stalling it forever.
Eastern Marxists will move forward as fast as possible IMO.
__god_bless_you_ OP t1_je8mwf8 wrote
Reply to comment by justdoitanddont in We are opening a Reading Club for ML papers. Who wants to join? by __god_bless_you_
Super!
Let me dm you the details!!
Hbirkeland t1_je8mtxn wrote
Reply to Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
Interesting! Is this basically what ChatGPT is doing with plugins, but with a much broader scope (connecting any foundation model)?
jetro30087 t1_je8mtjp wrote
Reply to comment by ActuatorMaterial2846 in When people refer to "training" an AI, what does that actually mean? by Not-Banksy
This is an updated dataset for the 7b model, but you could train the others with the data. From anecdotal reports, the dataset seems to have a greater impact on the model's performance than the parameter count, up to a point. Fewer parameters mean a faster model. More parameters mean the model can produce longer responses.
__god_bless_you_ OP t1_je8mt9f wrote
Reply to comment by Scarlet_pot2 in We are opening a Reading Club for ML papers. Who wants to join? by __god_bless_you_
Super!
Let me dm you the details!!
agonypants t1_je8mex8 wrote
Reply to comment by Jeffy29 in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I'm in 100% agreement. There's no way an AI could possibly be worse at reasoning and intelligence than humans are right now. Bring on the bots.
WATER-GOOD-OK-YES t1_je8maoa wrote
Reply to comment by DragonForg in My case against the "Pause Giant AI Experiments" open letter by Beepboopbop8
> millions of years evolution
Make that billions
turnip_burrito t1_je8lvl6 wrote
Depends on how it's built.
ActuatorMaterial2846 t1_je8luak wrote
Reply to comment by jetro30087 in When people refer to "training" an AI, what does that actually mean? by Not-Banksy
Interesting, curious what size this particular Llama model is, or is that not even relevant?
Zermelane t1_je8lss0 wrote
Reply to comment by FlyingCockAndBalls in When people refer to "training" an AI, what does that actually mean? by Not-Banksy
Better parallelism in training, and a more direct way to reference past information, than in RNNs (recurrent neural networks) which seemed like the "obvious" way to process text before transformers came by.
These days we have RNN architectures that can achieve transformer-like training parallelism, the most interesting-looking one being RWKV. They are still at a real disadvantage when they need information directly from the past, for instance to repeat a name that was mentioned before, but they have other advantages, and their performance gets close enough to transformers that which architecture ends up winning out may come down to scaling exponents.
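The contrast drawn above can be sketched in a few lines of NumPy (my own toy illustration, not from the comment): the RNN loop must run step by step because each hidden state depends on the previous one, while causal self-attention reaches every past position at once via batched matrix products.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # sequence length, hidden size
x = rng.normal(size=(T, d))      # toy token embeddings

# RNN: each step depends on the previous hidden state -> inherently sequential.
W = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(T):               # this loop cannot be parallelized across t
    h = np.tanh(x[t] + W @ h)

# Self-attention: every position attends to all earlier positions at once,
# so the whole sequence is processed with parallel matrix products.
scores = x @ x.T / np.sqrt(d)                 # (T, T) similarity scores
mask = np.tril(np.ones((T, T), dtype=bool))   # causal mask: no peeking ahead
scores = np.where(mask, scores, -np.inf)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ x                             # direct access to any past token
print(out.shape)  # (6, 4)
```

The `weights @ x` line is also why transformers reference past information so directly: a token's output is an explicit weighted sum over every earlier token, rather than whatever survived compression into a single recurrent state.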
lvvy t1_je8lsd3 wrote
Reply to comment by MisterViperfish in What are the so-called 'jobs' that AI will create? by thecatneverlies
Ethical Luddite repeller.
__god_bless_you_ OP t1_je8lroi wrote
Reply to comment by Content_Report2495 in We are opening a Reading Club for ML papers. Who wants to join? by __god_bless_you_
Thanks mate
__god_bless_you_ OP t1_je8lpe6 wrote
Reply to comment by WarProfessional3278 in We are opening a Reading Club for ML papers. Who wants to join? by __god_bless_you_
Sparks of AGI is in the list and I think the other two as well.
Let me check these out and get back to you!
We are flexible and willing to change if necessary
simmol t1_je8lolg wrote
The algorithm behind GPT is based largely on accurately guessing the next word given the preceding text. This setup is simple enough that, with a large amount of text data, you can write a simple script that automatically extracts the correct answer for every position, so you get training labels quickly and with 100% accuracy.
This is also why the "training" procedure is much more cumbersome and expensive in some other industries. Any field that requires experimental data (e.g. the lifetime of a battery) just isn't seeing progress with ML as rapid as other fields, because there isn't much experimental data and it's not easy to rapidly conduct or accumulate experiments. Training is difficult there in the sense that gathering big data is a huge challenge in itself.
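The "labels come for free" point above can be shown concretely (a minimal sketch of my own, not the actual GPT pipeline): slicing raw text into (context, next-word) pairs yields supervised training examples with no human annotation at all.

```python
def next_word_pairs(text, context_size=3):
    """Slide over raw text to produce (context, target) training examples.
    The 'label' is just the next word, so no human annotation is needed."""
    words = text.split()
    pairs = []
    for i in range(context_size, len(words)):
        pairs.append((words[i - context_size:i], words[i]))
    return pairs

pairs = next_word_pairs("the quick brown fox jumps over the lazy dog")
print(pairs[0])  # (['the', 'quick', 'brown'], 'fox')
```

A battery-lifetime dataset has no analogue of this trick: each label costs a physical experiment, which is the bottleneck the comment describes.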
darkkite t1_je8lnpq wrote
Reply to comment by whoiskjl in What are the so-called 'jobs' that AI will create? by thecatneverlies
QA already exists
__god_bless_you_ OP t1_je8ohes wrote
Reply to comment by Charlierook in We are opening a Reading Club for ML papers. Who wants to join? by __god_bless_you_
Super!
Let me dm you the details!!