visarga
visarga t1_iuskkry wrote
Reply to comment by Sashinii in Robots That Write Their Own Code by kegzilla
The previous paper showed common sense knowledge transferring from a language model to robotics - such as how to clean up a Coke spill; this one adds Python on top for numerical precision and reliable execution.
Everyone here thinks blue collar jobs are still safe. They're wrong. Stupid robots + language model = smart robots. Don't dismiss Spot just because it only knows how to open doors and climb stairs; it can be the legs for the LLM.
So LLMs, besides being AI writers and task solvers, can also code, do data science, operate robots and control application UIs. Most of these capabilities have their own startups/large companies behind them. I think it's gonna be the operating system of 2030.
visarga t1_iusk21l wrote
Reply to comment by ProShortKingAction in Robots That Write Their Own Code by kegzilla
They take a few preventive measures.
> we first check that it is safe to run by ensuring there are no import statements, special variables that begin with __, or calls to exec and eval. Then, we call Python’s exec function with the code as the input string and two dictionaries that form the scope of that code execution: (i) globals, containing all APIs that the generated code might call, and (ii) locals, an empty dictionary which will be populated with variables and new functions defined during exec. If the LMP is expected to return a value, we obtain it from locals after exec finishes.
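For a concrete picture, here's a minimal Python sketch of that flow. The `is_safe`/`run_lmp` names, the AST-based scan, and the `robot_apis` dict are my own illustration of the idea, not the paper's actual code:

```python
import ast

def is_safe(code_str):
    """Reject code containing imports, dunder names, or exec/eval calls."""
    tree = ast.parse(code_str)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Name) and node.id.startswith("__"):
            return False
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("exec", "eval")):
            return False
    return True

def run_lmp(code_str, robot_apis):
    """Run generated code; return the variables/functions it defines."""
    if not is_safe(code_str):
        raise ValueError("generated code failed the safety check")
    lmp_globals = dict(robot_apis)  # APIs the generated code may call
    lmp_locals = {}                 # gets populated during exec
    exec(code_str, lmp_globals, lmp_locals)
    return lmp_locals
```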
visarga t1_iusjony wrote
Reply to comment by Reddituser45005 in Robots That Write Their Own Code by kegzilla
GPT-3 can also do "data science" - generating Pandas and SQL code from natural language instructions - and can manipulate a UI in a way similar to this paper.
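The prompting pattern is roughly like this - a sketch assuming the (2022-era) OpenAI completions API, with a made-up dataframe and instruction:

```python
import openai

# Describe the data and the task in the prompt; the model completes with code.
prompt = (
    "# df has columns: name, department, salary\n"
    "# Instruction: average salary per department, sorted descending\n"
    "# Pandas code:\n"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=100,
    temperature=0,
)
print(response.choices[0].text)
# Typically completes with something like:
# df.groupby("department")["salary"].mean().sort_values(ascending=False)
```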
visarga t1_iu8bzyj wrote
Reply to comment by cy13erpunk in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different viewpoints.
Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.
I bet all sorts of influencers, politicians, advertisers and investors are going to want a virtual focus group that selects, out of 100 variations of their message, the one with the maximum impact. An automated campaign expert.
On the other hand, it's like we have uploaded ourselves. You can conjure anyone by calling out their name and describing their backstory, but the uploads don't exist in a separate state; they all live in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.
visarga t1_iu84rfo wrote
Reply to comment by MercuriusExMachina in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
"Yeah, no human is that human, you can't fool me bot!"
visarga t1_iu7real wrote
Ah, the famous Michael Jordan of AI.
He's like Hinton, Yann LeCun, Bengio and Schmidhuber - he built some of the foundations of AI over his career.
visarga t1_iu7r0tk wrote
Reply to audiobook recommendations... by theferalturtle
Why don't you try reading survey papers? They don't add anything new, but they summarise a whole year or more of activity in a subfield and give a pretty good high-level overview.
For example, this one is nice: https://arxiv.org/pdf/2111.06091.pdf
visarga t1_iu7onr9 wrote
Reply to If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
> Tell me the first 10 digits of PI, if you succeed you're a bot.
Tested it on GPT-3; it got 3.141592653. Could anyone here answer that from memory?
Asked for 100 digits, it spat out this:
> 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
Checks out. Can't be a human, unless their name is also Pi or Piscine.
visarga t1_iu7nryj wrote
Reply to comment by SlenderMan69 in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Humans fool and lie to themselves all the time. One example that comes to mind is anti-vaxxers protesting vaccines and then still going to the hospital when they get sick; or worse, people protesting abortion and then having one in secret.
Similarly, neural nets will learn the training set perfectly but fail on new data; they give you the illusion of learning if you're not careful. That's why all papers report the score on a separate block of tests the model has not seen yet. It's a lying, cheating bastard when it comes to learning. This game AI found a clever way to win points without having to do the whole course.
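The standard guard against this illusion is a held-out test split - a minimal sklearn sketch (my own example, not from any particular paper):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = DecisionTreeClassifier().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # often a perfect 1.0
print("test accuracy:", clf.score(X_test, y_test))     # the honest number
```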
visarga t1_iu7nbwy wrote
Reply to comment by 3Quondam6extanT9 in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Q: Why does a boy dying from an allergic reaction to a bee sting make you cry?
GPT-3:
>There are a few reasons why this might make someone cry. First, it is tragic when anyone, especially a child, dies unexpectedly. Second, an allergy to bee stings is relatively rare, so it may be shocking or upsetting to see someone die from something that seems so innocuous. Finally, the death may be a reminder of our own mortality; even if we are not allergic to bee stings, we could still die from something seemingly innocuous.
Does that pass your test? It's one trial, temp 0.7, first answer that popped up.
In my opinion it's better than the average human response.
visarga t1_iu3etz0 wrote
Reply to comment by daxophoneme in Full-band General Audio Synthesis with Score-based Diffusion by xutw21
I think it's OK at large scale: the model learns the noise separately from the content, and it works as "free augmentation".
visarga t1_iu3em7n wrote
Reply to comment by MasterFruit3455 in Teen Glues Hand To Historic Computer to Protest A.I. Takeover [satire] by canadian-weed
Ass wiping, you mean? Specialised robots are already on the job.
visarga t1_iu3b3o0 wrote
Reply to comment by sonderlingg in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
The singularity won't come by itself; we have to work hard towards it. There's lots of work to do in the transition.
visarga t1_iu3aj81 wrote
Reply to comment by Mobile-Bid-9848 in [D] Python function that changed your life in regard to Machine Learning by popcornn1
Poor debuggability.
visarga t1_itz7mx1 wrote
Internet use grew 14x between 1997 and 2007. Mobile phone users grew 7x between 2000 and 2010. Smartphone users grew 12x between 2007 and 2017. In that time we got e-commerce, social networks, online media, taxi and booking apps, educational materials, open source everywhere, the early discoveries in deep neural nets... Many or most of these were unexpectedly useful and changed society.
We are in a wild west, dot-com-bubble period now with AI. I don't think there will be a crash; rather, I think it will take 10 years to see a profoundly transformed world, and 20 years to go beyond our current horizons.
Who will become the rulers of this new era? People like to bet on big corporations because they have the hardware, money and brains, but I think that's misleading. You can run a model on your computer, but you can't run 'a Google' on your computer; Google forces you to disclose your private data to use it.
But it's possible that AI models will democratise access compared to centralised internet services. You can install SD or a language model on your own box, in privacy. You don't need to wade through spam; you can put your questions directly to a polite and knowledgeable assistant. You don't need to visit any original site at all, or even be online for that matter. It's all in the model, plus maybe a curated corpus of additional content sitting on your drive. Nobody knows what you're doing, and they can't put ads in your face. You don't even need to know how to code or know about AI, because the interface is so natural that everyone can use it, and use it for new things, without needing to reprogram it.
I just described a lifestyle where humans are surrounded by a loyal AI cocoon, a private space for dreaming and creativity that seems to be on the verge of extinction today. That's my dream, what I want to see.
visarga t1_itx6vs1 wrote
Reply to comment by HyperImmune in [DEEPMIND] Transformers have shown remarkable capabilities - but can they improve themselves autonomously from trial and error? by Danuer_
They use a large-context model to learn (distill) from gameplay generated by other agents. They put more history in the context, so the model needs fewer samples to learn.
This is significant for robots, bots and AI agents. Transformers have been found to be very competent at learning to act/play/work relative to other methods, and this paper shows they can learn with less training.
visarga t1_itwxzgs wrote
Reply to comment by Southern-Trip-1102 in [D] What's the best open source model for GPT3-like text-to-text generation on local hardware? by AuspiciousApple
My experience is that models that have not had the instruction tuning treatment don't behave nicely.
visarga t1_ittgjnj wrote
Reply to comment by AdditionalPizza in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Just imagine: 25 years ago, could you have predicted the explosion of work related to the internet? There's development, content creation, communications, commerce and education. It's practically a double of the physical world. It made us more efficient by a large margin, and yet here we are, still employed. Even delivery people and taxi drivers get work from the internet.
How does the logic that "automating even part of a job leads to layoffs" stand up to that test? I think the correct answer is that we scale up work to match the available capacity instead of firing people. Our desires scale up faster than automation or resources.
visarga t1_ittbiag wrote
Reply to comment by blueSGL in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
> Is there going to be enough companies left doing things 'the old way' to keep employment numbers up even though it's less cost effective?
In the medium term there will be new jobs and applications that were impossible before. A company should expand and diversify instead of firing its people - if it cares about profits, that is. We also have to tackle global warming and other pesky problems on a grand scale. In the long term I think we'll reach post-scarcity through a combination of automation and smart materials.
visarga t1_itr1f9w wrote
Reply to It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
It's gonna be both the good and the bad, plus some surprising bits we never even imagined. But on the whole I think generative AI has broadly empowered people. AI is more accessible than desktop apps and even mobile apps: you can just talk to it; you don't even need to read. It helps developers with snippets of code. It helps artists generate stunning images. And it's not hard to learn; it lowers the entry barrier. It basically adds a few IQ points to everyone who uses it. It will be what Google should have been before it choked on spam and ads - a way to make all information more accessible and reusable. It will also run on your own machine, in privacy.
visarga t1_itkcw8e wrote
Any of the recent LLMs can blow away the commercial voice assistants we have today. So why are voice assistants so primitive? It's probably too expensive to give everyone GPT-3 powers in their speaker, but that should change fast, because there are now models 50x smaller with comparable quality.
But cost is probably not the only reason LLMs are not in voice assistants; I bet companies are afraid they will be prompted into saying racist things and generate bad PR. Who wants an 'MS Tay' on their hands?
visarga t1_itil03g wrote
You don't program AI with "statements"; it's not Asimov's positronic brain. What you do instead is provide a bunch of problems for the AI to solve. These problems should test the alignment and fuzz out the risks. When you are happy with its calibration, you can deploy it.
But here's an interesting recent development: GPT-3 can simulate people in virtual polls. Provided with a personality profile, it will assume that personality and answer poll questions from its perspective.
>GPT-3 has biases that are “fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups.”
Apparently GPT-3 is not only aligned with humans in general, but precisely aligned with each demographic. So it knows our values really well.
The problem is that now we have to specify which bias we want from it, and that's a political problem, not an AI problem. It is ready to oblige with whatever bias we ask for; it's even more aligned than we'd like, aligned with our stupid sides as well.
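A sketch of what that conditioning looks like in practice - the persona and question here are invented for illustration:

```python
# Prepend a personality profile so the model answers "in character";
# repeat over many sampled personas and tally the answers to
# approximate a response distribution.
persona = (
    "You are a 45-year-old factory worker from Ohio, married with two "
    "kids, who follows local news closely."
)
question = (
    "Do you support raising the federal minimum wage? "
    "Answer yes or no and explain briefly."
)
prompt = f"{persona}\n\nPoll question: {question}\nAnswer:"
# Send `prompt` to the language model of your choice and collect the answer.
```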
visarga t1_itij9o0 wrote
Reply to comment by StarKiller2626 in What will you do to survive in the time between not needing to work anymore to survive and today? by wilsonartOffic
People with skills, no job and lots of needs to fulfil have to become self-reliant, alone or in a larger group. Fortunately there are going to be great technologies to support that: solar panels, water filtration, automation and of course AI. A community can organise its own school, clinic and store, build its own houses, repair its equipment. The total dependency on the outside should be diminished.
Of course they can't make their own computer chips and will rely on open source software; the same goes for medicine and construction materials. And they need some initial capital. But that should give people a way to use their hands to improve their situation. Own your means of production so you don't need a job. We don't need corporations to UBI us, just our share of resources to build with.
visarga t1_itgu5bi wrote
Reply to comment by ChronoPsyche in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
Not exponential - let's not exaggerate. It's quadratic. If you have a sequence of N words, you can have NxN pairwise interactions. This blows up pretty fast: at 512 words -> 262K interactions; at 4000 words -> 16M interactions. See why it can't fit more than 4000 tokens? It's that pesky O(N^2) complexity.
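The arithmetic in code form:

```python
# Self-attention builds an N x N score matrix, so cost grows
# quadratically with sequence length N.
for n in (512, 4000):
    print(f"{n} tokens -> {n * n:,} pairwise interactions")
# 512 tokens -> 262,144 pairwise interactions
# 4000 tokens -> 16,000,000 pairwise interactions
```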
There is a benchmark called "Long Range Arena" where you can check the state of the art in solving the "memory problem".
visarga t1_iusqcpg wrote
Reply to A comprehensive list of the most impactful AI advances in October. by SpaceDepix
Interesting. Is this the only article on the site?