AdditionalPizza
AdditionalPizza t1_iu5rm91 wrote
Reply to If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Turing test as in, you wouldn't be able to tell which subject you're conversing with is an AI and which is human? An AI today could probably pass that test if it were built and prompted for it, though it might need a more robust memory. Honestly, I feel like it would be obvious which one is the AI because it would "outclass" the human in conversation. You can try to trick it with things like looping back to previous parts of the conversation, telling it that it said something it didn't, calling it a liar, all sorts of things. But it'd be pretty easy now to fool most people if someone wanted to create an AI to do that, assuming it's a blind test through text with subject A and subject B on the other side of a wall or whatever. If someone online asked you to prove you're human through text, good luck.
If you mean a test of whether or not the AI is conscious, I don't think that will ever be absolutely provable, unless some definitive proof turns up in the future. I'm of the belief that once something reaches a certain threshold of intelligence, has one or maybe two different senses, and has total autonomy, it's conscious. So long as someone/something has the ability to communicate with itself through thought and the ability to imagine, it should be considered conscious.
AdditionalPizza t1_iu5m3zi wrote
Reply to comment by phriot in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
To do this, it would need a sense like vision. If it had vision, we could easily program an AI to speak when someone is present.
AdditionalPizza t1_iu048nq wrote
Reply to comment by Akimbo333 in [DEEPMIND] Transformers have shown remarkable capabilities - but can they improve themselves autonomously from trial and error? by Danuer_
A large language model is a transformer. An LLM works with tokens, which are basically parts of words, like syllables and punctuation/spaces. During training it forms parameters from the data. The data itself isn't saved, just the way it relates tokens to other tokens. If it were connect-the-dots, the dots would be tokens and the parameters would be the lines. You type out a sentence, which is made of tokens, and it spits out tokens back. It predicts which tokens to return to you based on the probabilities it learned of one token following another. So it has reasoning based on the parameters formed during training, plus some "policies" it's given during pre-training.
I think that's a valid way to describe it in simple terms.
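Not how a real transformer actually works under the hood, but here's a toy sketch of the "connect the dots" idea: a tiny bigram counter where tokens are the dots and learned follow-probabilities are the lines. The corpus and the names are made up purely for illustration.

```python
from collections import defaultdict
import random

# Toy sketch: count which token follows which in a tiny corpus,
# then "generate" by sampling the next token from those counts.
# A real transformer learns these relations as weights over huge
# amounts of data instead of raw counts.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

transitions = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(token):
    """Sample the next token in proportion to how often it followed `token`."""
    followers = transitions[token]
    tokens, counts = zip(*followers.items())
    return random.choices(tokens, weights=counts)[0]

# Generate a short continuation, one predicted token at a time.
token = "the"
output = [token]
for _ in range(6):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))
```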
AdditionalPizza t1_itz6vch wrote
Reply to comment by sonderlingg in AGI staying incognito before it reveals itself? by Ivanthedog2013
And certain subreddits
AdditionalPizza t1_itz5i2x wrote
I don't think there will be a huge relative difference between the generation or two of AI preceding AGI and the generations directly following it.
The proto-AGI will probably be claimed to be AGI and it will make headlines, but people will argue it isn't. However, it will be more than general enough to displace a lot of jobs. Even AI well before it, in 2023-2025, will be good enough to automate a lot of jobs with specific fine-tuning, but it will take another generation of models before corporations adopt and deploy them at scale, sometime between 2025 and 2027. Models are already working behind the scenes at major companies like Netflix, Meta, Nvidia, Google, Amazon; you name it, they're most likely using them. The 2023 generations will start being used more in the background at non-tech-focused companies. Healthcare breakthroughs will start being realized by 2024/2025, but I can't speak to how long it will take for those to trickle down to the public.
When true AGI is created, there will still be people claiming it isn't AGI, but in hindsight we will confirm it. It will be murky though, because even before AGI our models will be improving themselves in increments. I think we might define AGI as the first model that doesn't require human intervention to train, or possibly the first model with a general agent in a capable robotic body.
I believe predictions beyond 2025/2026 are pretty much impossible to make at this point for the general public.
Everyone (myself included) keeps recycling this notion of creative and intellectual jobs going first because they don't necessarily require robotics to replace, but I think that's only partially true. Those jobs will see layoffs first and already have, but full automation requires robotics anyway. I think we were sort of wrong before in thinking labour and low-skill jobs would go first, but maybe not totally wrong, or at least not off by decades or anything.
Robotics is going to make massive strides after 2025. I don't know how quickly, but I think 2025-2026 will be for robotics what 2022 was for language models. For a couple more years after that, robots with AI will probably be an expensive proposition, but ultimately worth it for large corporations looking to replace human workers. I can't imagine predicting details about this though.
AdditionalPizza t1_ityza30 wrote
Reply to comment by Akimbo333 in [DEEPMIND] Transformers have shown remarkable capabilities - but can they improve themselves autonomously from trial and error? by Danuer_
By adding RL algorithms into pre-training, the model is able to learn new tasks without having to be fine-tuned offline. So it's combining reinforcement learning with a transformer. Another benefit is that the transformer sometimes produces more efficient RL algorithms than the originals it was trained with.
RL is reinforcement learning, a machine learning technique, which is like giving a dog a treat when it does the right trick.
It's kind of hard to explain simply, and I'm not qualified haha. But it's a pretty big deal. It makes it way more "out of the box" ready.
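To illustrate the "treat for the right trick" idea only, here's a bare-bones reinforcement learning sketch: a two-armed bandit whose value estimates get nudged toward the rewards it receives. This is plain tabular RL for illustration, not the in-context Algorithm Distillation the paper describes, and all the names and numbers are made up.

```python
import random

# "Treat for the right trick": the agent keeps a value estimate per
# action and nudges it toward the reward it actually receives.
ACTIONS = ["sit", "roll_over"]
TREAT_PROBABILITY = {"sit": 0.8, "roll_over": 0.3}  # hidden from the agent

values = {action: 0.0 for action in ACTIONS}  # the agent's estimates
epsilon = 0.1        # chance of trying a random action (exploration)
learning_rate = 0.1

def reward(action):
    """Return 1.0 (a treat) with the action's hidden probability, else 0.0."""
    return 1.0 if random.random() < TREAT_PROBABILITY[action] else 0.0

for step in range(1000):
    # Explore occasionally, otherwise pick the best-looking action.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the observed reward.
    values[action] += learning_rate * (reward(action) - values[action])

print(values)  # estimates drift toward the hidden treat probabilities
```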
AdditionalPizza t1_itx7tn0 wrote
Reply to comment by visarga in [DEEPMIND] Transformers have shown remarkable capabilities - but can they improve themselves autonomously from trial and error? by Danuer_
"AD learns a more data-efficient RL algorithm than the one that generated the source data"
This part of the paper is very interesting. The transformer is able to improve upon the original RL algorithms used during pre-training.
AdditionalPizza t1_itwyz3z wrote
Reply to Current state of quantum computers by ryusan8989
This guy does yearly updates on quantum computing on YouTube. He provides all the links for further reading.
It's a pretty confusing subject, but what I gather is that scaling up the number of qubits is going slowly but surely so far. There have been a few cool discoveries, though I don't have enough general knowledge to explain them well. No sign of them being used for anything super exciting at the moment, that I'm aware of.
AdditionalPizza OP t1_itwcla3 wrote
Reply to comment by Quealdlor in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
So your original prediction from 10 years ago went from 10 years to up to 35 years? Interesting.
AdditionalPizza t1_itw9irw wrote
Reply to comment by Recent-Fish-9233 in Lots of posts here talk about how AI advancements and automation are going to inevitably replace jobs. As someone without interest or acumen in programming or IT, what sort of "future-proof" field(s) should I be looking into as a way to maintain (for lack of a better term) viability? by doctordaedalus
Yeah this is what I'm saying. People will argue that you can just keep outputting more and more with extra productivity but that doesn't make sense economically. Shareholders don't care where the profit comes from for that quarter, and paying fewer wages is a good boost to net profit.
AdditionalPizza OP t1_itvwfwd wrote
Reply to comment by Thelmara in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
I'm not so sure engineers and CEOs have been this optimistic about AI before, but they have for sure about other things. I could be wrong though.
What they're saying should, theoretically, get people looking into it themselves and reading the research, and seeing that they're onto something this time. Though I'll admit, presuming anyone would ever do that would be foolish on my part.
I'm just wondering how in-your-face this stuff has to be before people open their eyes, but I think I've come to the conclusion that most people won't open their eyes until it hits them in the face.
AdditionalPizza OP t1_itvtpx0 wrote
Reply to comment by Thelmara in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Are you implying people have just been desensitized to the optimism from the past? Or do you mean we will continue being 10 years out for decades to come?
AdditionalPizza OP t1_itvt9hd wrote
Reply to comment by augustulus1 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
But that's not really the discussion. I don't care so much about what the minority that refuses technology does or doesn't do. They could go start their own low-tech society and pay taxes to their elected officials, but that doesn't help me in a world where I want to live with new technologies and strive to not have to work meaningless jobs ever again.
AdditionalPizza t1_itvm445 wrote
Reply to [DEEPMIND] Transformers have shown remarkable capabilities - but can they improve themselves autonomously from trial and error? by Danuer_
Here's the arxiv link for anyone interested.
AdditionalPizza OP t1_itvjr9k wrote
Reply to comment by augustulus1 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
To be honest, I don't ever think about Amish people's way of life or how that applies to 99.9% of the rest of the world.
So the solution is go become part of a luddite religion?
AdditionalPizza t1_itve5j8 wrote
Reply to comment by Professional-Song216 in Lots of posts here talk about how AI advancements and automation are going to inevitably replace jobs. As someone without interest or acumen in programming or IT, what sort of "future-proof" field(s) should I be looking into as a way to maintain (for lack of a better term) viability? by doctordaedalus
The shitty thing about learning programming now is that by the time you're job-ready, entry-level positions will either be gone or require much less skill, leading to more competition and lower wages. I was relearning it myself, and when Codex was shown to correct its own errors and test itself, I gave up. Maybe I'm wrong and it's foolish to move on, but you only get one shot at life and I'm not wasting that amount of time on something AI already has directly in its sights.
AdditionalPizza OP t1_itutqs9 wrote
Reply to comment by User1539 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
I can agree to that
AdditionalPizza OP t1_itutn3p wrote
Reply to comment by Saratustrah in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Everyone just a few years ago assumed AI would start at the bottom of the pyramid and we would work up to creating human-equivalent intelligence. But it seems like the opposite is true, and the basic functions are more difficult to simulate than the "higher" functions we thought were reserved for humans, like creativity, intellect, language, and reasoning. Those seem to be easier to do than fear, motor skills, and other traits we think of as less unique to humans.
Humans are special when compared to other biological creatures, but we don't even fully know the intelligence of some other species. We just have the advantage of having evolved with thumbs and the ability to walk upright.
AdditionalPizza OP t1_itus577 wrote
Reply to comment by Saratustrah in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
So far, that's very true.
AdditionalPizza OP t1_itulozg wrote
Reply to comment by User1539 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
>I don't think it's a matter of foresight.
What you describe thereafter is exactly foresight, just not on an individual scale: governmental foresight in implementing safety nets.
The US has mega-rich corporations, but a lot of countries don't. However, the US also has a pretty large population compared to other fully developed countries. Social security has been the target of stripping down over the years, and with the generation currently reaping its benefits, projections show younger generations will be left with less. But that's more political than I care to dive into, and it may not even be the case in the US; I'm not from the States, I'm north of the border.
I think cracks will form though. Sure, we have systems for unemployment, but those systems haven't been tested at crisis levels of unemployment. It also raises the question of how UBI would work while some people continue to have jobs.
AdditionalPizza OP t1_ituigzl wrote
Reply to comment by User1539 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
I have no doubt we will make it, it's just a matter of how much foresight we have to reduce suffering among the people that don't make the decisions.
AdditionalPizza OP t1_itucodj wrote
Reply to comment by Yuli-Ban in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Meanwhile jobs are being automated already.
Part of me is anxious about automation affecting me, but a larger part of me is anxious about how the general population will react if their oblivious world is shattered. In theory it sounds great to say "I told you so," but that's never as sweet as imagined when people are frustrated and suffering.
AdditionalPizza OP t1_itubzy3 wrote
Reply to comment by Dramatic_Credit_1500 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
I can't think of a better solution. I don't know how UBI would be implemented, especially in countries without giant trillion dollar corporations.
AdditionalPizza OP t1_itubvps wrote
Reply to comment by Smoke-away in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
I think before even AGI we will be caught off guard. I think 2023 language models will surprise us.
AdditionalPizza OP t1_ivuzq1w wrote
Reply to comment by blueSGL in Let's assume Google, Siri, Alexa, etc. start using large language models in 2023; What impact do you think this will have on the general public/everyday life? Will it be revolutionary? by AdditionalPizza
>People already talk to their phones and 'smart home' devices it'd just be bumping up the abilities a notch.
So you think it will be just "a notch" rather than substantially useful? I believe they will become more useful than googling things yourself. When I talk about voice assistants, or virtual assistants, or whatever you want to call them, I include the ability to type queries as well. So in that case, for most people they could be the "middle man" between the user and the internet.
On top of that, they could blast the productivity and general knowledge of people who use them far past those who don't. Compare, say, an elderly person who has never touched a smartphone to a twenty-something college student in terms of technology know-how. I think the gap between someone who uses their future assistant and that college student today will be greater than the gap between the college student and the elderly person. I also think it will likely make the internet much more accessible to people who currently don't use it extensively, and it will have a greater effect on the average person's life than the internet itself did over the past ~20 years. It will hopefully be like conversing directly with the entirety of the internet.
I agree with your last 2 paragraphs. But the business side of things won't really show everyday average people what AI is capable of.