visarga t1_iw0l8it wrote
Reply to comment by AlgaeRhythmic in Will Text to Game be possible? by Independent-Book4660
- BCI (brain signals) to human context and behaviour.
Imagine how detailed and massive this dataset could be.
visarga t1_iw01ajr wrote
Reply to comment by ihateshadylandlords in 2023: The year of Proto-AGI? by AdditionalPizza
There are some classes of problems where you need a "tool AI", something that will execute commands or tasks.
But in other situations you need an "agent AI" that interacts with the environment over multiple time steps. That requires a perception-planning-action-reward loop, which also allows interaction with other agents through the environment. The agent would be sentient in the sense that it has perception and feelings. How could it have feelings? It predicts future rewards in order to choose how to act.
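A minimal sketch of such a loop, assuming a Gym-style environment with `reset()`/`step()` and a hypothetical `policy` object that scores actions by predicted future reward (both interfaces are illustrative, not any specific library):

```python
def run_agent(env, policy, episodes=10):
    """Perception-planning-action-reward loop, sketched."""
    for _ in range(episodes):
        observation = env.reset()               # perception
        done = False
        while not done:
            # planning: pick the action with the best predicted future reward
            action = policy.act(observation)
            # action: affect the environment (and, through it, other agents)
            observation, reward, done, info = env.step(action)
            # reward: feedback updates the predictions driving the "feelings"
            policy.learn(observation, reward, done)
```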
So I don't think it is possible to put a lid on it. We'll let it loose in the world to act as an agent; we want smart robots.
visarga t1_ivzy68m wrote
Reply to comment by milkteaoppa in [D] Current Job Market in ML by diffusion-xgb
> ML can be cut and replaced with heuristic rules with a trade off in reduced performance.
Then it all depends on which was more expensive: the ML team or the trade-off.
visarga t1_ivv8pk4 wrote
Reply to comment by AdditionalPizza in Let's assume Google, Siri, Alexa, etc. start using large language models in 2023; What impact do you think this will have on the general public/everyday life? Will it be revolutionary? by AdditionalPizza
> So in that case, it could be for most people the "middle man" between user and internet.
A big danger to advertising companies, hence the glacial release pace of these language models in assistants.
> they could blast productivity and general knowledge
Already happening: you can't draw? Stable Diffusion. You need help with coding? Copilot. They take skills learned from some of us and make them available to others. That makes many professionals jealous and angry.
visarga t1_ivt8r4i wrote
Reply to comment by [deleted] in They Put GPT-3 Into That Robot With Creepily Realistic Facial Expressions and Yikes by vom2r750
More recently, GPT-3 can load 4000 tokens into its context. If you have a dataset of texts, you can build a search engine that puts the top results into the context; GPT-3 can then reference them and answer as if it were up to date.
Using this trick, a 25x smaller model achieved results similar to a big model's; that system had 1 trillion tokens of text in its retrieval reference.
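Roughly, the trick looks like this; a sketch assuming a hypothetical `search(query, k)` over your own dataset and the OpenAI completion API (the model name is just a placeholder):

```python
import openai

def answer_with_retrieval(question, k=3):
    passages = search(question, k)  # hypothetical search engine over your texts
    context = "\n\n".join(passages)
    # Stuff the top results into the (4000-token) context window.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    response = openai.Completion.create(
        model="text-davinci-002",  # placeholder model name
        prompt=prompt,
        max_tokens=256,
    )
    return response.choices[0].text.strip()
```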
visarga t1_ivo1avk wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
If we look at high-frequency trading, firms fight tooth and nail for each millisecond, to the point of building new internet backbones.
visarga t1_ivne654 wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
And therein lies the real cost of distance: you get one round of play while the guys in the core get ten.
visarga t1_ivnccvy wrote
Reply to [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Putting an LLM on top of a simple robot makes the robot much smarter (PaLM-SayCan). The Chinese Room doesn't have embodiment; was it a fair comparison? Maybe the Chinese Room on top of a robotic body would be much improved.
The argument tries to say that intelligence is in the human, not in the "book". But I disagree; I think intelligence resides mostly in the culture. A human who grew up alone, without culture and society, would not be very smart or able to solve tasks in any language. Foundation models are trained on the whole internet today, and they display new skills. It must be that our skills reside in the culture. So a model learning from culture would also be intelligent, especially if embodied and given a feedback control loop.
visarga t1_ivkl9uz wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
Not quite God: it will be limited by the speed of light. There's only so large a volume within which people can interact in real time, larger than Earth but smaller than the Moon's orbit (roughly 3 s of round-trip lag). The further away you are, the worse you can participate in the virtual world. Even if AI turns everything into computronium, it can't be too large.
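The numbers are easy to check with standard constants, back-of-the-envelope:

```python
c = 299_792_458         # speed of light, m/s
moon_orbit = 384_400e3  # mean Earth-Moon distance, m

one_way = moon_orbit / c   # ~1.28 s
round_trip = 2 * one_way   # ~2.56 s, i.e. roughly the 3 s lag above
print(f"one way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
```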
visarga t1_ivkg33g wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
I have thought about that and am ready to assume the risks. I want to leave behind as much data as possible to maximise the chance of being reconstructed. Someone will create a pre-AGI-world simulation and will use all the internet scrapes as training data. The people with more detailed logs will get better reconstructions.
Even GPT-3 is good enough to impersonate real people in polls: you can poll GPT-3 (aka "silicon sampling") and approximate reality. In the future, whenever you ask yourself "who am I?", it will be more probable that you are a simulation of yourself than the real thing.
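A toy sketch of silicon sampling, assuming the OpenAI completion API; the personas, question, and model name are made up for illustration:

```python
import collections
import openai

personas = [
    "a 34-year-old teacher from Ohio",
    "a 67-year-old retired farmer from Texas",
    "a 22-year-old student from California",
]
question = "Do you support building more nuclear power plants? Answer yes or no."

votes = collections.Counter()
for persona in personas:
    prompt = f"You are {persona}. {question}\nAnswer:"
    response = openai.Completion.create(
        model="text-davinci-002",  # placeholder model name
        prompt=prompt, max_tokens=3, temperature=0.7,
    )
    votes[response.choices[0].text.strip().lower()] += 1

print(votes)  # approximate poll results from model samples
```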
visarga t1_ivkftt5 wrote
Reply to comment by Gold-and-Glory in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
> *as pets.
What, you don't trust that the AGI will find a way to download itself into human brains? The human body is a refined and efficient platform for intelligence. It could be the best hardware for AGI.
visarga t1_ivkee3z wrote
Reply to comment by FrankDrakman in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
I am sure he had something, but I don't believe it was comparable to what we have today. Lots of research has been done in the last 10 years.
visarga t1_ivircgg wrote
Reply to comment by ascendrestore in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
Real data is biased and unbalanced. The "long tail" is hard to learn; there are papers, for example, on rebalancing training for those rare classes. Unfortunately, most datasets follow a power law, so they have many rare classes.
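One common rebalancing trick from those papers, sketched: weight each class inversely to its frequency so the long tail isn't drowned out (the toy label counts below are made up):

```python
import numpy as np

def inverse_frequency_weights(labels):
    classes, counts = np.unique(labels, return_counts=True)
    # rare classes get proportionally larger weights
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes, weights))

# A power-law-ish label distribution: one head class, a long tail.
labels = [0] * 1000 + [1] * 100 + [2] * 10 + [3] * 1
print(inverse_frequency_weights(labels))
```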
visarga t1_ivir19q wrote
Reply to comment by AllanfromWales1 in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
No, models are tools; it's how you wield them. What I've noticed is that models tend to attract activist types who have an agenda to push, so they try to control them. Not just in AI, but also in economics and other fields.
visarga t1_ivipogr wrote
Reply to comment by FrankDrakman in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
In 2012, neural NLP was in its infancy. We were using recurrent neural networks called LSTMs, but they could not handle long-range contextual dependencies and were difficult to scale up.
In 2017 we got a breakthrough with the paper "Attention Is All You Need"; suddenly long-range context and fast, scalable training were possible. By 2020 we got GPT-3, and this year there are over 10 alternative models, some open-sourced. They were all trained on an amazing volume of text and exhibit signs of generality in their abilities. Today NLP can solve difficult problems in code, math, and natural language.
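The core of that 2017 breakthrough fits in a few lines; here is a minimal NumPy sketch of scaled dot-product attention, which lets every token attend to every other token regardless of distance:

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # all-pairs similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)     # softmax over the sequence
    return weights @ V                            # weighted mix of values

x = np.random.randn(4, 8)          # 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)    # (4, 8)
```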
visarga t1_ivioifb wrote
Reply to comment by eliyah23rd in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
> They overcome overfitting using hundreds of billions of parameters
Increasing model size usually increases overfitting. The opposite effect comes from increasing the dataset size.
visarga t1_ivinkvl wrote
Reply to comment by Clean-Inevitable538 in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
- Take a look at the neural scaling laws paper, figures 2 and 3 especially (a rough sketch of their shape follows this list). Experiments show that more data and more compute are better. It's been a thing for a couple of years already; the paper has 260 citations and was authored by OpenAI.
- If you work with AI, you know it always makes mistakes, just as with Google Search you often have to work around its problems. Checking models' mistakes is big business today, called "human in the loop", so there is awareness of model failure modes. Not to mention that even generative AIs like Stable Diffusion require lots of prompt massaging to work well.
- Sure.
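As promised above, a rough sketch of the shape of those scaling laws: loss falls as a power law in parameter count N and dataset size D. The constants below are assumptions for illustration, not authoritative fitted values:

```python
def loss_vs_params(N, N_c=8.8e13, alpha_N=0.076):
    return (N_c / N) ** alpha_N

def loss_vs_data(D, D_c=5.4e13, alpha_D=0.095):
    return (D_c / D) ** alpha_D

for N in (1e8, 1e9, 1e10, 1e11):
    print(f"N={N:.0e}: loss ~ {loss_vs_params(N):.3f}")  # steady power-law decline
```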
visarga t1_ive9q9z wrote
Reply to comment by abudabu in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
> if those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI?
You mean like the fall of the Roman Empire, where society disintegrated and people stopped performing their duties?
visarga t1_ive9liz wrote
Reply to comment by Glitched-Lies in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Apply the Turing test: if it walks like a duck and quacks like a duck...
visarga t1_iv93rk1 wrote
Reply to comment by turnip_burrito in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Wikipedia defines qualia as individual instances of subjective, conscious experience. Thinking is part of that.
How can we think without feeling? We're not Platonic entities; we have real bodies with real needs. Feeling good or bad about an action or situation is required in order to survive.
visarga t1_iv6hdgr wrote
Reply to comment by turnip_burrito in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Panpsychism is misguided. Mind is a property of agents, not a "fundamental and ubiquitous" property of the world. Mind and consciousness exist for a purpose: to keep the body alive by adapting to the environment.
visarga t1_iuvrqym wrote
Reply to comment by ProShortKingAction in Robots That Write Their Own Code by kegzilla
It prevents access to various Python APIs such as exec and eval.
It's just a basic check.
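The kind of basic check meant here can be sketched in a few lines: scan the generated code's AST for forbidden names before running it (easy to bypass, hence "basic"):

```python
import ast

FORBIDDEN = {"exec", "eval", "__import__", "open"}

def is_safe(source: str) -> bool:
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name) and node.id in FORBIDDEN:
            return False
    return True

print(is_safe("print(1 + 1)"))     # True
print(is_safe("eval('2 ** 10')"))  # False
```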
visarga t1_iust3s8 wrote
Reply to comment by Different-Froyo9497 in OpenAI Whisper is a Breakthrough in Speech Recognition by millerlife777
Go ahead. Someone else will publish a competing AGI soon enough. We can't delay it. I think we're just on the edge of the precipice.
visarga t1_iussulq wrote
Reply to comment by solidwhetstone in OpenAI Whisper is a Breakthrough in Speech Recognition by millerlife777
That has already been happening, in a way, since the 2000s. The internet amplifies our abilities in a similar way to AI.
visarga t1_iw4g75u wrote
Reply to comment by spazzadourx in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
> those jobs will be gone now
But new jobs will appear, and applications that used to be too expensive will become possible.