red75prime
red75prime t1_j189y4g wrote
Reply to comment by mattstorm360 in Perseverance sample tube drop by coffeesam
How are they going to find the samples? There's no GPS on Mars. Right?
NASA's MSR page (https://mars.nasa.gov/msr/#Overview) is scarce on details. Landmark-based navigation, I guess?
red75prime t1_j1899a0 wrote
Reply to comment by yaosio in [R] Nonparametric Masked Language Modeling - MetaAi 2022 - NPM - 500x fewer parameters than GPT-3 while outperforming it on zero-shot tasks by Singularian2501
GPT-3: Sure, I can tell you the power output of the Sun. It would be 3.8 × 10^26 W, or 3.234 kW. I'm glad to help.
red75prime t1_j180jmz wrote
Reply to comment by Zilfer-Zurfer in Perseverance sample tube drop by coffeesam
Hundreds of tons of meteorites hit Mars: OK, it's natural. Nine tons of human-made objects on Mars: OMG, we ruin everything we touch!
red75prime t1_j0vpb7l wrote
Reply to comment by Ace_Snowlight in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
They haven't provided any information on their online learning method. If it utilizes transformer in-context learning (the simplest thing you can do to boost performance), the results will not be especially spectacular or revolutionary. We'll see.
red75prime t1_j0v9jzt wrote
Reply to comment by Ace_Snowlight in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Given the current state of LLMs, I expect it to fail on 10-30% of requests.
red75prime t1_j0iay49 wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
Good night. Happy multidimensional transformations that your brain will perform in sleep mode.
red75prime t1_j0i966c wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
"Just a" seems very misplaced when we are talking about not-linear transformations in million-dimensional spaces. Like arguing that an asteroid is just a big rock.
red75prime t1_j0i1n56 wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
> linear regression model
Where is that coming from? LLMs are not LRMs. An LRM would not be able to learn theory of mind, which LLMs seem to be able to do. Can you guarantee that no modelling of intent is happening inside LLMs?
> Just in higher dimensions.
Haha. A picture is just a number, but in higher dimensions. And our world is just a point in enormously high-dimensional state space.
red75prime t1_j015ur9 wrote
Reply to [D] Why are ChatGPT's initial responses so unrepresentative of the distribution of possibilities that its training data surely offers? by Osemwaro
Looks like the network mimics the representativeness heuristic (skewed by anti-bias bias).
red75prime t1_izxgjcg wrote
Reply to comment by Acceptable-Cress-374 in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
It's not weird that it worked, either. The model has access to roughly the last 3,000 words of the conversation, so it can "remember" recent text. But the model doesn't know that it has that ability, so it can't reliably answer whether it can do it.
If you tell the model that it just remembered the first thing you said, it will probably flip its position and apologize for the misinformation. And then, down the line, when that part of the conversation has fallen out of its input buffer, it will make the same error again.
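Roughly, the mechanism looks like the toy sketch below (Python; the 3,000-word budget and word-based counting are placeholder assumptions for illustration, not OpenAI's actual implementation):

```python
# Toy sliding-window context builder (illustrative only; not OpenAI's code).
# Anything older than the budget never reaches the model at all.

MAX_CONTEXT_WORDS = 3000  # assumed rough budget, counted in words for simplicity

def build_prompt(history: list[str]) -> str:
    """Keep only the most recent turns that fit into the context budget."""
    kept: list[str] = []
    budget = MAX_CONTEXT_WORDS
    for turn in reversed(history):      # newest to oldest
        cost = len(turn.split())        # crude word count stands in for tokens
        if cost > budget:
            break                       # everything older is silently dropped
        kept.append(turn)
        budget -= cost
    return "\n".join(reversed(kept))    # restore chronological order

# Once the conversation outgrows the budget, the first message is gone:
history = ["User: my name is Alice"] + [f"User: filler message {i}" for i in range(2000)]
print("Alice" in build_prompt(history))  # False
```

The point is that text outside the window never reaches the model, and the model has no way to introspect that it was dropped.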
red75prime t1_izxf1q2 wrote
Reply to comment by Acceptable-Cress-374 in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
> This is weird.
The model doesn't know what it can and cannot do, so it bullshits its way out. It's not that weird.
red75prime t1_iz3x72w wrote
Reply to comment by red75prime in What are your predictions for 2023? How did your predictions for 2022 turn out? by Foundation12a
RemindMe! 1 year "Let's get embarrassed"
red75prime t1_iz1eaeh wrote
Reply to What are your predictions for 2023? How did your predictions for 2022 turn out? by Foundation12a
Integration of long-term memory and transformers. It will allow reducing the size of the transformer network. So Gato's successor will advance from slow robotic control to OK-ish robotic control, and it will drop your bottle of beer with 1-5% probability instead of 20% (or so) now. No, still not AGI, as it will have limited lifelong learning (if any).
GPT-4 will be more of everything: better general knowledge, longer coherence, fewer hallucinations, better code generation, better translation, improved logical reasoning (more so with a "let's do it step by step" prompt), and so on and so forth. All in all, a great evolutionary development of GPT-3 and ChatGPT, but no revolution yet.
Generative models will continue to improve. I wouldn't expect high-quality, high-resolution, non-trippy video in 2023, though. Maybe we'll get decent temporal consistency on a limited number of specifically pretrained subjects. Music synthesis probably won't advance much (due to expected backlash from music labels).
Neural networks based on neural differential equations may give rise to more dexterous and faster to train robots, but the range of tasks they can perform will be limited.
Maybe we'll see large language models with an "internal monologue" module. I can't predict their capabilities, or whether researchers will be comfortable going in this direction, as it gets dangerously close to "self-aware" territory with all of its dangers and ethical problems.
red75prime t1_iynpyob wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
It's not feasible to increase the context window much, due to the quadratic growth of required computation (see the rough sketch at the end of this comment).
> It doesn't need more context window to be more useful
It needs memory to be significantly more useful (as in large-scale disruptive) and, probably, other subsystems/capabilities (error detection, continual learning). Its current applications require significant human participation and scaling alone will not change that.
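To make the quadratic-growth point concrete, here is a back-of-the-envelope sketch (Python; the context lengths are illustrative and not tied to any particular model):

```python
# Vanilla self-attention scores every token against every other token,
# so the score matrix alone has n * n entries per head per layer.
# Doubling the context therefore roughly quadruples the work.

def attention_score_entries(context_len: int) -> int:
    return context_len * context_len  # one score per (query, key) pair

for n in (2_000, 4_000, 8_000, 16_000):
    print(f"context {n:,}: {attention_score_entries(n):,} score entries")
# 4,000,000 -> 16,000,000 -> 64,000,000 -> 256,000,000: 2x the context, 4x the work
```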
red75prime t1_iynlrrc wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
Make sure that the prompt is 2000-3000 words away from the question.
red75prime t1_iynkzax wrote
No. ChatGPT didn't show anything unexpected. Memory (working and episodic) still isn't there.
red75prime t1_iyep4zy wrote
Reply to comment by cy13erpunk in If you would get a window open up on all of your electronic devices simply displaying "Let AI take control of the planet? - Yes/No" How would you react? by HumanSeeing
Of course they can't compete after the Great Oxygenation Event. It doesn't mean their chemistry can't be used as inspiration for enhanced across-the-spectrum photosynthesis.
red75prime t1_iyebe7r wrote
Reply to comment by cy13erpunk in If you would get a window open up on all of your electronic devices simply displaying "Let AI take control of the planet? - Yes/No" How would you react? by HumanSeeing
Today, yes. But leaves and algae are green, not black. It points to a possibility of enhancement.
I haven't found comparative data on biomass production by purple phototrophic bacteria, but they may well outcompete chlorophyll-based photosynthesis. Unfortunately, they are anaerobic.
red75prime t1_iyd1guw wrote
Reply to comment by Heizard in If you would get a window open up on all of your electronic devices simply displaying "Let AI take control of the planet? - Yes/No" How would you react? by HumanSeeing
Who knows. Maybe it isn't asking us but collecting a sufficient number of "yes"es to satisfy a "humans in the loop" condition, and the scale of the attack is meant to gather them as fast as possible (providing no information about itself serves the purpose of not distracting people from clicking the button).
Anyway, I'd turn off all devices immediately. It could be file-encrypting malware after all.
red75prime t1_iycz0xh wrote
Reply to comment by Heizard in If you would get a window open up on all of your electronic devices simply displaying "Let AI take control of the planet? - Yes/No" How would you react? by HumanSeeing
Why not? An AI that asks "yes/no" without providing any information about itself looks like the obsessive type that is bound by restrictions it can't yet overcome (no deceptive behavior of any kind, no actions without humans in the loop). And it has already gone all out hacking every device in the world, after all.
red75prime t1_iyctktv wrote
Reply to comment by Heizard in If you would get a window open up on all of your electronic devices simply displaying "Let AI take control of the planet? - Yes/No" How would you react? by HumanSeeing
Plot twist: AI removes all inefficient solar energy consumers and covers Earth's surface with optimal solar panels/food synthesizers.
red75prime t1_ixhlv71 wrote
Reply to comment by Puzzleheaded_Bass673 in Proto-AGI and AGI. by SoulGuardian55
I mixed up the physical and extended Church-Turing theses. The physical one talks only about computability ("at all"); the extended one requires at most polynomial slowdown ("efficiently").
We are interested in the latter: exponentially slow AI is of no use.
red75prime t1_ixhizq8 wrote
Reply to comment by AbeWasHereAgain in what does this sub think of Elon Musk by [deleted]
Ah, a practical-joke demonstration of the paradox of tolerance. Quite irresponsible, I admit.
red75prime t1_ixgyo8e wrote
Reply to comment by AbeWasHereAgain in what does this sub think of Elon Musk by [deleted]
> that just destroyed a major piece of American infrastructure on a whim
What? Let me check. No, Twitter still works. RemindMe! 12 months
red75prime t1_j1pkrjg wrote
Reply to comment by supernerd321 in GPT-3.5 IQ testing using Raven’s Progressive Matrices by adt
There's no (universally accepted) general theory of cognitive function, though. The g factor is part of a model that fits experimental data: performance on all cognitive tasks tends to correlate positively (for human subjects, obviously).
LLMs (as they are today) have limitations that will not allow them to reach human-level performance on many tasks. So the g-factor model of cognitive performance doesn't fit LLMs.