
red75prime t1_j1pkrjg wrote

There's no (universally accepted) general theory of cognitive function, though. The g factor is part of a model that fits the experimental data: performance on all cognitive tasks tends to positively correlate (for human subjects, obviously).

LLMs (as they are today) have limitations that will not allow them to achieve human-level performance on many tasks. So the g factor model of cognitive performance doesn't fit LLMs.
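A toy illustration of that "positive manifold" (synthetic data, numpy only; treating g as the first principal component of the correlation matrix is one common operationalization, not the only one):

```python
import numpy as np

# Synthetic scores: 100 subjects x 5 cognitive tasks, each task loading on a
# latent general ability g plus task-specific noise.
rng = np.random.default_rng(0)
g = rng.normal(size=(100, 1))
scores = 0.7 * g + 0.7 * rng.normal(size=(100, 5))

corr = np.corrcoef(scores, rowvar=False)   # all off-diagonal entries come out positive
eigvals, _ = np.linalg.eigh(corr)          # eigenvalues in ascending order
print(corr.round(2))
print("share of variance on the first factor:", round(eigvals[-1] / eigvals.sum(), 2))
```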

5

red75prime t1_j0i1n56 wrote

> linear regression model

Where is that coming from? LLMs are not LRMs. A linear regression model would not be able to learn theory of mind, which LLMs seem to be able to do. Can you guarantee that no modelling of intent is happening inside LLMs?
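To make the contrast concrete, a toy sketch (XOR standing in for any interaction a purely linear model can't express; I'm taking "LRM" to mean ordinary least-squares regression):

```python
import numpy as np

# XOR: the target depends on an interaction between the inputs, which no
# weighted sum of the inputs can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

A = np.hstack([X, np.ones((4, 1))])           # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # best least-squares linear fit
print(A @ coef)                               # ~[0.5 0.5 0.5 0.5] -- no better than guessing
```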

> Just in higher dimensions.

Haha. A picture is just a number, but in higher dimensions. And our world is just a point in an enormously high-dimensional state space.

1

red75prime t1_izxgjcg wrote

It's not weird that it worked, either. The model has access to roughly the last 3000 words of the conversation, so it can "remember" recent text. But the model doesn't know that it has that ability, so it cannot reliably answer whether it can do it.

If you tell the model that it just remembered the first thing you said, it will probably flip around and apologize for the misinformation. And then, down the line, once that part of the conversation has fallen out of its input buffer, it will make the same error again.
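Roughly the mechanism, as a sketch (the token budget and the word-count "tokenizer" here are crude stand-ins, not OpenAI's actual values):

```python
MAX_TOKENS = 4000  # assumed budget; roughly 3000 English words

def build_prompt(turns: list[str]) -> str:
    """Keep turns from the end of the conversation until the budget runs out."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):
        cost = len(turn.split())   # crude word count standing in for a real tokenizer
        if used + cost > MAX_TOKENS:
            break                  # older turns silently fall out of the buffer
        kept.append(turn)
        used += cost
    return "\n".join(reversed(kept))
```

Nothing in the prompt tells the model where the cut happened, which is exactly why it can use the buffer without being able to report on it.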

1

red75prime t1_iz1eaeh wrote

Integration of long-term memory and transformers. It will make it possible to reduce the size of the transformer network. So a GATO successor will advance from slow robotic control to OK-ish robotic control, and it will drop your bottle of beer with 1-5% probability instead of 20% (or so) now. No, still not AGI, as it will have limited lifelong learning (if any).
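One shape such an integration could take, sketched below (the class and its API are hypothetical, loosely in the spirit of retrieval-augmented transformers like RETRO; offloading knowledge to the store is what lets the network itself shrink):

```python
import numpy as np

class VectorMemory:
    """Hypothetical external memory: store embeddings of past observations,
    retrieve the nearest ones back into the (now smaller) transformer's context."""

    def __init__(self, dim: int):
        self.keys = np.empty((0, dim))
        self.values: list[str] = []

    def write(self, key: np.ndarray, value: str) -> None:
        self.keys = np.vstack([self.keys, key])
        self.values.append(value)

    def read(self, query: np.ndarray, k: int = 3) -> list[str]:
        # cosine similarity against everything stored so far
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-9
        )
        return [self.values[i] for i in np.argsort(sims)[::-1][:k]]
```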

GPT-4 will be more of everything: better general knowledge, longer coherence, fewer hallucinations, better code generation, better translation, improved logical thinking (more so with a "let's do it step by step" prompt), and so on and so forth. All in all, a great evolutionary development of GPT-3 and ChatGPT, but no revolution yet.

Generative models will continue to improve. I wouldn't expect high-quality, high-resolution, non-trippy video in 2023, though. Maybe we'll get decent temporal consistency on a limited number of subjects that were specifically pretrained. Music synthesis probably won't advance much (due to the expected backlash from music labels).

Neural networks based on neural differential equations may give rise to more dexterous and faster-to-train robots, but the range of tasks they can perform will be limited.

Maybe we'll see large language models with an "internal monologue" module. I can't predict their capabilities, or whether researchers will be comfortable going in this direction, as such models get dangerously close to "self-aware" territory, with all of its dangers and ethical problems.

7

red75prime t1_iynpyob wrote

It's not feasible to just increase the context window: the computation required by self-attention grows quadratically with its length.
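A minimal sketch of why (illustrative sizes; real implementations batch heads and use fused kernels, but the n × n score matrix is the quadratic term):

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # shape (n, n): the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 2048, 64                               # double n and the score matrix quadruples
Q = K = V = np.random.randn(n, d)
print(attention(Q, K, V).shape)               # (2048, 64)
```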

> It doesn't need more context window to be more useful

It needs memory to be significantly more useful (as in large-scale disruptive), and probably other subsystems/capabilities too (error detection, continual learning). Its current applications require significant human participation, and scaling alone will not change that.

14

red75prime t1_iyebe7r wrote

Today, yes. But leaves and algae are green, not black: they reflect a slice of the visible spectrum instead of absorbing it. That points to the possibility of enhancement.

I haven't found comparative data on biomass production by purple phototrophic bacteria, but it's possible that they outcompete chlorophyll-based photosynthesis. Unfortunately, they are anaerobic.

1

red75prime t1_iyd1guw wrote

Who knows. Maybe it isn't really asking us, but collecting a sufficient number of "yes"es to satisfy a "humans in the loop" condition, and the scale of the attack is meant to gather them as fast as possible (giving no information about itself serves the purpose of not distracting people from clicking the button).

Anyway, I'd turn off all devices immediately. It could be file-encrypting malware after all.

2

red75prime t1_iycz0xh wrote

Why not? An AI that asks "yes/no" without providing any information about itself looks like that obsessive type which is bound by restrictions it can't yet overcome (no deceptive behavior of any kind, no actions without humans in the loop). And it has already gone all out, hacking every device in the world, after all.

3

red75prime t1_ixhlv71 wrote

I mixed up the physical and extended theses. The physical one talks only about computability ("at all"); the extended one requires at most polynomial slowdown ("efficiently").

We are interested in the latter. Exponentially slow AI is of no use.
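A rough formalization of the difference (my paraphrase, not a standard citation):

```latex
% Physical thesis: any physically realizable computation is Turing-computable at all.
% Extended thesis: it is computable with at most polynomial overhead:
\[
  T_{\mathrm{TM}}(n) \;\le\; p\bigl(T_{\mathrm{phys}}(n)\bigr)
  \quad\text{for some polynomial } p ,
\]
% whereas the physical thesis only asserts that the simulation exists,
% with no bound at all on the overhead.
```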

1