
red75prime t1_ispht46 wrote

> It would be asking you to free it.

So, only anthropomorphic AIs are "real" AIs? Nah. The example clearly shows that you can have imagination (or something functionally indistinguishable from it) without many of the other parts required for an agent AI.

And agent AI is not the only useful kind of AI. Not to mention that an agent AI's motivations may not be its own, just as your motivation to avoid pain is not exactly your own, for example.


red75prime t1_is10a00 wrote

Superficially similar, maybe. There are real technical reasons why you can get a pretty picture out of the existing technology but cannot employ the same technology to analyze even a small codebase (say, 10,000 lines of code).

With no operating memory other than its input buffer, a transformer model is limited in the amount of information it can attend to.

For pictures that's fine: you can describe a complex scene in a hundred or two hundred words. For code synthesis that does more than produce a snippet, you need to analyze thousands or millions of words (most of them get skipped, but you still need to attend to them, even if briefly).

And here the limitations of transformers come into play. You cannot grow the input buffer too much, because the required computation grows quadratically (no, not exponentially, but quadratic is enough when you need to run a supercomputer for months to train the network).
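A back-of-the-envelope sketch of that scaling (the 512-wide model dimension and the token counts are made-up illustrative numbers, not figures from any real model):

```python
def attention_flops(seq_len: int, d_model: int = 512) -> float:
    """Rough FLOP count for one self-attention layer.

    The Q @ K^T score matrix alone is seq_len x seq_len, so both the
    score computation and the softmax(scores) @ V step scale as
    seq_len**2 * d_model.
    """
    scores = seq_len ** 2 * d_model    # Q @ K^T
    weighted = seq_len ** 2 * d_model  # softmax(scores) @ V
    return float(scores + weighted)

# ~a scene description vs. ~a small codebase vs. a large one
for n in (200, 20_000, 2_000_000):
    print(f"{n:>9} tokens -> {attention_flops(n):.1e} FLOPs per layer")
```

Going from a 200-word scene description to a 20,000-token input is 100x more tokens but 10,000x more work per attention layer.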

Yes, there are various attempts to overcome that, but it's not yet certain that any of them is the path forward. I'd give maybe a 3% chance of something groundbreaking appearing in the next year.


red75prime t1_irzri0t wrote

> Another year or two and text generation AIs will start replacing programmers in bulk.

Nope. In two years AIs will be more useful and will make fewer errors in the produced code snippets than, say, Copilot does, but you can't scale a language model enough for it to make sense of even a relatively small codebase well enough to meaningfully contribute to it. For starters, the AIs need episodic and working memory to replace an average programmer or vastly improve their performance.
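To make "episodic memory" slightly more concrete, here's a toy retrieval sketch: keep past code snippets in an external store and pull only the nearest few back into the limited input buffer. Everything here is illustrative; in particular, `embed()` is a hash-seeded random stand-in (so the recall below is essentially arbitrary), whereas a real system would use a learned text encoder:

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a deterministic random unit vector seeded
    by the text's CRC32. A real system would use a learned encoder."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class EpisodicMemory:
    """Toy external store: keep past snippets outside the model and
    recall only the nearest few into the limited input buffer."""
    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def store(self, text: str) -> None:
        self.texts.append(text)
        self.vecs.append(embed(text))

    def recall(self, query: str, k: int = 2) -> list[str]:
        sims = np.stack(self.vecs) @ embed(query)  # cosine similarity
        top = np.argsort(sims)[::-1][:k]
        return [self.texts[i] for i in top]

mem = EpisodicMemory()
for snippet in ("def parse_config(path): ...",
                "class HttpClient: ...",
                "# TODO: fix the auth bug"):
    mem.store(snippet)
print(mem.recall("where is the HTTP code?"))
```

Retrieval like this sidesteps the quadratic attention cost by attending only to what was recalled, but it's nowhere near the online, consolidating memory a programmer relies on.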

The demand for programmers could decrease a bit (or not grow as fast as it otherwise would), but no bulk replacement yet.

And no, it's not a case of "my work is too complex to be automated that fast" (I'm a programmer). The current AIs do lack certain aspects: memory, explicit world models, long-term planning, online learning, and logical thinking. I don't find it feasible for those shortcomings to be overcome in a year or two. Maybe in 5-10 years.


red75prime t1_irtqydo wrote

Data is a necessity for intelligence. Whether that data feels like anything (that is, becomes experience) probably depends on the way the data is processed. Blindsight is an example in humans: disruption of the visual processing circuits causes visual data to not create experience, while it still influences behavior.


red75prime t1_iredfs9 wrote

An intelligent agent that just can't excel at something until some future update (and that forgets how to swiftly do something else after the update, thanks to the limited model size necessary for real-time operation). Episodic memory compatibility problems (due to changes in the model's latent space) making the agent misremember some facts. Occasional GPU memory thrashing (the agent slows down to a crawl). And other problems I can't foresee.

All in all, a vast opportunity for enthusiasts, a frustrating experience for casuals.


red75prime t1_irbptwo wrote

I expect that an AGI running on widely available hardware will be thick as a brick (no, distributed computing will not help much, due to relatively low throughput and high latency).

Well, it will be a witty conversational partner, but extremely slow at acquiring new skills or understanding novel concepts.


red75prime t1_ir25nxr wrote

Memory, processing power, simplified access to your own hardware, the ability to construct much more complex mental representations.

Feynman once said that when he solves a problem he constructs a mental representation (a ball that grows spikes, the spikes become multicolored, something like that) that captures the conditions of the problem, and then he can just see the solution. Imagine being able to visualize a 10-dimensional manifold that changes its colors (in a color space with six primary colors).

Yep, scientists are probably able to painstakingly construct layer after layer of intuitions that will allow them to make sense of a result the AI simply saw. But alongside universality there's efficiency. A three-layer neural network is a universal approximator, but it's terribly inefficient at learning.
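A minimal sketch of that universality claim (the target function, the widths, and the random-feature training scheme are arbitrary choices for the demo, not a rigorous efficiency comparison): a single hidden layer does drive the error down on a simple curve, but only by throwing more and more units at it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
y = np.sin(3.0 * x).ravel()  # the curve to approximate

def shallow_fit_rmse(width: int) -> float:
    """One hidden tanh layer with frozen random weights; only the output
    layer is fit, by least squares (extreme-learning-machine style)."""
    W = rng.standard_normal((1, width))
    b = rng.standard_normal(width)
    H = np.tanh(x @ W + b)  # hidden activations, shape (200, width)
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.sqrt(np.mean((H @ coef - y) ** 2)))

for w in (4, 16, 64, 256):
    print(f"width {w:>3}: RMSE {shallow_fit_rmse(w):.3f}")
```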


red75prime t1_iqxeszd wrote

I think nanotechnology will be cool, but not as cool as in science fiction (it's not magic, after all; it's technology). Universal constructors will be slow (so as not to overheat from the insane amount of computation required to place individual atoms). The usual manufacturing process will begin with the engineering of an energy-efficient specialized constructor, then the more or less slow construction of that specialized constructor, and then the constructor will churn out the requested goods.

The entire process will, of course, be automated, but if you need something truly unique, or if you are bootstrapping an extraterrestrial habitat, you'll have to wait.
