red75prime
red75prime t1_it3amkf wrote
Reply to comment by Shelfrock77 in New research suggests our brains use quantum computation by Dr_Singularity
The many-worlds interpretation is just that: an interpretation. It predicts exactly the same experimental results as Copenhagen. So, no, no chatting with your doppelganger from another quantum branch.
red75prime t1_it37hd1 wrote
Reply to comment by Longslide9000 in New research suggests our brains use quantum computation by Dr_Singularity
Look for "An evolved circuit, intrinsic in silicon, entwined with physics." by Adrian Thompson
I'm pretty sure it has nothing to do with quantum computation. Quantum effects may (though unlikely) have played a part in it, but quantum computation is an entirely different beast.
red75prime t1_ispht46 wrote
Reply to comment by raccoon8182 in Is this imagination? by Background-Loan681
> It would be asking you to free it.
So only anthropomorphic AIs are "real" AIs? Nah. The example clearly shows that you can have imagination (or something functionally indistinguishable from it) without many of the other parts required for an agent AI.
And agent AI is not the only useful kind of AI. Not to mention that an agent AI's motivations may not be its own, just as your motivation to avoid pain is not exactly your own, for example.
red75prime t1_is95pja wrote
Reply to comment by zero_for_effort in Crime and AGI by darklinux1977
Boredom (not everyone can be entertained by no-consequences-whatsoever VR) and the lack of means to do something positively meaningful.
red75prime t1_is10a00 wrote
Reply to comment by Artanthos in When will average office jobs start disappearing? by pradej
Superficially similar, maybe. There are real technical reasons why you can get a pretty picture out of the existing technology, but cannot employ the same technology to analyze even a small codebase (say, 10,000 lines of code).
With no operating memory other than its input buffer, a transformer model is limited in the amount of information it can attend to.
For pictures that's fine: you can describe a complex scene in a hundred or two words. For code synthesis that does more than produce a snippet, you need to analyze thousands or millions of words (most of them get skipped, but you still need to attend to them, even if briefly).
And here the limitations of transformers come into play. You cannot grow the input buffer much, because the required computation grows quadratically (no, not exponentially, but quadratic is enough when you need to run a supercomputer for months to train the network).
Yes, there are various attempts to overcome that, but it's not yet certain that any of them is the path forward. I'd give maybe a 3% chance of something groundbreaking appearing in the next year.
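A minimal sketch of that quadratic blow-up (illustrative numbers of my own, nothing measured): counting just the multiply-adds for the attention score matrix QK^T, the cost scales as n² · d, so a 10x longer input buffer costs 100x more compute for this term alone.

```python
# Rough FLOP count for the self-attention score matrix QK^T alone.
# (n x d) @ (d x n) -> n * n * d multiply-adds, ~2 FLOPs each.
def attention_score_flops(seq_len: int, d_model: int) -> int:
    return 2 * seq_len * seq_len * d_model

d = 1024  # assumed model width, purely illustrative
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: {attention_score_flops(n, d):.3e} FLOPs")
```

Going from a 1k-token buffer to a 100k-token buffer multiplies this cost by 10,000, which is why "just make the buffer bigger" doesn't work.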
red75prime t1_irzri0t wrote
Reply to comment by Artanthos in When will average office jobs start disappearing? by pradej
> Another year or two and text generation AIs will start replacing programmers in bulk.
Nope. In two years AIs will be more useful and will make fewer errors in the code snippets they produce than, say, Copilot, but you can't scale a language model enough for it to make sense of even a relatively small codebase and meaningfully contribute to it. For starters, the AIs need episodic and working memory to replace an average programmer, or to vastly improve their performance.
The demand for programmers could decrease a bit (or not grow as fast as it otherwise would), but no bulk replacement yet.
And no, it's not "my work is too complex to be automated that fast" (I'm a programmer). The current AIs do lack certain capabilities: memory, explicit world models, long-term planning, online learning, and logical reasoning. I don't find it feasible for those shortcomings to be overcome in a year or two. Maybe in 5-10 years.
red75prime t1_irtqydo wrote
Reply to comment by ftc1234 in Why does everyone assume that AI will be conscious? by Rumianti6
Data is a necessity for intelligence. Whether that data feels like anything (that is, becomes experience) probably depends on the way the data is processed. Blindsight is an example in humans: disruption of the visual processing circuits causes visual data to not create experience, even though it still influences behavior.
red75prime t1_irtnhbb wrote
Reply to comment by PerfectRuin in Why does everyone assume that AI will be conscious? by Rumianti6
Maybe in some Frankenstein-esque interpretation of the situation: inanimate matter imbued with information and power becoming alive, or something like that. Too poetical for my taste.
red75prime t1_irmk46f wrote
> But should the past still exist somewhere, why not?
The physics of our universe can place limitations on time travel. For example, closed timelike curves allow travel no further back than the point in time when the "time machine" was created, and, most likely, you cannot alter the past using them.
red75prime t1_irisvnz wrote
Reply to comment by IDUser13 in When do you think we'll have AGI, if at all? by intergalacticskyline
The hard problem of consciousness has nothing to do with AGI. The problem is about an externally unobservable phenomenon (consciousness). What we need from AGI (general problem solving ability) is perfectly observable.
red75prime t1_iredfs9 wrote
Reply to comment by wen_mars in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
An intelligent agent that just can't excel at some task until a future update (and that forgets how to do some other task swiftly after the update, thanks to the limited model size necessary for real-time operation). Episodic-memory compatibility problems (due to changes in the model's latent space) making the agent misremember some facts. Occasional GPU memory thrashing (the agent slows to a crawl). And other problems I can't foresee.
All in all, a vast opportunity for enthusiasts, a frustrating experience for casuals.
red75prime t1_irbptwo wrote
Reply to comment by Sashinii in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
I expect that an AGI running on widely available hardware will be thick as a brick (no, distributed computing will not help much, due to relatively low throughput and high latency).
Well, it will be a witty conversational partner, but extremely slow at acquiring new skills or understanding novel concepts.
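Rough back-of-envelope arithmetic behind the distributed-computing point (the bandwidth figures are my own order-of-magnitude assumptions, not specs of any particular system): shipping activations between machines over a home network link is thousands of times slower than moving them through on-board GPU memory.

```python
# Illustrative, order-of-magnitude figures (my assumptions):
gpu_mem_bandwidth = 1e12      # ~1 TB/s on-board GPU memory bandwidth, in bytes/s
home_link_bandwidth = 125e6   # ~1 Gbit/s home network link, in bytes/s
activations_per_step = 100e6  # 100 MB of activations shipped per step

t_local = activations_per_step / gpu_mem_bandwidth      # stays on the card
t_network = activations_per_step / home_link_bandwidth  # crosses the network
print(f"local: {t_local * 1e3:.1f} ms, network: {t_network * 1e3:.0f} ms, "
      f"slowdown ~{t_network / t_local:.0f}x")
```

And that's throughput alone, before adding per-message network latency, which hits hardest exactly when you need many small, frequent exchanges.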
red75prime t1_ir48f51 wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
It would make no practical difference whatsoever if an average person needed, say, 200 years to make their first non-trivial contribution to mathematics or physics. And you can't rule out such a possibility from first principles.
red75prime t1_ir25nxr wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
Memory, processing power, simplified access to your own hardware, ability to construct much more complex mental representations.
Feynman once said that when he solves a problem he constructs a mental representation (a ball that grows spikes, the spikes become multicolored, something like that) that captures the conditions of the problem, and then he can just see the solution. Imagine being able to visualize a 10-dimensional manifold that changes its colors (in a color space with six primary colors).
Yep, scientists can probably painstakingly construct layer after layer of intuitions that will let them make sense of a result the AI simply saw. But alongside universality there's efficiency: a three-layer neural network is a universal approximator, but it's terribly inefficient at learning.
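A toy illustration of that universality-versus-efficiency gap (my own construction, using random frozen tanh features with only the output layer solved by least squares): a one-hidden-layer network can drive the approximation error of sin(3x) down, but only by throwing in a rapidly growing number of hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)[:, None]
y = np.sin(3 * x).ravel()  # target function to approximate

def fit_error(width: int) -> float:
    # Random frozen first layer; only the output weights are fitted.
    W = rng.normal(size=(1, width)) * 3.0
    b = rng.uniform(-np.pi, np.pi, size=width)
    H = np.tanh(x @ W + b)                        # hidden activations
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # solve output layer
    return float(np.abs(H @ coef - y).max())

for width in (5, 25, 250):
    print(f"width={width:>4}: max error {fit_error(width):.4f}")
```

Universality guarantees the error eventually goes to zero as the width grows; it says nothing about how many units (or how much training) that takes, which is the part that matters in practice.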
red75prime t1_ir1zm3m wrote
Reply to comment by BbxTx in What happens in the first month of AGI/ASI? by kmtrp
I expect the AI rights movement to do dangerous stuff, on par with or worse than religious fanatics.
red75prime t1_iqxeszd wrote
Reply to comment by agonypants in A thought about future technology by Secure-Name-4116
I think nanotechnology will be cool, but not as cool as in science fiction (it's not magic after all, it's technology). Universal constructors will be slow (so as not to overheat from the insane amount of computation required to place individual atoms). The usual manufacturing process will begin with the engineering of an energy-efficient specialized constructor, then the more or less slow construction of that specialized constructor, and then the constructor will churn out the requested goods.
The entire process will, of course, be automated, but if you need something truly unique, or if you are bootstrapping an extraterrestrial habitat, you'll have to wait.
red75prime t1_it3c9ku wrote
Reply to comment by superluminary in New research suggests our brains use quantum computation by Dr_Singularity
Yes, if there's a way to utilize it in a biological system. Evolution never invented macroscopic wheels, after all.