phriot
phriot t1_ivybpua wrote
Reply to 2023: The year of Proto-AGI? by AdditionalPizza
I like your attempt at a definition, but it's still very much open to interpretation. One person's Proto-AGI is another's "generalist, but not close enough to AGI to make a distinction" narrow AI. On the other hand, I've seen people in this sub say that they don't think an AGI has to be conscious.
I think I'll know what I consider a Proto-AGI when I see it. I don't think I'll see it in 2023.
phriot t1_ivus4ap wrote
Reply to Let's assume Google, Siri, Alexa, etc. start using large language models in 2023; What impact do you think this will have on the general public/everyday life? Will it be revolutionary? by AdditionalPizza
I don't think 2023 is going to be the year of "A Young Lady's Illustrated Primer" from Diamond Age. If the vision is to use current models hooked into the more commonplace virtual assistants, wouldn't this exist already, just in a less popular form?
When it does happen, yes, it will be revolutionary. It will probably be the year every school "flips the classroom," so that a lot of learning takes place at home and school is for socialization and assessment. It will be great to have queries beyond "What's the weather tomorrow?" or "Please turn off the lights" actually work, and be quicker than just picking up a device on your own. It will also be amazing to have capable first-line help for basically any issue: mental health, physical health, home repair, etc. Even these kinds of assistants should reduce stress and increase productivity for just about everyone.
phriot t1_ivudep9 wrote
Reply to comment by braveyetti117 in IBM unveils its 433 qubit Osprey quantum computer by vom2r750
There is something of a pendulum in computing that swings between centralization and decentralization.
Mainframes that you had to physically sit at to run batch jobs eventually gained time-sharing via remote terminals. Decentralization came when we got PCs, which then gave way to having our data live on the internet. Our phones became computers, and then fast mobile data connections let us shift applications and processing into the cloud.
If it's physically possible to have a quantum computer at home, or in our pocket, we probably will. If I had to guess based on how we do things today, I'd say that those quantum processing capabilities will probably be used as co-processing for very specific applications. Maybe quantum cryptography? Anything more general, or requiring a large number of qubits, will be available via the cloud.
phriot t1_ivuaka9 wrote
Reply to comment by Down_The_Rabbithole in IBM unveils its 433 qubit Osprey quantum computer by vom2r750
Yeah, still quite a ways off from those numbers. IBM's roadmap (as shown in this article) puts them at a little over 4,000 qubits in 2025. The same roadmap does suggest that they think they'll be running actual applications on these machines in the same timeframe, though.
phriot t1_ivq1gwt wrote
Reply to comment by phriot in Is Artificial General Intelligence Imminent? by TheHamsterSandwich
To the commenter that blocked me:
I can only see your comment if I'm not logged in, because you chose to run away instead of participate in a conversation. I am, in fact, not a moron, and would have probably changed my way of thinking if you could have shown me how I was wrong. Now, neither of us will get that chance. Have a nice day.
phriot t1_ivpht4o wrote
Reply to comment by Russila in Is Artificial General Intelligence Imminent? by TheHamsterSandwich
>Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes.
Most of what I have read on the subject links back to this article. Those authors quote a 2019 survey of AI researchers with ~45% of respondents believing in AGI before 2060. The 2019 survey results further break that down to only 21% of respondents believing in AGI before 2036.
I'm truly not trying to be argumentative, but I really think that it's less "a lot of AI researchers think AGI will happen in 10-15 years," and more "a lot of Lex's podcast guests think AGI will happen in 10-15 years."
Don't get me wrong, I love Lex as an interviewer, and I think he gets a lot of great guests. Doing some digging: out of 336 episodes, maybe ~120 have had anything substantial to do with AI (based on the listed topics, titles, and guests). Some of those episodes featured repeat guests, and in others the guests were non-experts. (There were a lot more AI people in the earlier episodes than I remembered.) That's roughly 4X the number of data points in the survey I referenced, but I didn't keep track of all of the predictions given during my initial listens. I'll take your word that the consensus is 10-15 years, but that still isn't a huge data set.
phriot t1_ivpb5ya wrote
Reply to comment by Russila in Is Artificial General Intelligence Imminent? by TheHamsterSandwich
There could be a selection bias happening here, though. Researchers more excited about progress may be more likely to be willing podcast guests than those who are more pessimistic.
phriot t1_ivp1rfd wrote
Reply to comment by ihateshadylandlords in Is Artificial General Intelligence Imminent? by TheHamsterSandwich
> I think people put too much stock into early stage developments.
Also, I'd say that thinking the Singularity will ever happen pretty much implies belief in the Law of Accelerating Returns. Combine that belief with the excitement over early progress you mention, and it's not surprising that people here are highly confident that AGI will happen any day now.
Personally, I do think we're coming to a head in a lot of different areas of STEM research. It certainly feels like something is building. That said, I work in biotech, so I know how slow actual research can be. FWIW, my guess is AGI around 2045, small error bar into the 2030s, large error bar headed toward 2100.
phriot OP t1_iutiufx wrote
Reply to comment by Angualor in What happened to "chips everywhere" predictions? by phriot
I think it's getting there, but we're maybe a decade past when some of those predictions said reality would basically be infused with computational power. You note some things that I missed, such as door locks, though the "really smart" versions of those things aren't all that common yet. For example, I don't know anyone with a wifi door lock, and I encountered my first sofa with a USB port outside of a furniture showroom last month. For other things, maybe they technically have a chip, but is an RFID chip in your pet adding any computational power to the environment?
phriot OP t1_iuthgrz wrote
Reply to comment by ITsupportSuperHero in What happened to "chips everywhere" predictions? by phriot
Cool, yeah, with decades of material in between, it's a little hard to keep up with updates on every little prediction. Thanks for knowing about this!
phriot OP t1_iusytqh wrote
Reply to comment by Thorusss in What happened to "chips everywhere" predictions? by phriot
From what I read, I took the intent to be not just that everything would have an RFID chip, but that computation would be "everywhere" as opposed to centralized in devices. I assume that Kurzweil and others saw this as a trend because they had experienced the decentralization of computing in their lifetimes: from mainframes, to time-sharing, to PCs, to the internet. Today, the ability to compute anywhere remains, thanks to the rise of wireless internet access, but the computation itself still happens in recognizable devices, if not in a very centralized data center somewhere.
The Internet of Things does, of course, exist, but I don't think it's yet at the point envisioned by these futurists 20 years ago. My question is did they miss the mark (i.e. we'll keep computation centralized), or are we early (e.g. applications haven't yet caught up to our ability to infuse reality with chips)?
Edit (catching up with your edit): You do note a number of devices that "could" be smart today. In practice, they aren't, yet. I don't know anyone, personally, with a glucose monitoring t-shirt, kinetic energy harvesting sneakers, or palm-embedded NFC chip. The tech exists, but hasn't spread on the timeframe written about in the books I reference.
phriot OP t1_iusnute wrote
Reply to comment by Torrall in What happened to "chips everywhere" predictions? by phriot
What does it do? Qi charger?
FWIW, neither my desk at work, nor my desk at home have chips in them. I don't believe that this is super common, yet. But I'll chalk that up as a vote for "futurists were early by a few years."
phriot OP t1_iusnhqr wrote
Reply to comment by grimjim in What happened to "chips everywhere" predictions? by phriot
Maybe, as another commenter mentioned, I'm actually too tech savvy for it to fade away. If IoT really were mundane, I wouldn't be aware of devices "being smart"; they'd just "be." Or maybe it's an age thing. I'm only in my 30s, but I'm old enough to remember very few things being remotely smart. Maybe, for people who grew up with smart things, they just "are."
phriot OP t1_iuslpso wrote
Reply to comment by insectula in What happened to "chips everywhere" predictions? by phriot
Yes. That's why I'm asking the question. Did they get it wrong, or are we just early? In the books I mention, specifically Kaku, they talk about computation melting away, so that you don't notice or think about it too much. Your shirt getting data from your sweat is less obtrusive than a smart watch doing the same thing, but maybe having these capabilities concentrated in an actual device is better.
phriot OP t1_iuskg6y wrote
Reply to comment by [deleted] in What happened to "chips everywhere" predictions? by phriot
I don't disagree, but we're definitely not yet at the point where it's all just passively there without humans noticing. Note that, other than curtains, everything you mentioned is a recognizable device. And several of those things don't really do much more than they did without ICs - it's just cheaper these days to throw a chip in than to use discrete components. Yet others only "could" have chips; my shower head definitely doesn't.
phriot OP t1_iusju6b wrote
Reply to comment by insectula in What happened to "chips everywhere" predictions? by phriot
>I'm a bit skeptical about chips in clothing etc. I mean there has to have a good purpose, and apart from specialty items, I see no logical reason for chips to be in clothing and some of these other predictions.
I think the promise was that things like "smart clothes" would provide health and other valuable data. I don't specifically remember the claims for why the walls would be smart, but I think the idea was that you could just pull computational resources from around you as needed. Today, you'd pull any extra computation from a cloud data center.
Submitted by phriot t3_ykcjwo in singularity
phriot t1_iurznpq wrote
Reply to comment by Unable-Fox-312 in Launch of Aquila, the first neutral-atom quantum processor with up to 256 qubits. by steel_member
That was my first thought when I read the "not going to replace standard computers" comment. If/when room-temperature quantum computers with sufficient miniaturization exist, they'll probably be co-processors for the calculations they're best suited to, much like today's neural processing units that help with AI/ML tasks.
phriot t1_iu5mm38 wrote
Reply to comment by AdditionalPizza in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Or I could find a way to monitor system resource usage and look for a pattern that isn't just idling.
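For example (a rough sketch of the idea, assuming Python and the third-party psutil library; the sampling window and the 5% "idle" threshold are arbitrary numbers picked just for illustration):

```python
# Rough sketch: sample CPU utilization for a while and flag anything
# that looks like more than background idling.
import psutil

def looks_busy(samples: int = 30, interval: float = 1.0, idle_threshold: float = 5.0) -> bool:
    readings = [psutil.cpu_percent(interval=interval) for _ in range(samples)]
    average = sum(readings) / len(readings)
    print(f"Average CPU over {samples * interval:.0f}s: {average:.1f}%")
    return average > idle_threshold

if __name__ == "__main__":
    print("something is running" if looks_busy() else "looks idle")
```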
phriot t1_iu5ltcb wrote
Reply to If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
I would just wait to see what it said, or did, without any input on my part. If it does nothing, I'm calling AI. If it eventually starts talking to itself, or trying to figure out what I'm doing, I'll say human.
Honestly, I wouldn't be surprised if there's a chatbot that can already trick me into thinking it's human. To pass the phriot test, it needs to have some semblance of free will.
phriot t1_itrsxpx wrote
Reply to comment by AdditionalPizza in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Yeah, much of what the person who got fired did was text-based, but not exclusively. When I mentioned that AI already writes a lot of formulaic articles, like financial reporting, it sounded like enough of the job was similar to that to save my friend a significant amount of time on the second role's work.
phriot t1_itrnoq2 wrote
Reply to With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
As a recent anecdote: I was talking with some friends the other day. (College educated, but not my "STEM friends.") One of them had been required to let go a subordinate who couldn't handle a task that probably could be automated. This friend had to take over that job, in addition to their own, for no additional pay for the time being. I jokingly suggested that they look into having an AI language model do the work instead. There were some questions, but no one really thought the idea was that farfetched. A separate friend mentioned a few aspects of their own field that they knew were automated by software.
I think a few years ago, this group would have taken the whole idea a lot less seriously. Today, they pretty much accepted that narrow AI could do a bunch of different things. The conversation didn't progress into discussing the impact on our actual jobs.
phriot t1_itifgws wrote
Reply to What will you do to survive in the time between not needing to work anymore to survive and today? by wilsonartOffic
What will I do to survive between today and when my job is automated? Work.
What will I do to prepare for a potential period between my job being automated and post scarcity? Work. Save. Invest.
phriot t1_it71izf wrote
Reply to comment by Down_The_Rabbithole in If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
>I predict we're going to have a very rocky ride as people aren't able to accept this when we will most likely start to see the very first signs of intellectual labor replacements implemented next year, 2023 already.
Won't most people just use these tools to increase their productivity for a while, before management realizes that the workers can just be replaced? I feel like that could take at least several years to play out, or do you think we're already at that point?
phriot t1_ivz7mt9 wrote
Reply to comment by ihateshadylandlords in 2023: The year of Proto-AGI? by AdditionalPizza
Maybe I'm wrong, but I've always understood AGI to be "a roughly human-level machine intelligence." How can something be roughly human without consciousness and at least the appearance of free will?