phriot t1_ivybpua wrote

I like your attempt at a definition, but it's still very much open to interpretation. One person's Proto-AGI is another's "generalist, but not close enough to AGI to make a distinction" narrow AI. On the other hand, I've seen people in this sub say that they don't think an AGI has to be conscious.

I think I'll know what I consider a Proto-AGI when I see it. I don't think I'll see it in 2023.

21

phriot t1_ivus4ap wrote

I don't think 2023 is going to be the year of "A Young Lady's Illustrated Primer" from Diamond Age. If the vision is to use current models hooked into the more commonplace virtual assistants, wouldn't this exist already, just in a less popular form?

When it does happen, yes, it will be revolutionary. It will probably be the year every school "flips the classroom," with a lot of learning taking place at home and school being for socialization and assessment. It will be great for queries beyond "What's the weather tomorrow?" or "Please turn off the lights." to actually work, and to be quicker than just picking up a device on your own. It will also be amazing to have capable first-line help for basically any issue: mental health, physical health, home repair, etc. Even these kinds of assistants should reduce stress and increase productivity for just about everyone.

2

phriot t1_ivudep9 wrote

There is something of a pendulum in computing, swinging back and forth between centralization and decentralization.

Mainframes that you had to physically sit at to run batch jobs eventually got time-sharing capability via remote terminals. The pendulum swung toward decentralization when we got PCs, which then gave way to having data available on the internet. Our phones became computers, and then fast mobile data connections let us shift applications and processing into the cloud.

If it's physically possible to have a quantum computer at home, or in our pockets, we probably will. If I had to guess based on how we do things today, I'd say those quantum processing capabilities will probably be used as co-processors for very specific applications. Maybe quantum cryptography? Anything more general, or requiring a large number of qubits, will be available via the cloud.

0

phriot t1_ivq1gwt wrote

To the commenter that blocked me:

I can only see your comment if I'm not logged in, because you chose to run away instead of participate in a conversation. I am, in fact, not a moron, and would have probably changed my way of thinking if you could have shown me how I was wrong. Now, neither of us will get that chance. Have a nice day.

1

phriot t1_ivpht4o wrote

>Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes.

Most of what I have read on the subject links back to this article. Those authors cite a 2019 survey of AI researchers in which ~45% of respondents expected AGI before 2060. The same survey results break that down further: only 21% of respondents expected AGI before 2036.

I'm truly not trying to be argumentative, but I really think that it's less "a lot of AI researchers think AGI will happen in 10-15 years," and more "a lot of Lex's podcast guests think AGI will happen in 10-15 years."

Don't get me wrong, I love Lex as an interviewer, and I think he gets a lot of great guests. Doing some digging: out of 336 episodes, maybe ~120 have had anything substantial to do with AI (based on the listed topics, titles, and guests). Some of those episodes were duplicate guests, and in others the guests were non-experts. (There were a lot more AI people featured in earlier episodes than I remembered.) That's roughly 4X more data points than the survey I referenced, but I didn't keep track of all the predictions given during my initial listens. I'll take your word that the consensus is 10-15 years, but that still isn't a huge data set.

2

phriot t1_ivp1rfd wrote

> I think people put too much stock into early stage developments.

Also, I'd say that thinking the Singularity will ever happen pretty much implies belief in the Law of Accelerating Returns. Combine that belief with the excitement over early progress you mention, and it's not surprising that people here are highly confident that AGI will happen any day now.

Personally, I do think things are coming to a head in a lot of different areas of STEM research. It certainly feels like something is building. That said, I work in biotech, so I know how slow actual research can be. FWIW, my guess is AGI around 2045, with a small error bar into the 2030s and a large error bar heading toward 2100.

3

phriot OP t1_iutiufx wrote

I think it's getting there, but we're maybe a decade past when some of these predictions had reality basically infused with computational power. You note some things that I missed, such as door locks, though the "really smart" versions of those things aren't all that common yet. For example, I don't know anyone with a wifi door lock, and I experienced my first sofa with a USB port outside of a furniture showroom last month. For other things, maybe they technically have a chip, but is an RFID chip in your pet adding any computational power to the environment?

1

phriot OP t1_iusytqh wrote

From what I read, I took the intent to be not just that everything would have an RFID chip, but that computation would be "everywhere" as opposed to centralized in devices. I assume Kurzweil and others saw this as a trend because they had experienced the decentralization of computing in their lifetimes: from mainframes, to time-sharing, to PCs, to the internet. Today, the ability to compute anywhere remains, thanks to the rise of wireless internet access, but the computation itself is still happening in recognizable devices, if not in a very centralized data center somewhere.

The Internet of Things does, of course, exist, but I don't think it's yet at the point envisioned by these futurists 20 years ago. My question is: did they miss the mark (i.e., we'll keep computation centralized), or are we early (i.e., applications haven't yet caught up to our ability to infuse reality with chips)?

Edit (catching up with your edit): You do note a number of devices that "could" be smart today. In practice, they aren't, yet. I don't know anyone, personally, with a glucose monitoring t-shirt, kinetic energy harvesting sneakers, or palm-embedded NFC chip. The tech exists, but hasn't spread on the timeframe written about in the books I reference.

2

phriot OP t1_iusnhqr wrote

Maybe, as another commenter mentioned, I'm actually too tech-savvy for it to fade away. If IoT really were mundane, I wouldn't be aware of devices "being smart"; they'd just "be." Or maybe it's an age thing. I'm only in my 30s, but I'm old enough to remember very few things being remotely smart. Maybe, for people who grew up with smart things, they just "are."

3

phriot OP t1_iuslpso wrote

Yes. That's why I'm asking the question. Did they get it wrong, or are we just early? In the books I mention, specifically Kaku, they talk about computation melting away, so that you don't notice or think about it too much. Your shirt getting data from your sweat is less obtrusive than a smart watch doing the same thing, but maybe having these capabilities concentrated in an actual device is better.

1

phriot OP t1_iuskg6y wrote

I don't disagree, but we're definitely not yet at the point where it's all just passively there without humans noticing. Note that, other than curtains, you mentioned all recognizable devices. And several of these things don't really do much more than they did without ICs; it's just cheaper these days to throw in a chip than discrete components. Still others only "could" have chips; my shower head definitely doesn't.

0

phriot OP t1_iusju6b wrote

>I'm a bit skeptical about chips in clothing etc. I mean there has to have a good purpose, and apart from specialty items, I see no logical reason for chips to be in clothing and some of these other predictions.

I think the promise was that things like "smart clothes" would provide health and other valuable data. I don't specifically remember the claims for why the walls would be smart, but I think the idea was that you could just pull computational resources from around you as needed. Today, you'd pull any extra computation from a cloud data center.

3

phriot t1_iurznpq wrote

That was my first thought when I read the "not going to replace standard computers" comment. If/when room-temperature quantum computers with sufficient miniaturization exist, they'll probably serve as co-processors for the calculations they're best suited to, much like today's neural processing units help with AI/ML tasks.

3

phriot t1_iu5ltcb wrote

I would just wait to see what it said, or did, without any input on my part. If it did nothing, I'd call it AI. If it eventually started talking to itself, or trying to figure out what I was doing, I'd say human.

Honestly, I wouldn't be surprised if there's already a chatbot that can trick me into thinking it's human. To pass the phriot test, though, it needs to have some semblance of free will.

21

phriot t1_itrsxpx wrote

Yeah, much of what the person who got fired did was text-based, but not exclusively. When I mentioned that AI already writes a lot of formulaic articles, like financial reporting, it sounded like enough of the job was similar to that to save my friend a significant amount of time on the second role's work.

3

phriot t1_itrnoq2 wrote

As a recent anecdote: I was talking with some friends the other day. (College-educated, but not my "STEM friends.") One of them had been required to let go a subordinate who couldn't handle a task that could probably be automated. This friend had to take over that job, in addition to their own, with no additional pay for the time being. I jokingly suggested that they look into having an AI language model do the work instead. There were some questions, but no one really thought the idea was that farfetched. A separate friend mentioned a few aspects of their own field that they knew were automated by software.

I think a few years ago, this group would have taken the whole idea a lot less seriously. Today, they pretty much accepted that narrow AI could do a bunch of different things. The conversation didn't progress into discussing the impact on our actual jobs.

27

phriot t1_it71izf wrote

>I predict we're going to have a very rocky ride as people aren't able to accept this when we will most likely start to see the very first signs of intellectual labor replacements implemented next year, 2023 already.

Won't most people just use these tools to increase their productivity for a while, before management realizes that the workers can just be replaced? I feel like that could take at least several years to play out, or do you think we're already at that point?

1