Recent comments in /f/singularity

Cartossin t1_jeacqha wrote

If we look at the 1990s promise of a pocket computer with 500 channels of TV and access to all human knowledge, the expectations were not inflated. If anything, they undershot the smartphone revolution.

The rise of AI will be like that. Most people will completely underestimate how much it will do.

Not everything follows this curve. I'd bet money that this is one of the things that doesn't follow the curve.

5

Sure_Cicada_4459 t1_jeace5e wrote

Actually no, https://twitter.com/summerlinARK/status/1599196885675544576?lang=de. And this is still an underestimate, because predicting ten years of algorithmic advances in the field of AI is silly. And that doesn't even account for distillation, more publicly available datasets and models, multi-LLM systems,... There are so many dimensions in which this train is running that it makes you dizzy thinking about it, and it makes regulation look like nothing more than pure cope.

1

aWildchildo t1_jeacd2n wrote

Have you seen the movie Ex Machina? Humans can have high standards, be needy, volatile, etc., but what makes you think a sentient AI partner would be different? Would it not expect you to bring something to the table, or change its mind about things down the road? The movie Her is also pertinent here, because by the end of the movie the AIs basically become so advanced that humans offer them nothing. You may want to ask yourself why a robo gf would want you, assuming they have a choice in the matter. Or are you thinking more of a subservient robot that isn't truly sentient?

Not trying to be derisive, I think this is a very interesting topic. I think talking through some of these questions may actually help you with your own personal journey with human relationships; relationships that don't start with you purchasing a partner.

1

peterflys t1_jeabr2h wrote

There are also far too many national security and, unfortunately, global political consequences to stopping or pausing. Politicians and other heads of state know that the stakes are too high.

1

not_into_that t1_jeablxv wrote

THIS.

I am opposed to the gatekeeping, but they need to mash the throttle, not pump the brakes. Our potential adversaries aren't going to have these concerns. We need to implement all our tech, including automation and AI, to keep our people safe, even if it means rethinking the value of work and the welfare role of the state.

2

Borrowedshorts t1_jeabhvm wrote

ITER is a complete disaster. If people thought NASA's SLS program was bad, ITER is at least an order of magnitude worse. I agree AI development is going extremely fast. I disagree there's much we can do to stop it or even slow it down much. I agree with Sam Altman's take: it's better for these AIs to get into the wild now, while the stakes are low, than to experience that for the first time when these systems are far more capable. It's inevitable that it's going to happen, so it's better to make our mistakes now.

8

ShowerGrapes t1_jeaaln0 wrote

>we are just getting more and more efficient by the day meaning anyone will be able to run GPT-n perf on their hardware soon.

yes, anyone will be able to train neural networks, but not the kind to make simps like Musk tremble with fear. OpenAI spent 7 million on cloud computing costs alone to train GPT. it would be a trivial (and misguided) task to shut down future AI development.

1