vivehelpme

vivehelpme t1_j9p8viz wrote

We have had human level general intelligence for tens of thousands of years and we've not progressed to superhuman general intelligence yet.

General human-level intelligence also starts quite low and goes quite high; I would say that we're already beyond the lower reaches of general human intelligence.

To say that AGI will instantly transition to ASI is buying into a sci-fi plot, or beating the early-2000s futurology-blogging dead horse where it's assumed that any computer hardware is overpowered and all the magic happens at the algorithm level, so once you crack the code you transition to infinite intelligence overnight. That's a patently ridiculous scenario in which your computer, for all intents and purposes, casts magical spells (which worked pretty well for the plot of The Metamorphosis of Prime Intellect, which I recommend as a read, but it's a plot device, not a realistic scenario).

1

vivehelpme t1_j9kqb1k wrote

The Donald Trump of the early-2000s era of AI/futurology blogging: the biggest and loudest opinion in the noble field of standing on the sidelines, speaking as a confident expert about something that doesn't exist to start with and that, as a punchline, he is completely uninvolved in developing.

Loud voice, huge ego, no substance. Seems to be enough to get a good following.

3

vivehelpme t1_j8i04sx wrote

What alignment really seems to refer to is a petrifying fear of the unknown, dialed up to 11 and projected onto anything a marketing department can label AI, resulting in concerns of mythological proportions being liberally sprinkled over everything new that appears in the field.

Thankfully these people shaking in the dark have little say in industry, and some exposure therapy will do them all good.

0

vivehelpme t1_j8hiksi wrote

Yudkowsky and the lesswrong community can be described as a science-fiction cargo cult, and that's putting it nicely.

They aren't experts or developers of ML tools. They take loosely affiliated literary themes and transplant them into reality, then invent a long series of pointless vocabulary, long tirades, and grinding essays that circle back on themselves with ever denser neo-philosophical content. It's a religion based on texts that best resemble zen koans in content but are interpreted as fundamentalist scripture retelling the exact sequence of future events.

I think the cargo cults would probably take offense at being compared to them.

3

vivehelpme t1_j7vfol7 wrote

Instead of trying to salvage the original recording why not recreate it by putting the text transcript into a text-to-speech model?

As you have it transcribed, you don't even need any advanced speech recognition to filter out the noise; just paste the text into something a bit more advanced than Microsoft Sam.
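A minimal sketch of that approach, assuming the transcript is plain text. The chunking is real; `pyttsx3` is just one offline TTS stand-in (any modern TTS model would sound better), and the file name in the usage note is hypothetical:

```python
import re

def chunk_text(text, max_len=200):
    """Split the transcript into sentence-sized chunks so the TTS engine
    isn't handed one giant string."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_len:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

def speak(transcript):
    """Feed the chunks to an offline TTS engine. pyttsx3 is purely
    illustrative; swap in whatever model you prefer."""
    import pyttsx3  # assumed installed
    engine = pyttsx3.init()
    for chunk in chunk_text(transcript):
        engine.say(chunk)
    engine.runAndWait()
```

Then something like `speak(open("transcript.txt").read())` reads the whole thing aloud.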

2

vivehelpme t1_j6o0mzr wrote

Yann LeCun was programming neural networks before most of the people who replied in this thread were even born; he made optical character recognition work on hardware that makes your smart fridge look like a supercomputer. He isn't saying this as a sore loser, but to understand where he's coming from you need to look at what he's actually saying.

He didn't say chatGPT sucks.

He didn't say Meta has something better.

He said it's "not particularly innovative". Which is true.

  • Transformer models have been around since 2017.
  • Language models are more numerous than anyone can keep track of.
  • Dialogue-oriented fine-tuning has been done before.
  • Virtually every big name in tech is training large language models.

Why is chatGPT doing so well then?

  • It's a big model, which is hardly new, but it's accessible and usable for free.
  • OpenAI has good media traction, so they make a splash even when they show off closed models.

So when LeCun says it's "not particularly innovative", is he really wrong? Is being a known name and giving out free stuff (arguably they're datamining the public for additional training data, which makes the free part a little less free) considered innovative?

1

vivehelpme t1_j5y70zt wrote

>what is very special about the model than the large data and parameter set it has

OpenAI has a good marketing department and the web interface is user-friendly. But yeah, there's really no secret sauce to it.

The model generates the text snippet in a batch; it just prints it one character at a time for dramatic effect (and to keep you occupied for a while so you don't overload the horribly computationally expensive cloud service it runs on with multiple queries in quick succession). So yeah, there are definitely scaling questions to answer before it could run as a general casual search engine replacing Google.
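The typewriter effect itself is trivial to reproduce. A toy sketch: the reply exists in full up front and is just replayed character by character (the delay value is arbitrary):

```python
import sys
import time

def typewriter(text, delay=0.02):
    """Replay an already-complete string one character at a time,
    mimicking the chat UI's streaming effect."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()  # force each character out immediately
        time.sleep(delay)
    sys.stdout.write("\n")

# The full reply is ready before the first character is shown:
typewriter("The answer existed before you saw the first letter.", delay=0.01)
```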

23

vivehelpme t1_j5tds9a wrote

>I get tired of the 'nothing's ever worth being excited about' attitude in general when it comes to anything and everything tech-related.

The varying degrees of excitement come from how much exposure you've had to the tech's precursors. A matter of perspective, if you will.

If you never heard of a language model before and try chatGPT you'll probably be quite impressed.

On the other hand, if you read the transformer paper in 2017 and have tried every transformer-architecture language model implementation since then, you're kind of being served a slightly improved version with a slick, engineered presentation. Sure, it's an impressive package, but you've seen the iterations build towards it, and you might even have seen models that do a better job in certain domains.

Which is where you get headlines like

>ChatGPT is 'not particularly innovative,' and 'nothing revolutionary', says Meta's chief AI scientist

6

vivehelpme t1_j5tc4fv wrote

It has been trained on a volume of human-generated data greater than any single individual has ever consumed in an entire lifetime.

So it has a lot of real-world models and experiences that have shaped it; it's just that all of these are second-hand accounts passed on to it.

4