Recent comments in /f/MachineLearning

MarmonRzohr t1_jdlyfub wrote

>artificial wombs are basically done or very close

Bruh... put down the hopium pipe. There's a bit more work to be done there, especially if by "artificial womb" you mean one that works from conception to term, not a device intended for prematurely born babies.

The second one was what was demonstrated with the lamb.

−1

tdgros t1_jdlxy8a wrote

There are versions for NLP (and a special one for vision transformers); here is the BERT one, from some of the same authors (Frankle & Carbin): https://proceedings.neurips.cc/paper/2020/file/b6af2c9703f203a2794be03d443af2e3-Paper.pdf

It is still costly, as it involves rewinding and finding masks; we would probably need to switch to dedicated sparse computation to fully benefit from it.
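
For anyone curious what "rewinding and finding masks" looks like mechanically, here is a minimal PyTorch sketch of iterative magnitude pruning with rewinding, in the spirit of Frankle & Carbin. The `train` stub and the function name are mine, not from the paper's code, and a real implementation would also re-apply the masks during training (e.g. with gradient hooks):

```python
import copy
import torch

def train(model, steps):
    """Stand-in for a full training loop."""
    ...

def find_winning_ticket(model, rounds=5, prune_frac=0.2, steps=1000):
    init_state = copy.deepcopy(model.state_dict())  # weights to rewind to
    # One binary mask per weight matrix; biases are left unpruned.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}

    for _ in range(rounds):
        train(model, steps)  # train the currently-masked network
        for n, p in model.named_parameters():
            if n not in masks:
                continue
            # Prune the smallest-magnitude weights that are still alive.
            alive = p.detach().abs()[masks[n].bool()]
            threshold = alive.quantile(prune_frac)
            masks[n] *= (p.detach().abs() > threshold).float()
        # Rewind the surviving weights to their original init values.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
    return model, masks
```

The repeated train-prune-rewind loop is exactly where the cost comes from, and the masks only pay off at inference if your hardware/kernels can actually exploit the sparsity.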

6

SmLnine t1_jdlxego wrote

There are complex mammals that effectively don't get cancer, and there are less complex animals and organisms that effectively don't age. So I'm curious what your opinion is based on.

2

SeymourBits t1_jdlwrgi wrote

This is the most accurate comment I've come across. The entire system is only as good and granular as the CLIP text description that's passed into GPT-4, which then has to "imagine" the described image, often with varying degrees of hallucination. I've used it and can confirm that operating anything close to a GUI is currently not possible with this approach.
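
For context, the approach boils down to something like this; `caption_image` and `ask_llm` are hypothetical stand-ins, not a real API:

```python
def caption_image(screenshot) -> str:
    """Stand-in: produce a CLIP-style text description of the image."""
    ...

def ask_llm(prompt: str) -> str:
    """Stand-in: query a text-only LLM such as GPT-4."""
    ...

def answer_about_image(screenshot, question: str) -> str:
    description = caption_image(screenshot)
    # The LLM never sees pixels, only this caption. Anything the caption
    # omits (exact positions, small widgets, pixel-level detail) is gone,
    # which is why driving a GUI through this bottleneck falls apart.
    return ask_llm(f"Image description: {description}\n\nQuestion: {question}")
```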

1

SmLnine t1_jdlwhtu wrote

>but unless you take the philosophical stance that "if we just made AGI they'd be able to solve every problem we have, so everything is effectively an ML problem", it doesn't seem like it'd be fair to say the bottlenecks to solving either of those are even related to ML in the first place. It's essentially all a matter of bioengineering coming up with the tools required.

We're currently using our brains (general problem solvers) to build bioengineering tools that can cheaply and easily edit the DNA of a living organism. 30 years ago this would have sounded like magic. But there's no magic here. This potential tool has always existed; we just didn't understand it.

It's possible that there are other tools on the table that we simply don't understand yet. Maybe what we've been doing for the last 60 years is the bioengineering equivalent of bashing rocks together. Or maybe it's close to optimal. We don't know, and we can't know until we aim an intellectual superpower at it.

3

shanereid1 t1_jdlt38a wrote

Have you read about the lottery ticket hypothesis? It was a paper from a few years ago that showed that within a fully connected neural network there exists a smaller subnetwork that can perform equally well, even when that subnetwork is as small as a few percent of the size of the original network. AFAIK they only demonstrated this for MLPs and CNNs. It's almost certain that the power of these LLMs can be distilled in some fashion without significantly degrading performance.
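
On the distillation point, the classic recipe is knowledge distillation à la Hinton et al.: train a small student to match the teacher's temperature-softened output distribution. A minimal sketch of the usual loss (the temperature `T` and mixing weight `alpha` are illustrative defaults, not from any particular paper):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-smoothed teacher and student.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 scaling keeps gradient magnitudes comparable
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```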

32