TFenrir
TFenrir t1_j1kx3bc wrote
Reply to Which Sci-fi book/Movie/Short Story has technology closest to something in our future. (What would be your guess) by Ortus12
Her, I think, does a great job.
It's not perfect though; there are lots of weird things you have to do in movies/stories to move the plot forward and help the audience engage. Like, Joaquin Phoenix has a weird job - spoilers: he writes personal letters on behalf of other people - but that job is a great setup to highlight the encroachment of Samantha's capabilities into creative roles. She never takes over his job; she starts off mostly just helping and giving feedback/notes. She even helps compile his best stuff into a book, which feels increasingly close to the capabilities of language models today. But she also does art and makes music - the sort of stuff that traditional sci-fi treats as almost taboo for an AI to excel at.
I also really liked the video game Joaquin plays. The controls look... dumb, but the idea of characters that feel "alive" and that you converse with seems like something people are actively working on today, with the beginnings of success.
And just the idea of an OS AI seems increasingly likely. But I think we'll exceed that in the next decade. We'll have AI that writes customized applications for us, on the fly. You want a new reddit app that also integrates Twitter? Sure. Anyway, tangents aside, Her is just a great movie to rewatch with the rise of chatbots.
TFenrir t1_j0z6mm4 wrote
Reply to comment by ihateshadylandlords in Printing atom by atom: Lab explores nanoscale 3D printing by Dr_Singularity
I think I understand what you're saying a bit more!
There are still lots of things that need to be figured out for what is often referred to as "atomically precise manufacturing". APM, a term coined by Eric Drexler, often focuses on the part of the process that assembles products from already-prepared feedstock, and on the value propositions that come from that - for example, literally no waste in the manufacturing process, and shapes/structures that would not be possible otherwise.
However, it also requires a process that can break down/recycle objects into those base materials - a separate challenge in its own right, but one that is directly symbiotic with the end result.
I'd recommend, if you are curious, reading some of Eric Drexler's work. He's really level-headed about the topic, and is extremely well versed - he has a blog, last I remember, but has also written great books. I think he coins the term "APM" in one of them; it's been almost a decade since I've read it, though.
TFenrir t1_j0yygws wrote
Reply to comment by ihateshadylandlords in Printing atom by atom: Lab explores nanoscale 3D printing by Dr_Singularity
Okay, well, first - that's a hard thing to quantify, who knows how close we are. This thread is about a technique for assembling atoms/molecules into useful products.
Second, that's immaterial to the original question you were asking.
> We don’t have anything that can change the molecular structure of dirt to the molecular structure of gold.
I'm highlighting that work like this is aiming to move towards printing atom by atom, which could theoretically create all kinds of molecular structures - eg, graphene from carbon.
I don't know how long it will take, but since you were asking how something like this could be useful, it's pretty straightforward.
TFenrir t1_j0xxcor wrote
Reply to comment by ihateshadylandlords in Printing atom by atom: Lab explores nanoscale 3D printing by Dr_Singularity
I'll give you an example.
Carbon is extremely plentiful - essentially dirt on Earth. With carbon you can build everything from CPUs to diamonds.
TFenrir OP t1_j0dk5rr wrote
Reply to comment by Kinexity in Riffusion: Stable diffusion fine tuned on spectrograms (image representations of music) creates prompt based music, in real time by TFenrir
I mostly agree, but I think there is some opportunity here. Using img2img in real time to extend audio forever, and the relationship between images and audio in general, is quite interesting - would a model trained only on these images provide a "better" result? Would different fine-tuned models give different experiences? How is this impacted by other improvements to models?
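For anyone curious, here's a minimal sketch of what that img2img chaining could look like, assuming the Hugging Face diffusers library and the public Riffusion checkpoint - the model id, parameter values, and file names are just my guesses, and the audio-to-spectrogram conversion is left out:

```python
# Sketch: extend a clip by running img2img on its spectrogram image.
# Assumes the Hugging Face diffusers library and the public Riffusion
# checkpoint; converting audio <-> spectrogram is not shown here.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",  # assumed model id
    torch_dtype=torch.float16,
).to("cuda")

init_spectrogram = Image.open("last_chunk_spectrogram.png").convert("RGB")

# A lower strength keeps the new chunk musically close to the previous one,
# which is what lets you chain chunks into an "endless" track.
result = pipe(
    prompt="lo-fi hip hop, mellow piano",
    image=init_spectrogram,
    strength=0.55,
    guidance_scale=7.0,
).images[0]

result.save("next_chunk_spectrogram.png")  # decode back to audio downstream
```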
TFenrir t1_izyrsaj wrote
Reply to comment by Practical-Mix-4332 in I don't want AI to do all our jobs for us by [deleted]
They might, but in this theoretical world, it wouldn't give anyone anything. "Money" wouldn't be a thing, and they wouldn't be able to talk about their technique and what they've learned with their peers - which I assume would be part of the pull.
Like anything, people might lie for status, and take credit for things they didn't do - but the incentives to do so are less in the world of the future.
TFenrir t1_izylz0s wrote
Reply to comment by Practical-Mix-4332 in I don't want AI to do all our jobs for us by [deleted]
My partner has taken up pottery. She's very good, but she's not the best potter - not yet, and probably not ever. But she loves it, and people appreciate her work, and she's even sold a few of her best. You can get cheaper bowls - why do you think people buy it?
TFenrir t1_izy4v4j wrote
Reply to I don't want AI to do all our jobs for us by [deleted]
"Work" is a pretty loosey goosey idea, and can be replicated with a sophisticated enough post-scarcity system. You can make artisanal things for "money" or some other social credit - maybe the equivalent of upvotes? You can play games, or start book clubs, or join cooking classes - whatever you like to feel fulfilled and to learn and grow.
The "grind" of work though, is profoundly unfulfilling to many people, and beyond that, is inherently an impediment to pursuing your own goals - this is especially true for people who are living in poverty.
On a global scale, the "best case scenario" of AI making work obsolete also includes providing for all people around the world - enough that they aren't forced to do things they don't want to do just to survive and thrive.
TFenrir t1_ixdcbvj wrote
Reply to comment by visarga in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
> GPT-3 was published in May 2020, PaLM in Apr 2022. There were a few other models in-between but they were not on the same level.
> Dall-E was published in Jan 2021, Google's Imagen is from May 2022.
Yes, but the research that allowed for GPT itself came out of Google, GPT-3 didn't invent the language model, and things like BERT are still the open source standard.
Even the research on image generation goes back all the way to 2013 or so with Google and DeepDream. They had lots and lots of research papers on how to generate realistic images from text for years and years before even the first DALL-E model.
On top of that, in the present day they have shown the highest quality models. Which, going back to my original point, highlights that if we're talking about organizations that will achieve AGI first, Google - with its software talent, research, and hardware strengths (TPUs) - is very, very likely to achieve AGI first.
> Yes, they are. But do a search and you'll see how poor the results are in reality. They don't want us to actually find what we're looking for, not immediately. They stand to lose money.
This is essentially conspiracy theory, as well as subjective opinion.
> Look at Google Assistant - the language models can write convincing prose and handle long dialogues, in the meantime Assistant defaults to web search 90% of the questions and can't hold much context. Why? Because Assistant is cutting into their profits.
It's because they can't risk anything as hallucinatory and unpredictable as language models yet - this is clear from the research being done, not even just by Google. Alignment isn't just about existential risk.
> I think Google wants to monopolise research but quietly delay its deployment as much as possible. So their researchers are happy and don't make competing products, while we are happy waiting for upgrades.
Again, more conspiracy theories. Take a look at the work Jeff Dean does out of Google - not even for the content, but for the intent of what he is trying to build. Your expectations of Google are based on the idea that they should already be using language models in production, but they just aren't really ready yet, at least not for search, and Google can't risk the backlash that happens when these models come out undercooked. Look at what happened with Facebook's most recent model and the controversy around that. No conspiracy theories necessary.
TFenrir t1_ixcp3ka wrote
Reply to comment by visarga in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
>They have a bunch of good models but they are 1-2 years late.
I have absolutely no idea what you mean by "1-2 years late", in what way are they late?
> Also Google is standing to lose from the next wave of AI, from a business-wise perspective. The writing on the wall is that traditional search is on its way out, now more advanced AI can do direct question answering. This means ads won't get displayed. They are dragging their feet for this reason, this is my theory. The days of good old web search are limited.
Mmm, maybe - but Google is already looking at integrating language models into traditional search. They showed this off years ago with MUM. They have also written hands down the most papers on methodologies to improve the accuracy of language models and on connecting language models to the internet/search, and they have SOTA on all the accuracy metrics I've seen, at least for LLMs.
> But hey, you could say they might ask the language model to shill for various products. True, but language models can also run on the edge, so we could have our own models that listen to our priorities and wishes.
> That was not something possible to do with web search, but accessible through AI. The moral of the story is that Google's centralised system is getting eroded and they are losing control and ad impressions.
Eh, I mean, this is a lot of somewhat interesting speculation - in my mind the most relevant part of which is how Google is going to get inference costs small enough to scale any sort of language model architecture (their work on inference is also bleeding edge). But while there is an opportunity to replace search with language models, Google has probably been working specifically on that for longer than anyone else - heck, we heard them talking about it almost 3 years ago at I/O.
But back to the core point, Google is still easily, easily the leader in AI research.
TFenrir t1_ixae9ra wrote
Reply to comment by visarga in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
Google is still leading in AI, not even including DeepMind - they have the most advanced language model (PaLM), the most advanced examples of language models in robots (SayCan), and the most advanced examples of image models and even video models, and that doesn't get into the host of papers they release. If you asked OpenAI folks, they would probably say Google is still easily the most advanced.
TFenrir t1_ix8vbwx wrote
It would be hard, if a company like Google created AGI, to keep it secret at all. There are teams of people working on their newer models, and many of them have strong, ideological positions on AGI - disparate from each other as well as from Google. That's not an environment where a secret can live very long.
And I'm of the opinion that if it happens anywhere, it'll happen in Google.
TFenrir t1_itsqltk wrote
Reply to comment by manOnPavementWaving in Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
I'm curious what trends have been predicting 5-10 trillion parameter models?
And additionally, more recent work has fundamentally increased the value of scaling.
https://twitter.com/YiTayML/status/1583514524836978689?t=Xxm_NYIQvGr5743ZdQzaqA&s=19
You can see that here for example.
But I have heard that finding data is the hard part now, and that inference speeds on models in the trillions of parameters are going to restrict their usefulness - but there is a lot of great work being done on inference speed-ups.
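For a rough sense of why data becomes the bottleneck, here's a back-of-envelope sketch using the roughly 20-tokens-per-parameter heuristic from DeepMind's Chinchilla work; the heuristic is approximate and the model sizes are just illustrative:

```python
# Back-of-envelope: Chinchilla-style "compute-optimal" training token counts.
# Heuristic: ~20 training tokens per parameter (Hoffmann et al., 2022).
TOKENS_PER_PARAM = 20

for params in (70e9, 540e9, 5e12, 10e12):  # 70B, 540B, 5T, 10T parameters
    tokens = params * TOKENS_PER_PARAM
    print(f"{params / 1e12:>5.2f}T params -> ~{tokens / 1e12:.0f}T training tokens")

# A 5-10T parameter model would want on the order of 100-200T tokens,
# far more curated text than anyone has assembled so far.
```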
TFenrir t1_itni82q wrote
Reply to comment by gibs in Large Language Models Can Self-Improve by xutw21
I don't necessarily disagree, but I also think sometimes we romanticize the brain a bit. There are a lot of things we are increasingly surprised to achieve with language models, scale, and different training architectures. Chain of Thought, for example, seems to have become not just a tool to improve prompts, but a way to help with self-regulated fine-tuning.
I'm reading papers where Google combines more and more of these new techniques, architectures, and general lessons and they still haven't finished smushing them all together.
I wonder what happens when we smush more? What happens when we combine all these techniques, UL2/Flan/lookup/models making child models, etc etc.
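For what it's worth, my rough understanding of the self-improvement loop in the paper this thread is about looks like the sketch below: sample a bunch of CoT rationales, keep the ones whose answers agree with the majority vote, then fine-tune on them. The sample_cot and finetune callables are hypothetical stand-ins for whatever model API you're using:

```python
from collections import Counter

def self_improve(sample_cot, finetune, questions, samples_per_question=32):
    """Rough sketch of self-consistency filtering of chain-of-thought samples.

    sample_cot(question, temperature) -> (rationale, answer)  # hypothetical, model-specific
    finetune(examples) -> updated model                        # hypothetical, model-specific
    """
    training_examples = []
    for q in questions:
        # 1. Sample many chain-of-thought rationales at a high-ish temperature.
        candidates = [sample_cot(q, temperature=0.7) for _ in range(samples_per_question)]
        # 2. Majority-vote the final answers (self-consistency).
        majority_answer, _ = Counter(ans for _, ans in candidates).most_common(1)[0]
        # 3. Keep only the rationales that reach the majority answer.
        training_examples += [(q, rationale, ans)
                              for rationale, ans in candidates if ans == majority_answer]
    # 4. Fine-tune the model on its own filtered reasoning.
    return finetune(training_examples)
```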
All that being said, I think I actually agree with you. I am currently intrigued by different architectures that allow for sparse activation and are more conducive to transfer learning. I really liked this paper:
TFenrir t1_itmb3es wrote
Reply to comment by gibs in Large Language Models Can Self-Improve by xutw21
Hmmm, I would say that "prediction" is actually a foundational part of all intelligence, from my layman understanding. I was listening to a podcast (Lex Fridman) about the book... Thousand minds? Something like that, and there was an compelling explanation for why prediction played such a foundational role. Yann LeCun is also quoted as saying that prediction is the essence of intelligence.
I think this is fundamentally why we are seeing so many gains out of these new large transformer models.
TFenrir t1_it8bwvx wrote
Look, when The Matrix came out, there were a lot of people who thought we were in the Matrix. It was a big to-do until eventually most people just grew out of it.
Think about it this way - either we are in a simulation and we just will never know, so it's effectively a useless train of thought - or we aren't, and it's not a useful train of thought.
I recommend trying to ground yourself in a material understanding of the world, using Occam's razor in situations like this, and focusing on the future we could build, instead of the solipsistic path you're walking down.
TFenrir t1_isab99m wrote
Reply to comment by CleaverIam in Have we reached a technological plateau? by CleaverIam
When did we get used to a faster pace? We have bioscience literally creating new mechanisms that we are now using to cure genetic disease and to create mRNA treatments. That would have been science fiction a decade ago.
Boring materials science improvements have allowed for rechargeable batteries that are now not only in all our phones and electric cars, but in a quickly growing industry of electric bikes and other vehicles that just didn't exist a decade ago. I see them every day.
We just launched a replacement for the Hubble telescope, have internet slowly becoming available via satellite everywhere, and knocked an asteroid off track by crashing a spacecraft into it.
Our internet speeds and advancements in software and hardware allow for things like music and video streaming everywhere, where a decade ago we consumed all our content via disks or downloads. We have AR/VR rapidly making it into normal everyday use. We have smart homes where I literally shout into the ether to control my home or ask where my phone is.
I haven't even touched AI advancements since 2012. Or 2017. Both large milestones. We're now creating AI that I use every day at work, that people are making brand new kinds of applications with, that are generating text, code, and images - and we are on the cusp of AI that can control apps on your computer just by talking to it - don't believe me? Look into Adept and their action transformer.
I actually could go on and on. Almost every single company today has to be a tech company first. Why do you think that is, if tech advancements are slowing down?
TFenrir t1_is5nno3 wrote
Reply to comment by CleaverIam in Have we reached a technological plateau? by CleaverIam
So the self-driving cars you describe literally exist - level 4 autonomous vehicles that you can pay for, get into the back seat of, and be driven places in with no driver.
To the rest of your points: it's fundamentally illogical. A plateau implies an inability to continue climbing, but we can list hundreds of advancements that have happened. That you dismiss them does not mean that we have plateaued.
Like, watching the advancements of Boston Dynamics robots shows clear improvement. But the fact that you are even looking for humanoid robots to be out of the lab ignores all the other actual, practical advancements we've made in automation in warehouses and factories with robotics.
TFenrir t1_ird090m wrote
Reply to comment by [deleted] in Google & TUAT’s WaveFit Neural Vocoder Achieves Inference Speeds 240x Faster Than WaveRNN by Dr_Singularity
The nature of these research papers is:

- They often come out of private companies' research, so much of the data used for training, and the models that come out of the research, are understandably not just available for anyone to use.
- The papers themselves are often reproducible - the mechanisms and the architecture from them directly contribute to open source efforts. Take a look at the stable diffusion subreddit: a research paper came out a few weeks ago, from Google, and the techniques from it are now being applied. I just saw another research paper, out less than a week ago from Google, being applied to create 3D models.
TFenrir t1_ircxo8d wrote
Reply to comment by [deleted] in Google & TUAT’s WaveFit Neural Vocoder Achieves Inference Speeds 240x Faster Than WaveRNN by Dr_Singularity
I think if you expect every paper that is shared here to result in a model that's accessible or an app, you are going to be perpetually disappointed. That's... Just not how it works
TFenrir t1_irbos42 wrote
Reply to comment by ihateshadylandlords in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
> If the AGI model can be applied to programs that the public can use (like GPT3), then that would be great.
A publicly available AGI just wouldn't be possible for quite a while after its invention, though. I don't even really call GPT publicly available - you have API access, but you don't actually have access to the model itself. We do have other genuinely public models though: Stable Diffusion, GPT-J, RoBERTa, etc.
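That distinction matters in practice - with GPT-3 you call an API, while a genuinely public model lets you download the weights and run them yourself. A minimal sketch with the Hugging Face transformers library and GPT-J (big download, and it needs serious RAM or a large GPU):

```python
# Sketch: running a genuinely public model (GPT-J-6B) locally,
# as opposed to hitting a hosted API like GPT-3's.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer("The difference between an API and local weights is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```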
Regardless, think of it this way... Imagine a group of scientists standing around a monitor, attached to a private, heavily secured internal network, which uses a distributed supercomputer just to run inference on a model they have just finished training in a very secure facility. At this point the models that came before have been so powerful that containment is a legitimate concern.
They are there to evaluate whether or not this model constitutes an AGI, if it has anything resembling consciousness, if it's an existential threat to the species.
They're not going to just... Release that model into the wild. They're not even going to give the public access, or awareness of this model in any way shape or form, for a while.
That doesn't even get into the hardware requirements that would probably exist for this first model.
TFenrir t1_irbgq74 wrote
Reply to comment by ihateshadylandlords in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
What do you mean by proof of concept? These are real models, and real insights that we gain. Those insights are sometimes applied almost immediately to models the public has access to.
Do you mean, in this 2032 prediction, are they talking about AGI being something that's available to the public or only available to people behind closed doors? It would be the latter, because the nature of this prediction is that it would be emerging in the bleeding edge super computers that Google is using in their research.
Honestly I'm not even sure how AGI could be turned into a product, it would just be too... Disruptive? The nature of how regular people would interact with it is a big question mark to me.
TFenrir t1_ir5x4wo wrote
Woah, this is a pretty impressive achievement. I've only skimmed it, but discovering fundamentally new algorithms opens the door to a lot of really impressive advancements - potentially something based on this could eventually create new algorithms that usurp the transformer.
TFenrir t1_j1rrbr5 wrote
Reply to Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
Two things:
1. The only reason the rich would have for keeping it out of the hands of the non-rich is that it would somehow negatively impact them if the non-rich had this technology.

2. The nature of much of this AI doesn't lend itself to being restricted very easily. It's too reproducible, given enough time. It would require the synchronized efforts of people and governments across the world to keep it restricted to a select few.
The reason the rich have things that we (I'm putting myself in the category of non-rich, though compared to some in the world I'm incredibly wealthy) don't have isn't primarily that they don't want us to have them, but that those things cost more money than we can afford, and they have no problem affording them.
Removing the barrier of cost generally makes things accessible to everyone - this is why smartphones are prolific, for example, even in the developing world.