visarga t1_irllww0 wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Oh yes we do. A whole internet of expired text will not help when you need factually correct recent data.
visarga t1_irdctls wrote
Is the library specific to CV? How does it compare to https://sbert.net/docs/package_reference/losses.html
visarga t1_ir12jh9 wrote
Reply to comment by fignewtgingrich in AI Generated Movies/TV by fignewtgingrich
I agree it's gonna be "abused" by humans as well.
visarga t1_iqzerze wrote
Reply to AI Generated Movies/TV by fignewtgingrich
> If we assume AI can eventually create a movie that is oscar nomination worthy every 10 seconds for essentially no cost
It's not gonna be a "movie" but more like a sim or a game, and we're not going to make it for entertainment but as a training ground for AI. Simulation goes hand in hand with AI because real world data is expensive and limited but sims only cost electricity to run.
We are already seeing generative models used as a source of training data. link
visarga t1_iqxstx5 wrote
Reply to comment by dalledoeswalle in When will our lives get better collectively. The clock is ticking!! by ObjectiveDeal
Just think about how you use your phone, then try explaining that to a person from 200 years ago. I bet they'd think you are already deep into the singularity by their standards.
Having food, water, a toilet, electricity and internet is nothing to brag about; even the poorest of us should have them. But just a couple of centuries ago these things would have been off the scale.
If you look back over decades or a couple of centuries, life has been getting steadily better. It wasn't fake progress, but we're busier than ever.
Many people think after the singularity we'll have nothing to do anymore. On the contrary, I think we'll have more than before. We'll still compete and we'll often be unhappy like before.
Who said the purpose of AI should be to improve our lives? The purpose of life is to expand and exist despite the challenges it meets. That means competition and exploration, not peace and detachment. We didn't come out on top of nature by being nice, we exploited every advantage and knowledge along the way.
visarga t1_iqujej4 wrote
Reply to comment by DoneM1 in What are the best web-based AI tools accessible right now? by [deleted]
GPT-3 for more open-ended tasks, like generating baby names or business ideas. Here's a large list of applications.
visarga t1_iqsob64 wrote
Reply to comment by Kolinnor in Self-Programming Artificial Intelligence Using Code-Generating: a self-programming AI implemented using a code generation model can successfully modify its own source code to improve performance and program sub-models to perform auxiliary tasks. by Schneller-als-Licht
Cool down. It's not as revolutionary as it sounds.
First of all, they reuse a code model.
> Our model is initialized with a standard encoder-decoder transformer model based on T5 (Raffel et al., 2020).
They use this model to randomly perturb the code of the proposed model.
> Given an initial source code snippet, the model is trained to generate a modified version of that code snippet. The specific modification applied is arbitrary
Then they use evolutionary methods - a population of candidates and a genetic mutation and selection process.
> Source code candidates that produce errors are discarded entirely, and the source code candidate with the lowest average training loss in extended few-shot evaluation is kept as the new query code
A few years ago we had black-box optimisation papers using sophisticated probability estimation to pick the next candidate. It was an interesting subfield. This paper just tries random mutations.
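The mutate-evaluate-select loop described above can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: `mutate` stands in for the T5-based code model (here it just perturbs a numeric "parameter" vector instead of source code), and `loss` stands in for the few-shot training loss.

```python
import random

def mutate(candidate, rng):
    """Apply an arbitrary random modification.
    Toy stand-in for the T5 code model's code edits."""
    i = rng.randrange(len(candidate))
    new = list(candidate)
    new[i] += rng.uniform(-1.0, 1.0)
    return new

def loss(candidate):
    """Toy training loss: squared distance from the all-zero vector."""
    return sum(x * x for x in candidate)

def evolve(seed_candidate, generations=50, population=8, seed=0):
    rng = random.Random(seed)
    best = seed_candidate
    for _ in range(generations):
        # Generate a population of mutated candidates from the current best.
        candidates = [mutate(best, rng) for _ in range(population)]
        # In the paper, candidates that produce errors are discarded;
        # in this toy setting every candidate is valid.
        # Keep the candidate with the lowest loss as the new query.
        best = min(candidates + [best], key=loss)
    return best

best = evolve([3.0, -2.0, 1.5])
```

Because the incumbent is always retained in the selection step, the loss is non-increasing across generations, which is the only guarantee this kind of random search gives you.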
visarga t1_irloorr wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
> You can just split a large text to parts and feed each one of them
This won't capture long-range interactions between passages or preserve their ordering.
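To see why: splitting is typically done with a fixed window and some overlap, as in this minimal sketch (the tokenizer here is naive whitespace splitting, an assumption for illustration). Overlap gives each chunk a little neighboring context, but any dependency longer than one window, and the global ordering of chunks, is still invisible to the model.

```python
def split_into_chunks(text, max_tokens=512, overlap=64):
    """Naive whitespace-token chunking with a sliding window.
    Consecutive chunks share `overlap` tokens of context."""
    tokens = text.split()
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```

Each chunk is processed independently, so a fact stated in chunk 1 cannot inform the model's reading of chunk 10.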