visarga
visarga t1_iszuegi wrote
Reply to comment by gravitas_shortage in [D] GPT-3 is a DREAM for citation-farmers - Threat Model Tuesday #1 by TiredOldCrow
Rather than detecting fakes, I'd prefer a model that can generate and implement papers. I bet there's a ton of samples to train on. Close the loop on AI self-improvement.
visarga t1_isyfm7d wrote
Reply to comment by ajt9000 in [D] How frustrating are the ML interviews these days!!! TOP 3% interview joke by Mogady
I don't think most candidates have a repo to show, maybe just an empty one.
visarga t1_isqa867 wrote
Reply to comment by Background-Loan681 in Is this imagination? by Background-Loan681
> Does the chatbot imagined something in their head before describing it to me as a prompt?
You're attributing to the model what is really the merit of the training data. It's culture that knows what a great answer to your task looks like - once that culture is loaded into a brain or an AI.
What I mean is that the substrate doesn't matter - as long as it has learned the distribution, it can imagine coherent and amazing things. That's all the merit of the training data, though. The brain or the model just dutifully carries it in a compact form that can be unfolded in new ways on demand.
visarga t1_isq80ci wrote
Reply to comment by raccoon8182 in Is this imagination? by Background-Loan681
It might surprise you that GPT-3-like models don't have just one bias or one point of view - that of their builders - as is often claimed.
The model learns all personality types and emulates their biases to a very fine degree. It is in fact so good that researchers can run simulated polls on GPT-3. In order to replicate the target population, they prompt the model with a collection of personality profiles in the right distribution.
So you, as the user of the model, are in charge. You can make it assume any bias you want - just pick your preferred poison. There is no "absolutely unbiased" mode unless you have that kind of training data. That means the model is a synthesis of all personalities; it's more like humanity than like a single person.
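The poll-simulation idea, roughly, looks like this - a sketch only, where `query_model` is a placeholder for whatever LLM API you use, and the persona profiles and weights are made up for illustration:

```python
import random

# Made-up persona profiles and population weights, purely for illustration.
PERSONAS = [
    ("a 45-year-old rural schoolteacher who votes conservative", 0.3),
    ("a 28-year-old urban software engineer who votes liberal", 0.5),
    ("a 67-year-old retired factory worker, politically independent", 0.2),
]

QUESTION = "Do you support increased funding for public transit? Answer yes or no."

def query_model(prompt: str) -> str:
    """Placeholder for a call to a large language model API."""
    raise NotImplementedError

def simulate_poll(n_respondents: int = 1000) -> float:
    """Sample personas according to the target distribution and tally the answers."""
    profiles = [p for p, _ in PERSONAS]
    weights = [w for _, w in PERSONAS]
    yes = 0
    for _ in range(n_respondents):
        profile = random.choices(profiles, weights=weights, k=1)[0]
        prompt = f"You are {profile}.\nQuestion: {QUESTION}\nAnswer:"
        if query_model(prompt).strip().lower().startswith("yes"):
            yes += 1
    return yes / n_respondents
```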
visarga t1_isq72rp wrote
Reply to comment by Ortus12 in Is this imagination? by Background-Loan681
> when Open Ai starts plugging in different Ai systems into each other.
Language Model Cascades
https://twitter.com/karpathy/status/1550590818041311232?lang=en
visarga t1_isq5mvf wrote
Reply to comment by tooold4urcrap in Is this imagination? by Background-Loan681
I believe there is no substantial difference. Both the AI and the brain transform noise into some conditional output. AIs can be original in the way they recombine things - there's room for a bit of originality there - and humans can be pretty reliant on reusing other people's styles and concepts, so not as original as we like to imagine. Both humans and AIs are standing on the shoulders of giants. Intelligence is in the culture, not in the brain or the AI.
visarga t1_isozvm1 wrote
<offtopic> Where do I get a large-ish list of company names? Also, product names. </>
visarga t1_isij2xr wrote
Reply to [R] UL2: Unifying Language Learning Paradigms - Google Research 2022 - 20B parameters outperforming 175B GTP-3 and tripling the performance of T5-XXl on one-shot summarization. Public checkpoints! by Singularian2501
I'm wondering what the minimum hardware is to run this model - is this really the portable alternative to GPT-3?
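For a rough sense of scale (my own back-of-envelope numbers, not from the paper), the weights alone for a 20B-parameter model come out to roughly:

```python
params = 20e9   # ~20B parameters
gib = 2**30
print(f"fp32 weights: {params * 4 / gib:.0f} GiB")   # ~75 GiB
print(f"fp16 weights: {params * 2 / gib:.0f} GiB")   # ~37 GiB
print(f"int8 weights: {params * 1 / gib:.0f} GiB")   # ~19 GiB
```

So even in fp16 it's beyond a single consumer GPU, and activations (or the optimizer, for fine-tuning) add more on top.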
visarga t1_isbgl2o wrote
"anime girl with blue eyes" -> Generated image contains NSFW content
visarga t1_is9mzus wrote
Reply to comment by londons_explorer in [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
We need a learned physics model. There's so much video to train on; it's one of the most neglected modalities.
visarga t1_is9mk63 wrote
Reply to comment by [deleted] in [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
Not just simulation, LLMs can also benefit from other toys: search, code execution/REPL, sub-requests, calling external APIs.
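A minimal sketch of what plugging in such "toys" can look like - `call_llm` and `search` are placeholders rather than real APIs, and the `TOOL:<name>:<arg>` format is made up for the example:

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Placeholder for a language model call - not a real API."""
    raise NotImplementedError

def search(query: str) -> str:
    """Placeholder for a search tool (web, docs, vector DB...)."""
    raise NotImplementedError

def run_python(code: str) -> str:
    """Naive REPL tool: run a snippet in a subprocess and return its output."""
    result = subprocess.run(["python", "-c", code],
                            capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

TOOLS = {"SEARCH": search, "PYTHON": run_python}

def answer(question: str, max_steps: int = 5) -> str:
    """Let the model either emit a tool call ("TOOL:<name>:<arg>") or a final answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(transcript).strip()
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            transcript += f"{reply}\nResult: {TOOLS[name](arg)}\n"
        else:
            return reply
    return "No answer within the step budget."
```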
visarga t1_is0j3lv wrote
Reply to comment by polygon_lover in NovelAI Improvements on Stable Diffusion by Dr_Singularity
Even art students suck at hands.
visarga t1_is0fpyb wrote
I became aware of AI in 2007 when Hinton came out with Restricted Boltzmann Machines (RBMs, a dead end today). I've been following the field ever since and started learning ML in 2010. I am an ML engineer now, and I read lots of papers every day.
Ok, so my evaluation - I am surprised by the current batch of text and image generators. The game-playing agents and the protein-folding work are also impressive. I didn't expect any of them, even though I was following closely. Two other surprises along the way were residual networks, which put the deep into deep learning, and the impact of scaling up to billions of parameters.
I think we still need 10,000x scaling to reach human level in both intelligence and efficiency, but we'll have expensive-to-use AGI in a lab sooner.
I predict the next big thing will be large video models - not the ones we see today, but really large, like GPT-3. They will be great for robotics and automation, games, and of course video generation. They have "procedural" knowledge - how we do things step by step - that is missing from text and images, and they align video/images with audio and language. Unfortunately, videos are very long, so they are hard to train on.
visarga t1_is0ek0y wrote
Reply to comment by BigMemeKing in Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
I agree, the problem is deeper. We have a low level of trust in each other, so we ignore things.
visarga t1_is0cb6i wrote
Reply to comment by onyxengine in Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
> Which will make it easy for people to write off the truth.
Wouldn't it be nice if there were a place where Truth was written down so we could all check things? Unfortunately that is not possible, so we're left with a continually evolving social truth.
visarga t1_is0c489 wrote
Reply to comment by BigMemeKing in Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
AI works both ways, it's a tool. You can use it to counter disinformation.
visarga t1_irzod3c wrote
Reply to comment by _Arsenie_Boca_ in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
Not a whole team, not even a whole job, but plenty of tasks can be automated. Averaged over many developers, there is a cumulative impact.
But on the other hand, software has been cannibalising itself for 70 years and we're still accelerating - there's always space at the top.
visarga t1_irznvdp wrote
Reply to comment by RobbinDeBank in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
The original PILE.
visarga t1_irzidqj wrote
Reply to comment by vman512 in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
> you'd need a gigantic dataset for this to work
If that's the problem, then OP can use Lexica.art to search their huge database with a picture (they use CLIP), then lift the prompts from the top results. I think they even have an API. But the matching images can be quite different.
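Even without their API, the underlying retrieval idea is easy to sketch locally with CLIP - here against a prompt/image database you've collected yourself (the checkpoint name is the standard public one; `prompt_db` is assumed to be a list of (prompt, image_path) pairs):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path: str) -> torch.Tensor:
    """Return a normalized CLIP embedding for one image."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def nearest_prompts(query_path: str, prompt_db: list[tuple[str, str]], top_k: int = 5):
    """prompt_db: (prompt, image_path) pairs. Re-embedding the whole database per
    query is wasteful - cache the embeddings in practice."""
    query = embed_image(query_path)
    scored = [((query @ embed_image(path).T).item(), prompt) for prompt, path in prompt_db]
    return sorted(scored, reverse=True)[:top_k]
```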
visarga t1_irziac5 wrote
Reply to comment by milleniumsentry in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
Now is the time to convince everyone to embed the prompt data in the generated images, since the trend is just starting. It could also be useful later, when we crawl the web, to separate real images from generated ones.
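For PNGs this is already easy with Pillow's text chunks - a rough sketch, with made-up file names and field names:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

prompt = "anime girl with blue eyes, watercolor, highly detailed"

# Write the prompt (and any other generation settings) into the PNG's text chunks.
image = Image.open("generated.png")                    # made-up file name
metadata = PngInfo()
metadata.add_text("prompt", prompt)
metadata.add_text("generator", "stable-diffusion")     # example field name
image.save("generated_with_prompt.png", pnginfo=metadata)

# Anyone receiving the file can read the prompt back out.
print(Image.open("generated_with_prompt.png").text["prompt"])
```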
visarga t1_irzdrho wrote
Reply to comment by _Arsenie_Boca_ in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
> if LSTMs would have received the amount of engineering attention that went into making transformers better and faster
There was a short period when people were trying to improve LSTMs using genetic algorithms or RL.
- An Empirical Exploration of Recurrent Network Architectures (2015, Sutskever)
- LSTM: A Search Space Odyssey (2015, Schmidhuber)
- Neural Architecture Search with Reinforcement Learning (2016, Quoc Le)
The conclusion was that the LSTM cell is somewhat arbitrary and many other architectures work just as well, but none much better. So people stuck with classic LSTMs.
visarga t1_irta9lz wrote
Reply to comment by MassiveIndependence8 in Why does everyone assume that AI will be conscious? by Rumianti6
It's not just a matter of a different substrate. Yes, a neural net can approximate any continuous function, but not always in a practical or efficient way. The universal approximation results only guarantee that some sufficiently wide network exists - they say nothing about the fixed-size networks we use in practice (rough statement below).
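For reference, the classical statement is roughly the following - note that the width N exists but is not bounded in advance:

```latex
% Universal approximation (informal, Cybenko 1989 / Hornik 1991): for any continuous
% f on a compact set and any tolerance, SOME finite width N suffices, but N is not
% bounded in advance and can be impractically large.
\forall f \in C(K),\; K \subset \mathbb{R}^d \text{ compact},\; \forall \varepsilon > 0:\quad
\exists N,\; \{v_i, w_i, b_i\}_{i=1}^{N} \text{ such that }
\sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} v_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon .
```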
But the major difference comes from the environment of the agent. Humans have human society, our cities, and nature as an environment. An AI agent, the kind we have today, has access to a few games and maybe a simulation of a robotic body. We are billions of complex agents, each more complex than the largest neural net; they are small and alone, and their environment is not real but an approximation. We can do causal investigation by intervening in the environment and applying the scientific method; they can't do much of that, because they don't have the access.
The more fundamental difference comes from the fact that biological agents are self-replicators and artificial agents usually are not (AlphaGo had an evolutionary thing going). Self-replication leads to competition, which leads to evolution and goals aligned with survival. An AI agent would need something similar to be guided toward evolving its own instincts; it needs to have "skin in the game", so to speak.
visarga t1_irt7w5u wrote
Reply to comment by HeinrichTheWolf_17 in Why does everyone assume that AI will be conscious? by Rumianti6
> Have you heard of Integrated Information Theory?
That was a wasted opportunity. It didn't lead anywhere, it's missing essential pieces, and it has been proven that "systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data" score highly on its integrated-information measure (link).
A theory of consciousness should explain why consciousness exists in order to explain how it evolved. Consciousness has a purpose - to keep the organism alive and to spread its genes. This purpose explains how it evolved, as part of the competition for resources among agents sharing the same environment. It also explains what it does, why, and what the cost of failing is.
I see consciousness and evolution as a two-part system in which consciousness is the inner loop and evolution the outer loop. There is no purpose here except that agents who don't fight for survival disappear and are replaced by agents that do. So in time only agents aligned with survival can exist, and purpose is "learned" by natural selection, each species fitted specifically to its own niche.
visarga t1_irt4m6q wrote
Reply to AI art 256x faster by Ezekiel_W
An important observation is that it has only been demonstrated on images sized 32x32 and 64x64 - a long way from 512x512. Papers that only test on small datasets are usually avoiding a deficiency.
visarga t1_it15bg3 wrote
Reply to comment by ginger_gcups in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
> A supply of matter and energy
I think some raw materials are going to be inevitably contested unless we find abundant replacements or reach a 100% recycling rate. A replicator won't save us if it needs rare material X.