visarga t1_iwkaawn wrote
Reply to comment by Evil_Patriarch in A typical thought process by Kaarssteun
I've never seen the "herbivore men" term tied to crowding before. Is that the reason, crowding?
visarga t1_iwk9y88 wrote
Reply to comment by MGorak in A typical thought process by Kaarssteun
> Kids take a lot of resources to raise and it keeps getting worse.
I want to draw a parallel here - automation is taking jobs away, but our expectations and desires outgrow it, so even after 100 years of fast tech progress we still have a low unemployment rate. I don't think AI will mean idle humans with nothing to do and no motivation to try. We are insatiable desire machines.
visarga t1_iwk988q wrote
Reply to comment by UnrulyNemesis in A typical thought process by Kaarssteun
Moore's law has slowed from 2x every 1.5 years to 2x every 20 years, and we're already 7 years deep into this stage. Because of that, AI research is expensive, state-of-the-art models are inaccessible to normal researchers, and democratic access to AI is threatened. Building a state-of-the-art fab is so necessary and so difficult that it has become a national security issue. I think there's room for concern, even while acknowledging the fast progress.
visarga t1_iwk7vbl wrote
Reply to comment by gynoidgearhead in A typical thought process by Kaarssteun
Exploration is necessary but risk is expensive. If you want innovation you've got to have rewards or some other forcing factor, such as imminent danger.
visarga t1_iwh8n84 wrote
Reply to comment by -ZeroRelevance- in Ai art is a mixed bag by Nintell
I would first collect examples of frequent issues: double heads, noodle hands, deformities. These are the negative examples. Then I would collect positive examples from the training set, since those images are presumably normal, but match them as closely as possible to the negative examples using cosine similarity, and train a rejection model on the two sets.

To generate prompts I would fine-tune GPT-2 on a large collection of prompts crawled from the net. Feed those prompts into SD, reject the deformed images, rank the rest with an image-quality model (probably easy to find), and keep only the high-quality ones.
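Roughly like this, as a sketch - assuming CLIP embeddings via the open_clip library and a plain logistic-regression classifier; the file paths and model tag are illustrative, not a tested pipeline:

```python
import numpy as np
import torch
import open_clip
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Illustrative file lists, not real data.
bad_paths = ["gen/double_head_01.png", "gen/noodle_hands_02.png"]
train_paths = ["train/normal_01.png", "train/normal_02.png"]

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

def embed(paths):
    """L2-normalized CLIP image embeddings for a list of files."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        feats = model.encode_image(batch)
        return (feats / feats.norm(dim=-1, keepdim=True)).numpy()

bad_emb = embed(bad_paths)      # negatives: deformed generations
train_emb = embed(train_paths)  # candidate positives from the training set

# Match each negative to its most similar training image (a dot product of
# normalized vectors is cosine similarity), so positives and negatives
# differ mainly in the deformity itself.
good_emb = train_emb[(bad_emb @ train_emb.T).argmax(axis=1)]

X = np.concatenate([good_emb, bad_emb])
y = np.concatenate([np.ones(len(good_emb)), np.zeros(len(bad_emb))])
reject = LogisticRegression(max_iter=1000).fit(X, y)

# At generation time, keep only the images scored as normal.
keep = reject.predict(embed(["gen/new_sample.png"])) == 1
```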
You can generate as many images as you like. They would be uncopyrightable, since they were generated end to end without human supervision, which makes them great for building a huge AI-art training set.

You could also replace all known artist names with semantic hashes, keeping the ability to select styles without naming anyone. We would have style codes or style embeddings instead of artist names.
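One crude way to get something like a semantic hash: binarize a style embedding by sign, a simple locality-sensitive code where similar styles share many bits. Deriving the style embedding (say, the mean CLIP embedding of an artist's images) is an assumption here, and the random vector is just a stand-in:

```python
import numpy as np

def style_code(style_embedding: np.ndarray, bits: int = 16) -> str:
    """Binarize an embedding by sign into a short locality-sensitive
    code: similar style embeddings tend to share most bits."""
    signs = (style_embedding[:bits] > 0).astype(int)
    return "<style_" + "".join(map(str, signs)) + ">"

# The real embedding could be e.g. the mean CLIP embedding of an
# artist's images; a random vector stands in for illustration.
rng = np.random.default_rng(0)
print(style_code(rng.standard_normal(512)))  # e.g. <style_0110...>
```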
visarga t1_iwdz0jc wrote
Reply to comment by -ZeroRelevance- in Ai art is a mixed bag by Nintell
But if you have a selection process it might become a virtuous cycle. An evolutionary art system based on humans and AI.
visarga t1_iwdy4ug wrote
Reply to comment by Several-Car9860 in Theories of consciousness - Seth, A.K. and Bayne, T. (2022). by Singularian2501
You can explain qualia (the subjective or qualitative properties of experiences): they are perceptions and their emotional charge, in the context of learning how to achieve goals. The key is that last part. The environment plus the goal feeds the learning process and gives shape to our emotional reactions.
visarga t1_iwdt41n wrote
Reply to Meta AI Has Built A Neural Theorem Prover That Has Solved 10 International Math Olympiad (IMO) Problems — 5x More Than Any Previous Artificial Intelligence AI System by Shelfrock77
Sometimes people say, "Language models are like parrots. They learn patterns, but they could never do something novel or surpass their training data."

This is proof that surpassing the training data is possible. What you need is to learn from validation. The process can be applied to math and code because complex solutions often have trivial validations.

When you don't have a symbolic way to validate a solution, you can sample an ensemble of solutions and choose the one that appears most frequently.
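A minimal sketch of that majority-vote idea (often called self-consistency). Here sample_solution stands in for "prompt the model at temperature > 0 and extract the final answer"; the toy sampler just simulates it:

```python
import random
from collections import Counter

def most_consistent_answer(sample_solution, n_samples: int = 32) -> str:
    """Sample many candidate solutions and return the most frequent
    final answer; agreement is a cheap proxy for validity when no
    symbolic checker exists."""
    answers = [sample_solution() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in: the "model" answers correctly 70% of the time.
toy = lambda: "42" if random.random() < 0.7 else random.choice(["41", "43"])
print(most_consistent_answer(toy))  # almost always "42"
```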
visarga t1_iwb604s wrote
Reply to comment by AkaneTori in Ai art is a mixed bag by Nintell
Art correctness is in the eye of the beholder; I feel like you're gatekeeping the new art kids. Let them eat cake.
visarga t1_iwanxre wrote
Reply to comment by Quealdlor in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
There is a tool to search the images used to train Stable Diffusion. It has semantic search, so you can type in a "prompt" and it will find the closest matches among the real images, including art. You can also search by image.
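Under the hood this kind of semantic search is just CLIP text-to-image retrieval over precomputed embeddings - roughly like this sketch, where the embedding file and model tag are assumptions:

```python
import numpy as np
import torch
import open_clip

# Assumed: precomputed, L2-normalized CLIP embeddings of the training images.
image_emb = np.load("training_image_embeddings.npy")

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def search(prompt: str, k: int = 10) -> np.ndarray:
    """Indices of the k training images closest to the text prompt."""
    with torch.no_grad():
        q = model.encode_text(tokenizer([prompt]))
        q = (q / q.norm(dim=-1, keepdim=True)).numpy()
    # Cosine similarity reduces to a dot product on normalized vectors.
    return (image_emb @ q.T).ravel().argsort()[::-1][:k]

print(search("a castle at sunset, oil painting"))
```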
visarga t1_iwamsbt wrote
You say it's enough to import a pretrained transformer from HuggingFace. I say not even that: in most cases you don't need to create a dataset and train a model at all, just try a few prompts on GPT-3.

For the last 4 years I worked on an information extraction task and created an in-house dataset, and surprise - it seems GPT-3 can solve the task without any fine-tuning. GPT-3 is eating the regular ML engineering and labelling work. What's left to do? Just templating prompts in and parsing text out.
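Literally that. A sketch with the OpenAI completions API - the invoice fields are a made-up example, not my actual task:

```python
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

TEMPLATE = """Extract the company name, date, and total amount from the
invoice text below. Answer as JSON with keys "company", "date", "total".

Invoice:
{document}

JSON:"""

def extract(document: str) -> dict:
    """Template the prompt in, parse the text out - the whole pipeline."""
    resp = openai.Completion.create(
        model="text-davinci-002",
        prompt=TEMPLATE.format(document=document),
        max_tokens=200,
        temperature=0,  # keep extraction output deterministic-ish
    )
    return json.loads(resp["choices"][0]["text"])

print(extract("ACME Corp, 2022-11-01. Total due: $1,234.56"))
```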
visarga t1_iw91m20 wrote
Reply to comment by Tanglemix in Ai art is a mixed bag by Nintell
With NVIDIA's eDiff-I you can paint a sketch in addition to your text prompt.
visarga t1_iw90hqd wrote
Reply to comment by BearStorms in Ai art is a mixed bag by Nintell
What would happen if we loop this a few times?
visarga t1_iw8wgfn wrote
Reply to comment by Cultural_League_3539 in Ai art is a mixed bag by Nintell
An asshole because it gave everyone illustration superpowers?
visarga t1_iw8v2tm wrote
Reply to comment by AkaneTori in Ai art is a mixed bag by Nintell
> non artists invading the space
Many people using AI art generators do it for personal enjoyment - single-use, throw-away art, sightseeing, imagination fun. Or to see themselves and their loved ones in all sorts of imaginary situations and costumes. They're not trying to take over professional art.
visarga t1_iw8udqf wrote
Reply to comment by Kaarssteun in Ai art is a mixed bag by Nintell
I don't believe it's a purge, it's a transformation. There is more potential for art now than before, but it's more evenly spread out.
visarga t1_iw8tf9r wrote
Reply to comment by plywood747 in Ai art is a mixed bag by Nintell
I bet you can use it to fish for ideas.
visarga t1_iw8l43h wrote
Reply to Ai art is a mixed bag by Nintell
You forgot the third element here: technology marching forward. Discoveries are coming one by one from everywhere - the USA, Europe, China, from universities, companies, and hackers teaming up with visionary investors. It's impossible to get everyone to stop developing these models, and if even one of them disagrees and releases a trained model, it becomes impossible to control how it's used. We already have pretty powerful models in the wild, and nobody can put them back. What I mean is that technology, through a thousand forces, will march progress ahead whether we like it or not.

It might not be apparent, but ML engineers' jobs are being "taken away" by GPT-3 at huge speed. What used to take months to code and years to label can be achieved today with a prompt and no training data. No need to know PyTorch, Keras or TensorFlow. No need to know the exact architecture of the network or how it was trained. This used to be the bread and butter of many ML engineers. So it's not just artists - we all have to be assimilated by the new technology and find our new place.
visarga t1_iw6cj8l wrote
Reply to comment by Rezeno56 in Will this year be remembered as the start of the AI revolution? by BreadManToast
That's easy.
Neural nets before 2012 were small, weak and hard to train. But in 2012 we got a sudden 10% jump in image classification accuracy. Within the next 2 years virtually all ML researchers had switched to neural nets, and all the papers were about them. This period lasted 5 years in total and scaled models from the size of an "ant" to that of a "human". Almost all the fundamentals of neural nets were discovered during this time.

Then in 2017 we got the transformer, which led to unprecedented scaling jumps - from the size of a "human" to that of a "city". By 2020 we had GPT-3, and today, just 5 years after the transformer, we have multiple generalist models.

On a separate arc, reinforcement learning, we got the first breakthroughs in 2013 with DeepMind's Deep Q-Learning on Atari games, and by 2015 we had AlphaGo. Learning from self-play has proven to be amazing. There is cross-pollination between large language models and RL: robots with GPT-3 strapped on top can do amazing things, and GPT-3 trained with self-play like AlphaGo can improve its ability to solve problems. It can already solve competition-level problems in math and code.
The next obvious step is a massive video model, both for video generation and for learning procedural knowledge - how to do things step by step. YouTube and other platforms are full of video, which is a multi-modal format of image, audio, voice and text captions. I expect these models to revolutionise robotics and desktop assistants (RPA), besides media generation.
visarga t1_iw6ab1o wrote
Reply to comment by Reddituser45005 in Will this year be remembered as the start of the AI revolution? by BreadManToast
Maybe state-of-the-art foundation models are hard to build without deep pockets, but applications built on these models are 100x easier to make now than before. I mean, you just tell the model what you want. That lowers the entry barrier for the public. Everyone can get in on it.

It used to be necessary to collect a dataset, create a custom architecture, train many models, pick the best, iterate on the dataset, etc. to get the same results. The work of months or years is compressed into a prompt. It's not just artists who are being automated; traditional ML engineers are too.

The only solution for ML engineers is to jump on top of GPT-3 and its family; there's no more work left to do at a lower level. I'm talking from personal experience: a 4-year-old project with 5 engineers and 3 labellers was solved at first sight by GPT-3 with no tuning. Just ask it nicely, that's all you have to do now.
visarga t1_iw6a1qa wrote
Reply to comment by green_meklar in Will this year be remembered as the start of the AI revolution? by BreadManToast
Maybe it was 2017, the year "Attention Is All You Need" was published. That paper changed deep learning completely, and everything we do today uses transformers.
visarga t1_iw4kyc3 wrote
Reply to comment by Quealdlor in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
> There are billions of images on the web and you could spend your whole life browsing through what has been uploaded to this point, without even considering what will be uploaded in the coming years
That's a very good argument for why this whole reaction against AI art is overblown. What's a few billion extra AI images on top of the billions already out there? It's not like we were lacking choice before.

But AI to the rescue: have you seen how nice it is to browse lexica.art by selecting "Explore this style" on an image? It's like an AI Pinterest. AI can help you find the art you like among the billions of images out there.
visarga t1_iw4jdkc wrote
Reply to comment by IndependenceRound453 in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
Copyright law generally protects the fixation of an idea in a "tangible medium of expression", not the idea itself, nor any associated processes or principles.

Neural networks don't store the images inside; they decompose them into elementary concepts and then recompose new images from those concepts. Basically, they learn the unprotected part of the training set.

Think about the sizes: 4 billion images shrunk into 4GB of weights works out to a measly byte per input image. Not even a full pixel! There is simply no space to store the images themselves; the model can only store general principles.

Getting offended that a single byte was learned from one of your images seems unjustified. On the other hand, it looks ugly how pre-AI artists are gatekeeping the new wave of AI-assisted artists. Let people eat cake.
visarga t1_iw4gbng wrote
Reply to comment by ReadSeparate in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
Before the PC there were plenty of professional typists and secretaries. Their jobs disappeared or were transformed, and we ended up with an even larger number of office jobs on the PC.

Generative AI will support jobs in many fields - medicine, design, advertising, hobbies and fan fiction. Art itself might get a paradigm shift soon, as humans strive to find something AI can't do. The same happened when photography was popularised, and look how many more uses photography has than painting ever had.
visarga t1_iwkbncq wrote
Reply to comment by 94746382926 in Cerebras Builds Its Own (1 Exaflop) AI Supercomputer - Andromeda - in just 3 days by Dr_Singularity
One Cerebras chip is about as fast as 100 top GPUs, but in memory it only handles 20B weights - they mention GPT-NeoX 20B. Since GPT-3 has 175B parameters, they would need to stack about 10 of these chips to train it.