Recent comments in /f/MachineLearning

SomeLongWindedIdiot t1_jd07i7z wrote

Why is AI safety not a major topic of discussion here and in similar communities?

I apologize if the non-technical nature of my question is inappropriate for the sub, but as you’ll see from my comment I think this is very important.

I have been studying AI more and more over the past few months (for perspective on my level: Andrew Ng’s Deep Learning course, Kaggle competitions and simple projects, reading a few landmark papers, and digging into transformers). The more I learn, the more I am both concerned and hopeful. It seems all but certain to me that AI will completely change life as we know it in the next few decades, quite possibly the next few years if the current pace of progress continues. It could change life into something much, much better or much, much worse, depending on who develops it and how safely they do it.

To me, safety is far and away the most important subfield in AI right now, but it is one of the least discussed. Even if you think there is a low chance of AI going haywire on its own, in my admittedly very non-expert view it’s obvious that we should also be concerned about the judgment and motives of the people developing and controlling the most powerful AIs, and about the risks of such powerful tools being accessible to everyone. At the very least I would want discussion of actionable things we can all do as individuals.

I feel a strong sense of duty to do what I can, even if that’s not much. I want to donate a percentage of my salary to funding AI safety, and I am looking into whether I can contribute work to any AI safety organizations. I have a few of my own ideas along these lines; does anyone have any suggestions? I think we should also discuss ways to shift the incentives of major AI organizations. Maybe there isn’t a ton we can do (although with a LOT of people looking, there is room for a major movement), but it’s certainly not zero.

3

djmaxm t1_jd05tgt wrote

I have a 4090 with 32GB of system RAM, but I am unable to run the 30B model because it exhausts the system memory and crashes. Is this expected? Do I need a bunch more RAM? Or am I doing something dumb and running the wrong model? I don't understand how the torrent model, the huggingface model, and the .pt file relate to each other...
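For what it's worth, one common cause of this is the loader materializing the full fp32 weights in system RAM before anything reaches the GPU. A minimal sketch of the usual workaround with the Hugging Face transformers API, assuming a converted HF-format checkpoint (the path below is a placeholder, not a real repo id):

```python
# Sketch: loading a 30B-class model without exhausting system RAM.
# Assumes a Hugging Face-format checkpoint directory; path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-30b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,   # fp16 halves the in-memory footprint vs fp32
    low_cpu_mem_usage=True,      # avoid building a second full copy in system RAM
    device_map="auto",           # needs `accelerate`; offloads layers that don't fit
)
```

For scale, 30B parameters in fp16 is roughly 60 GB of weights, so even a clean load won't fit a 4090's 24 GB without quantization or offloading. As for the formats: the torrent weights are typically the original consolidated checkpoint, while the huggingface directory is the converted format that a loader like the sketch above expects.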

3

nolimyn t1_jd01nm3 wrote

the LoRA is like a modular refinement of the base language model; in this case it's the part that makes it feel like a chatbot / assistant and makes it follow instructions.

you can see the same concept over at civitai.com if you filter by LoRAs. Something like a LoRA for one character can be run on different checkpoints that focus on photorealism, anime, etc.
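As an illustration of that modularity, here is a minimal sketch using the peft library; both paths are placeholders, and swapping the base checkpoint is exactly what lets one adapter ride on different models:

```python
# Sketch: stacking a LoRA adapter on a base checkpoint with peft.
# Both paths below are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")

# The adapter is a small set of low-rank weight deltas stored separately
# from the base weights; pointing it at a different `base` is what lets
# one LoRA run on multiple checkpoints.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
```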

1

wind_dude t1_jd012ru wrote

I'm not big into image generation, but... some thoughts...

- SSIM - I believe the issue here has to do with the quality of the image captions. Perhaps merging multiple captions for the same image would help.

- could try training boolean `is_junk` classifiers for both images and captions, and then using those models to remove junk from the training data (a toy caption-side sketch follows below).
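A toy sketch of the caption-side version of that idea; the data and labels below are invented placeholders standing in for hand-labeled examples:

```python
# Toy sketch: a boolean `is_junk` classifier over captions, then used to
# filter the training set. Data and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

captions = [
    "a photo of a dog running on a beach",
    "IMG_0042 stock image free download!!",
    "a painting of mountains at sunset",
    "click here for more free wallpapers",
]
is_junk = [0, 1, 0, 1]  # hand-labeled: 1 = junk caption

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(captions, is_junk)

# keep only the (image, caption) pairs whose caption the model calls clean
keep_mask = clf.predict(captions) == 0
clean_captions = [c for c, keep in zip(captions, keep_mask) if keep]
```

The same pattern would apply on the image side with a small vision model in place of the TF-IDF features.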

1