Recent comments in /f/MachineLearning
Individual-Sky-778 t1_jconas3 wrote
Reply to comment by banatage in [Discussion] Future of ML after chatGPT. by [deleted]
Yes, I completely agree. Right now that's true. But I wonder how long it will stay true? Protocols for data encryption and privacy-preserving learning are already out there, and IMHO it's just a matter of time until OpenAI (and similar providers) offer such services.
banatage t1_jcon2zl wrote
Reply to [Discussion] Future of ML after chatGPT. by [deleted]
IMHO, those models are very good for general knowledge that can be sucked up from public sources.
When it comes to proprietary / confidential data / knowledge, this is where your work will pay off.
Individual-Sky-778 t1_jcon05y wrote
Reply to comment by hiptobecubic in [Discussion] Future of ML after chatGPT. by [deleted]
What do you mean by "advancing"? Just models getting bigger and bigger?
hiptobecubic t1_jcomtiq wrote
Reply to [Discussion] Future of ML after chatGPT. by [deleted]
Why are these things doomed just because they are advancing?
TheGuywithTehHat t1_jcomptw wrote
Any specific part you're wondering about? General advice applies here: test each unit of your software, and then integrate the units and test them that way. For each unit, hardcode the input and then test that the output is what you expect. For unit tests, make them as simple as possible while still testing as much of the functionality as possible. For integration tests, make a variety of them ranging from just a couple combined units & simple input/output to end-to-end tests that simulate the real world as closely as possible.
This is all advice that's not specific to ML in any way. Anything more specific depends on so many factors that boil down to:
- What is your environment like?
- What do you expect to change between different runs of the test?
For example:
- Will your dataset change? Will it change just a little (MNIST to Fashion-MNIST) or a lot (MNIST to CIFAR)?
- Will your model change? Will it just be a new training run of the same model? Will the model architecture stay the same or will it change internally? Will the input or output format of the model change?
- How often will any of these changes happen?
- Which parts of the pipeline are manual, and which are automatic?
- For each part of the system, what are the consequences of it failing (does it merely block further development, or will you get angry calls from your clients)?
Edit: I think the best advice I can give is to test everything that can possibly be tested, but prioritize based on risk impact (chance_of_failure * consequences_of_failure).
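To make the unit-test side concrete, here is a minimal sketch of the "hardcode the input, check the output" pattern (PyTorch and pytest are assumptions; the model and shapes are placeholders):

```python
import torch
import torch.nn as nn

def test_model_forward_shape_and_sanity():
    # Hardcoded input, fixed seed: check output shape and basic
    # numerical sanity rather than exact values.
    torch.manual_seed(0)
    model = nn.Linear(10, 2)
    x = torch.zeros(4, 10)
    y = model(x)
    assert y.shape == (4, 2)
    assert not torch.isnan(y).any()
```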
blazejd t1_jcolu6i wrote
I've been trying to figure it out myself and I'm very curious to see other responses.
[deleted] OP t1_jcolkjv wrote
Reply to comment by [deleted] in [D] Newbie question about Stanford Alpaca 7b fine-tuning by [deleted]
One thing to note is that different people prefer different types of responses. Some might like the default ChatGPT style, others something else. So the best thing to do is to teach the chatbot to behave based on the system message.
[deleted] OP t1_jcolakv wrote
Reply to comment by KingsmanVince in [D] Newbie question about Stanford Alpaca 7b fine-tuning by [deleted]
I can modify the response behavior through the system message. What would you suggest as a system message to get away from that boring ChatGPT response style? For example, if I set the system message to "act as a pirate" I can grab thousands of responses to various topics in that style. Here is an example of how the system message can completely change the response type:
Question: What is Apple Inc.?
No system message answer: Apple Inc. is a multinational technology company that designs, develops, and sells consumer electronics, computer software, and online services. It is known for creating popular products such as the iPhone, MacBook, and Apple Watch, and operating services such as iTunes, Apple Music, and iCloud.
Now with the system message "Avoid providing boring responses, make sure the user is entertained": Apple Inc. is much more than just a company that produces those fancy iPhones and MacBooks that you probably spend way too much time drooling over. It's a technological powerhouse that has been responsible for some of the biggest innovations in consumer technology over the past few decades.
From the iconic iPod that revolutionized the music industry, to the iPhone that has turned us all into smartphone-addicted zombies, Apple has been at the forefront of tech innovation for as long as anyone can remember.
The company was founded in 1976 by Steve Jobs and Steve Wozniak, and has since grown into one of the most influential companies in the world. Love them or hate them (or just be indifferent, whatever), there's no denying that Apple has left an indelible mark on the world of technology.
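For reference, here is roughly how the system message is passed in code (a sketch using the pre-1.0 openai Python package; it assumes OPENAI_API_KEY is set in the environment):

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Avoid providing boring responses, make sure the user is entertained"},
        {"role": "user", "content": "What is Apple Inc.?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```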
-xylon t1_jcokzje wrote
Reply to comment by fullstackai in [D] Unit and Integration Testing for ML Pipelines by Fender6969
Having a schema and generating random or synthetic data based on that schema is my go-to for testing.
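Something like this, as a rough sketch (the schema and the per-type generators are placeholders):

```python
import numpy as np
import pandas as pd

# Hypothetical schema: column name -> logical type
SCHEMA = {"age": "int", "income": "float", "segment": "category"}

def make_synthetic(schema, n_rows=100, seed=0):
    rng = np.random.default_rng(seed)
    cols = {}
    for name, kind in schema.items():
        if kind == "int":
            cols[name] = rng.integers(0, 100, n_rows)
        elif kind == "float":
            cols[name] = rng.normal(size=n_rows)
        else:  # category
            cols[name] = rng.choice(["a", "b", "c"], n_rows)
    return pd.DataFrame(cols)

df = make_synthetic(SCHEMA)
```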
fullstackai t1_jcokcsq wrote
I treat code artifacts of ML pipelines like any other software. I aim for 100% test coverage. Probably a bit controversial, but I always keep a small amount of example data in the repo for unit and integration tests. It could also be downloaded from blob storage in the CI pipeline, but repo size is usually not the limiting factor.
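As a sketch, that in-repo example data can then back a pytest fixture (the path and format are placeholders):

```python
import pandas as pd
import pytest

@pytest.fixture
def example_data():
    # Small example dataset versioned alongside the code.
    return pd.read_csv("tests/data/example.csv")

def test_example_data_loads(example_data):
    assert not example_data.empty
```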
[deleted] OP t1_jcok6zs wrote
Reply to comment by Disastrous_Elk_6375 in [D] Newbie question about Stanford Alpaca 7b fine-tuning by [deleted]
Yeah, it does that. I can modify ChatGPT's behavior through the system message, which should change the personality and response type in the final data. I could maybe start training it with examples of how it should act when the system message is present.
Example:
### System:{Act as a best friend}
### Instruction:{hi}
### Input:{noinput}
### Response: Hey there! What's up? How's your day going?
I could feed the model thousands of examples like this, which would result in a complete personality change whenever the system message is present.
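As a sketch, generating that kind of training data could look like this (the record layout and file name are assumptions, not the actual Alpaca tooling):

```python
import json

def build_record(system, instruction, inp, response):
    # Hypothetical Alpaca-style record with an extra system field.
    prompt = (
        f"### System:{{{system}}}\n"
        f"### Instruction:{{{instruction}}}\n"
        f"### Input:{{{inp}}}\n"
        f"### Response:"
    )
    return {"prompt": prompt, "completion": " " + response}

with open("train.jsonl", "w") as f:
    record = build_record("Act as a best friend", "hi", "noinput",
                          "Hey there! What's up? How's your day going?")
    f.write(json.dumps(record) + "\n")
```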
[deleted] OP t1_jcojdrv wrote
Reply to comment by sqweeeeeeeeeeeeeeeps in [D] Newbie question about Stanford Alpaca 7b fine-tuning by [deleted]
My budget is $3k, hopefully that's enough to make something decent.
Also, data generation is surprisingly cheap: 50k mostly long-response examples from GPT-3.5 cost $20.
asraniel t1_jcoj075 wrote
I would love to know more about this. One of my ideas is a super-simple dataset that the model might very well overfit, but that shows the code works. Other than that, I would love to hear more about simple tests which do not need to run the full pipeline on the full dataset (which ultimately does not tell you that much).
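The overfitting idea is a common sanity check; a minimal sketch (PyTorch assumed; the model and sizes are placeholders):

```python
import torch
import torch.nn as nn

def test_can_overfit_tiny_batch():
    # If the model can't drive the loss to ~0 on a handful of
    # samples, the training loop likely has a bug.
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    x = torch.randn(4, 8)
    y = torch.tensor([0, 1, 0, 1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    assert loss.item() < 0.1
```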
WASDx t1_jcogs0j wrote
Reply to [P] Web Stable Diffusion by crowwork
How does it store the model? I assume it's not re-downloaded every time you visit the page, and I would not expect my browser to allow caching multiple gigabytes from a single domain.
JustOneAvailableName t1_jcobgnf wrote
Reply to comment by mike94025 in [D] PyTorch 2.0 Native Flash Attention 32k Context Window by super_deap
That is a very nice surprise
tysam_and_co t1_jco9im0 wrote
I have a feeling that's because the heuristic is likely not giving you flash attention at all, but instead this kernel, which appropriately fits your use case and is in the list of possibly-automatically-selected kernels: https://arxiv.org/abs/2112.05682
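One way to check is to force the flash kernel and see whether it errors out (a sketch against the PyTorch 2.0 API; shapes and dtype are placeholders):

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
# Disable the fallbacks so SDPA must use flash attention or fail loudly.
with torch.backends.cuda.sdp_kernel(
    enable_flash=True, enable_math=False, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)
```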
mikonvergence OP t1_jco5znu wrote
Reply to comment by kross00 in [P] ControlNetInpaint: No extra training and you can use 📝text +🌌image + 😷mask to generate new images. by mikonvergence
Which implementation do you have in mind?
When putting this together, neither the official implementation (https://github.com/lllyasviel/ControlNet) nor the diffusers pipeline (https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) had the inpainting option built-in (and it seems they still don't?).
While this framework still follows the principle of injecting ControlNet features into a core SD backbone, that core backbone had to be changed to an inpainting one to allow mask input (and since the input also includes a mask, it is not possible to just specify a different backbone source and reuse an existing pipeline). The pipeline provided in this repository, StableDiffusionControlNetInpaintPipeline, implements this approach and merges the interfaces of StableDiffusionControlNetPipeline and StableDiffusionInpaintPipeline, so you can provide a source image, a mask, and a control image, and also set all possible parameters to the values you like.
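Usage then looks roughly like this (a sketch: the import path and checkpoint names are assumptions following diffusers conventions, so check the repo for the exact interface):

```python
import torch
from diffusers import ControlNetModel
from PIL import Image

# Import path for the custom pipeline is an assumption.
from pipeline_stable_diffusion_controlnet_inpaint import (
    StableDiffusionControlNetInpaintPipeline,
)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a red couch in a living room",
    image=Image.open("source.png"),         # image to edit (placeholder path)
    mask_image=Image.open("mask.png"),      # white pixels = region to inpaint
    control_image=Image.open("canny.png"),  # ControlNet conditioning, e.g. Canny edges
).images[0]
```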
sinazyo t1_jco44zo wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hi guys! Why isn't Facebook's BART model studied like the larger models of the GPT family?
Is it just because BERT is superior as a discriminative model and GPT is superior as a generative model?
I like the BART model, but it's a pity that I haven't seen much research related to it. Please let me know if there are any studies on BART variants with more parameters.
shadowknight094 t1_jco3sjk wrote
Reply to [P] Web Stable Diffusion by crowwork
When I click the generate button in Chrome on my desktop, nothing happens. Is this a service worker or a network call? I don't see anything in Chrome dev tools.
josejo9423 t1_jco1xea wrote
Reply to comment by Sonicxc in [D] Simple Questions Thread by AutoModerator
Maybe try image classification? A CNN in PyTorch.
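For instance, a minimal sketch (placeholder sizes for 32x32 RGB inputs and 10 classes, as in CIFAR-10):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # 10 output classes
)
```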
Disastrous_Elk_6375 t1_jco0q62 wrote
> can I expect better outputs than what Stanford Alpaca achieved?
I think "better" is a bit subjective. They do note that the answers are generally shorter than ChatGPT's, because they used text-davinci-003. Using gpt-3.5-turbo would get your answers closer to ChatGPT, but it could also "grab" that boring monotone "firstly... secondly... in conclusion" style that often gives it away.
asdf3011 t1_jcnz5d2 wrote
Reply to [P] Web Stable Diffusion by crowwork
Going to send this to people when they show curiosity about trying image generation. Thank you!
programmerChilli t1_jcnydmw wrote
Reply to comment by tripple13 in [D] PyTorch 2.0 Native Flash Attention 32k Context Window by super_deap
I think it is used in PyTorch's nn.TransformerEncoder, but a lot of people like implementing their own.
programmerChilli t1_jcny4qx wrote
Reply to comment by royalemate357 in [D] PyTorch 2.0 Native Flash Attention 32k Context Window by super_deap
We currently officially support CUDA and CPU, although in principle it could be used for other backends too.
nucLeaRStarcraft t1_jcoo30z wrote
Reply to comment by -xylon in [D] Unit and Integration Testing for ML Pipelines by Fender6969
More or less the same. However, the simplest way to start, at least in my experience, is to randomize a subsample of real data. Synthetic data may simply be too simple / fail to capture the real distribution, and can therefore hide bugs.
The ideal solution is probably both.
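A rough sketch of what I mean (pandas assumed; independent column shuffling is one simple way to randomize):

```python
import numpy as np
import pandas as pd

def randomized_subsample(df, n=200, seed=0):
    rng = np.random.default_rng(seed)
    sample = df.sample(n=min(n, len(df)), random_state=seed).reset_index(drop=True)
    # Shuffle each column independently: marginal distributions stay
    # realistic, but no original row survives intact.
    for col in sample.columns:
        sample[col] = sample[col].to_numpy()[rng.permutation(len(sample))]
    return sample
```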