Recent comments in /f/deeplearning
arrowoftime t1_j8z0jp2 wrote
Reply to I trained a neural network. Now what? by Ricenaros
Deploy it: https://youtu.be/oVHYOD7GaUg
Sim2955 t1_j8ytfmi wrote
Reply to I trained a neural network. Now what? by Ricenaros
possibly randomised search https://scikit-learn.org/stable/modules/grid_search.html#tuning-the-hyper-parameters-of-an-estimator
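For reference, a minimal sketch of randomised search with scikit-learn's RandomizedSearchCV; the random forest and parameter ranges here are just placeholders for whatever estimator you're tuning:

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Toy data so the sketch runs end to end
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Sample hyperparameters from distributions instead of an exhaustive grid
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 20),
    "min_samples_split": randint(2, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,   # number of random configurations to try
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```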
justundertheblack OP t1_j8yfuzw wrote
Reply to comment by Oreoed in NLP model for sentiment analysis by justundertheblack
I've built my own scraper, so I have tons of data for the model. Yeah, I don't think I'll get it right off the bat, but let's see where I go with it.
Oreoed t1_j8yfckw wrote
Reply to NLP model for sentiment analysis by justundertheblack
Depends on what you're trying to do with this project.
If you already have a news source, implementing a pre-trained model from Hugging Face should be relatively easy.
If you want to fine-tune that model, you will need a dataset of news headlines.
Check out Kaggle, there are some small but publicly available datasets.
I know you can also find data on some obscure github repo, but good luck with that.
If your goal is to implement a fully operational pipeline, you will need not only all of the above, but also a way to acquire news in real time. That may mean a scraper for some news outlets of interest. Once again, GitHub is your friend (see the sketch at the end of this comment).
That said, don't expect a profit off this alone. Using news data alongside some trading indicators will *maybe* work on paper (i.e. in a backtest) with the right features and optimization, but it is unlikely to get live results.
Then again, for a college project that might not be relevant.
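A minimal sketch of that real-time acquisition step, assuming the outlet exposes an RSS feed so you don't need a full scraper; the feed URL is a placeholder:

```python
import feedparser  # pip install feedparser

# Placeholder feed URL; swap in whatever news outlet you actually care about
FEED_URL = "https://example.com/markets/rss"

def fetch_headlines(url: str) -> list[str]:
    """Return the latest headline strings from an RSS/Atom feed."""
    feed = feedparser.parse(url)
    return [entry.title for entry in feed.entries]

if __name__ == "__main__":
    for headline in fetch_headlines(FEED_URL):
        print(headline)
```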
Moreymoe t1_j8y6zc7 wrote
Reply to NLP model for sentiment analysis by justundertheblack
If you know how to code in Python, then I would highly recommend trying the spaCy module. It's really awesome, and they have a ton of guides and info on their site.
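A minimal spaCy sketch for processing a headline; note that spaCy doesn't ship a sentiment component out of the box, so you'd still bolt one on (e.g. a text classifier or a Hugging Face model), but the tokenization and entity extraction below come for free:

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Acme Corp shares plunge after a disappointing earnings report.")

# Tokens with part-of-speech tags
for token in doc:
    print(token.text, token.pos_)

# Named entities (companies, money amounts, dates, ...)
for ent in doc.ents:
    print(ent.text, ent.label_)
```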
SleekEagle t1_j8y0vuc wrote
Reply to comment by suflaj in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
Agreed! I mean, even if the proper resources were dumped into creating such a large detector, it could quickly become obsolete because of adversarial training (AFAIK, not an expert on adv. training).
suflaj t1_j8xv8md wrote
Reply to comment by zcwang0702 in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
Maybe that's just a necessary step to cull those who cannot critically think and to force society to stop taking things for granted.
At the end of the day misinformation like this was already generated and shared even before ChatGPT was released, yet it seems that the governmental response was to allow domestic sources and try to hunt down foreign ones. So if the government doesn't care, why should you?
People generally don't seem to care even if the misinformation is human-generated, ex. Hunter Biden's laptop story. I wouldn't lose sleep over this either way, it is still human-operated.
zcwang0702 OP t1_j8xufwt wrote
Reply to comment by suflaj in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
>if it isn't of roughly the same size or larger. Either ChatGPT is REALLY sparse, or such detection models won't be available to mortals. So far, it doesn't seem to be sparse, since similarly sized detectors can't reliably differentiate between it and human text.
Yeah, I have to say this is scary, because if we cannot build a robust detector now, it will become increasingly difficult to do in the future. LLMs will keep blurring the line between human-written and machine-generated information on the Internet.
suflaj t1_j8xor46 wrote
Reply to comment by SleekEagle in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
It doesn't matter if it isn't of roughly the same size or larger. Either ChatGPT is REALLY sparse, or such detection models won't be available to mortals. So far, it doesn't seem to be sparse, since similarly sized detectors can't reliably differentiate between it and human text.
SleekEagle t1_j8xoiu6 wrote
Reply to comment by suflaj in How likely is ChatGPT to be weaponized as an information pollution tool? What are the possible implementation paths? How to prevent possible attacks? by zcwang0702
Adversarial training will be a huge factor regarding detection models imo
[deleted] OP t1_j8x413i wrote
Reply to comment by PaleontologistDue620 in My Neural Net is stuck, I've run out of ideas by [deleted]
I did it with tiny YOLOv7, using Google Colab. My point is that there are barely any projects that are usable, unless you found some?
Yes, the results were great. I'm thinking of writing a little blog post for others; it's actually quite simple because I found a Roboflow tutorial this time around.
Thanks for your support!
buttertoastey t1_j8x1gij wrote
Reply to [P] From “iron manual” to “Iron Man” — Augmenting GPT for fast editable memory to enable context aware question & answering by skeltzyboiii
Is the UI included in the source code? I can't seem to find it, just the product_q_n_a.py which runs in the command line.
justundertheblack OP t1_j8wzp0x wrote
Reply to comment by Aynit in NLP model for sentiment analysis by justundertheblack
I was looking into Hugging Face too and found FinBERT over there.
justundertheblack OP t1_j8wz0an wrote
Reply to comment by Aynit in NLP model for sentiment analysis by justundertheblack
It's a project for my college
Aynit t1_j8wytok wrote
Reply to NLP model for sentiment analysis by justundertheblack
I'd recommend scooting on over to hf.co and checking out some of their open-source models. Realistically you can call one, or an ensemble, from there in a few lines of code. Not sure what's causing this piece of work - school? work? hobby? But that's a decent place to get something set up quickly. They might even have models fine-tuned on stock/economic vocabulary.
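As a rough sketch of the "few lines of code" version with the transformers pipeline; ProsusAI/finbert is one finance-tuned checkpoint on the Hub, but treat the model choice as a placeholder:

```python
from transformers import pipeline  # pip install transformers

# One finance-tuned sentiment model on the Hub; swap in whatever fits your data
classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert")

headlines = [
    "Tech stocks rally as inflation cools",
    "Retailer slashes guidance, shares tumble",
]
for result in classifier(headlines):
    print(result["label"], round(result["score"], 3))
```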
PaleontologistDue620 t1_j8wuk9b wrote
Reply to comment by [deleted] in My Neural Net is stuck, I've run out of ideas by [deleted]
Go for YOLO v3 or YOLO v4. I promise they'll be good enough for you; don't be bothered with version numbers. (If you need lighter models, go for the tiny versions of v3 and v4.)
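For example, a minimal YOLOv4-tiny sketch via OpenCV's dnn module; the cfg/weights/image paths are placeholders, and the model files would come from the official Darknet releases:

```python
import cv2  # pip install opencv-python (4.x)

# Placeholder paths: grab yolov4-tiny.cfg / yolov4-tiny.weights from the
# official Darknet releases before running this.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("example.jpg")  # placeholder input image
class_ids, scores, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.4)

# Draw the detections and save the result
for class_id, score, box in zip(class_ids, scores, boxes):
    x, y, w, h = box
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", image)
```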
Nicefinancials t1_j8wk98u wrote
Reply to Any of you know a local and Open Source equivalent to Eleven Labs text to speech AI ? by lordnyrox
Welp, that was fast. I haven't tried the local version; this is based on the VALL-E paper by MS researchers: https://github.com/enhuiz/vall-e. It's trainable text-to-speech and comes with a Colab implementation. Let me know how it works; I'll spin this up later.
[deleted] OP t1_j8wh57w wrote
Reply to comment by tsgiannis in My Neural Net is stuck, I've run out of ideas by [deleted]
Yes, but I don't really know what your solution is.
I can skip rescaling it, but what then?
Mind telling me what you were using it for?
tsgiannis t1_j8wgwsj wrote
Reply to comment by [deleted] in My Neural Net is stuck, I've run out of ideas by [deleted]
I had serious issues with the whole training of VGG... this fixed it.
Give it a spin.
[deleted] OP t1_j8wddyy wrote
Reply to comment by PaleontologistDue620 in My Neural Net is stuck, I've run out of ideas by [deleted]
It is interesting you commented this. I spent a day yesterday trying to train some YOLO models in Python, but all the implementations on GitHub are quite obsolete, apart from YOLOv7.
Unless you were referring to Ultralytics only?
[deleted] OP t1_j8wd1cz wrote
Reply to comment by tsgiannis in My Neural Net is stuck, I've run out of ideas by [deleted]
What do you mean? The values were normalized in that way.
After reading about it, I think VGG16 will never work for an object detection task. Do you disagree?
crimson1206 t1_j8w4qnt wrote
Reply to comment by Oceanboi in Physics-Informed Neural Networks by vadhavaniyafaijan
The steps really don’t matter. The normal NN will not learn to extrapolate better with more steps.
This post is precisely showing that the PINN has better generalization than the normal NN.
tsgiannis t1_j8w2s8i wrote
Instead of rescale = 1./255, try using VGG16's own preprocess function and report back.
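Assuming this refers to the Keras ImageDataGenerator setup, a minimal sketch of what "its own preprocess function" would look like (the data directory is a placeholder):

```python
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Instead of ImageDataGenerator(rescale=1./255), let VGG16's own
# preprocessing handle channel ordering and mean subtraction.
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_gen = datagen.flow_from_directory(
    "data/train",           # placeholder path
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)
```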
[deleted] OP t1_j8zb580 wrote
Reply to comment by PaleontologistDue620 in My Neural Net is stuck, I've run out of ideas by [deleted]
I have found out that VGG can indeed be used with SSD for the same task. I don't know exactly what the general idea is, but mostly you can combine a CNN backbone with something else and get the bounding boxes. PyTorch has an SSD-VGG model.
I wonder why PyTorch has no YOLO implementation that we can just use...
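For reference, a minimal sketch of loading that SSD-VGG model from torchvision and running it on a dummy image (API details may vary slightly across torchvision versions):

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Recent torchvision; older versions use pretrained=True instead of weights=
model = ssd300_vgg16(weights="DEFAULT")
model.eval()

# Dummy 300x300 RGB image in [0, 1]; replace with a real, normalized tensor
image = torch.rand(3, 300, 300)

with torch.no_grad():
    predictions = model([image])

print(predictions[0]["boxes"].shape)   # (N, 4) bounding boxes
print(predictions[0]["labels"][:5])    # predicted class indices
print(predictions[0]["scores"][:5])    # confidence scores
```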