Recent comments in /f/deeplearning
[deleted] OP t1_j8s8e0x wrote
Reply to comment by PaleontologistDue620 in My Neural Net is stuck, I've run out of ideas by [deleted]
Sorry for the ignorance, but what do you mean by darknet here? All I was planning to use is TensorFlow's Keras, as tf.js won't cut it I think. I started a week ago, so I am still digesting things.
PaleontologistDue620 t1_j8s7qkn wrote
Reply to comment by [deleted] in My Neural Net is stuck, I've run out of ideas by [deleted]
Train it with either darknet or Python, then convert the weights if you need to. There are already scripts for everything you need to do; that's why I said YOLO in the first place.
[deleted] OP t1_j8s73tb wrote
Reply to comment by PaleontologistDue620 in My Neural Net is stuck, I've run out of ideas by [deleted]
I am taking a look. But most TensorFlow.js versions are either untrainable or too large, so I will probably use Keras and export the models as Layers, if I can manage to. I am still looking for lightweight NNs; it's a challenge, so I may ask somewhere for candidates.
[deleted] OP t1_j8s6viw wrote
Reply to comment by erunim in My Neural Net is stuck, I've run out of ideas by [deleted]
I am not sure what you mean, but if it is normalizing the images and also the box dimensions, plus having them in the right order, then yes.
Otherwise, maybe not.
erunim t1_j8s4hki wrote
Just curious. You did align the normalization methods, right?
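For readers following along, the "align the normalization" point above can be sketched in a few lines. This is a hedged numpy illustration, not code from the thread: the function name and the convention (pixels scaled to [0, 1], boxes to coordinates relative to image size) are assumptions, but the key idea is that the exact same scaling must be applied at training time and at inference time.

```python
import numpy as np

def normalize_sample(image, boxes):
    """Scale pixels to [0, 1] and box corners to relative coordinates.

    `boxes` is an (N, 4) array of [x_min, y_min, x_max, y_max] in pixels.
    Whatever convention you pick here must match at train AND inference time,
    otherwise the regression head learns one scale and is evaluated on another.
    """
    h, w = image.shape[:2]
    image = image.astype(np.float32) / 255.0
    boxes = boxes.astype(np.float32) / np.array([w, h, w, h], dtype=np.float32)
    return image, boxes

# Toy 200x100 image (h=100, w=200) with one box covering the middle.
img = np.zeros((100, 200, 3), dtype=np.uint8)
img_n, boxes_n = normalize_sample(img, np.array([[50, 25, 150, 75]]))
```

A mismatch here (e.g. boxes left in pixels while images are rescaled) produces exactly the "stuck" training behavior the thread describes.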
PaleontologistDue620 t1_j8s39ou wrote
Do yourself a favor and go for the good old YOLO; I think it has a TensorFlow.js version too.
[deleted] OP t1_j8re6mi wrote
Reply to comment by trajo123 in My Neural Net is stuck, I've run out of ideas by [deleted]
> if by "the problem of bounding objects" you mean object detection / localization then a single regression head on top of a classifier architecture is not a good way of solving this problem, there are specialized architectures for this, e.g. R-CNN, Yolo.

I just said something similar to myself in another comment. You are correct.
Indeed, I am planning to do it in Keras, because there aren't any implemented models in TF.js and implementing one is quite difficult.
I do not think PyTorch models can be easily used in TensorFlow.js afterwards, right?
trajo123 t1_j8rdzfd wrote
Some general things to try:
- (more aggressive) data augmentation when training, to make your model behave better on other data not in the dataset
- if by "the problem of bounding objects" you mean object detection / localization, then a single regression head on top of a classifier architecture is not a good way of solving this problem; there are specialized architectures for this, e.g. R-CNN, YOLO.
- If you have to do it with the regression head, then go for at least ResNet50; it should get you better performance across the board, assuming it was pre-trained on a large dataset like ImageNet. VGG16 is quite small/weak by modern standards.
Why do you need to implement this in JavaScript? Wouldn't it make sense to decouple the model development from the deployment? Get a Pytorch or Tensorflow model working first, then worry about deployment. This way you can access a zoo of pre-trained models - at Hugging Face for instance.
[deleted] OP t1_j8rdy53 wrote
One possible reason: https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e i.e. the VGG convolutional model won't be good for bounding boxes, but only for the classification task.
pgmali0n t1_j8qt90x wrote
Reply to Physics-Informed Neural Networks by vadhavaniyafaijan
How do you inform a NN with physics?
BrotherAmazing t1_j8q7wow wrote
Reply to comment by nibbajenkem in Physics-Informed Neural Networks by vadhavaniyafaijan
Indeed, this example of a simple harmonic oscillator is not a practical use case, but more of a simple example and pedagogical tool.
There are potential use cases for more complex problems. Many complex physical systems are governed by partial differential equations; however, it is often impossible to write explicit formulas for the solutions to these equations, so the physical states must either be experimentally observed as they evolve, or computationally demanding high-fidelity simulations must be run, sometimes on supercomputers for many days, in order to numerically estimate how the physical system evolves.
Just think about recent work in protein folding. Would a deep NN that tries to make protein folding predictions benefit from knowing physics-based constraints?
BrotherAmazing t1_j8q4qdx wrote
Reply to comment by crimson1206 in Physics-Informed Neural Networks by vadhavaniyafaijan
Isn’t it more technically correct to state that a “regular NN” could learn to extrapolate this in theory, but is so unlikely to do so that the probability might as well be zero?
PINNs are basically universal function approximators that have additional knowledge about physics-based constraints imposed, so it's not surprising, and shouldn't be taken as a "dig" at "regular NNs", that they can better decide what solutions may make sense and are admissible vs. something that is basically of an "equivalent" architecture and design but without any knowledge of physics encoded in to regularize it.
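To make the "physics-based constraints imposed" idea concrete for readers: below is a minimal numpy sketch of the extra loss term a PINN adds, using the thread's damped-oscillator example. This is illustrative only; in a real PINN the derivatives of the network output come from automatic differentiation, whereas here finite differences on a grid stand in for them, and all names and constants are made up for the example.

```python
import numpy as np

# Damped harmonic oscillator ODE: u'' + 2*zeta*omega*u' + omega^2 * u = 0
omega, zeta = 2.0, 0.1
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]

def physics_residual(u):
    """ODE residual of a candidate solution u(t) sampled on the grid.

    np.gradient approximates u' and u''; a PINN would get these exactly
    from autodiff on the network output instead.
    """
    du = np.gradient(u, dt)
    d2u = np.gradient(du, dt)
    return d2u + 2 * zeta * omega * du + omega**2 * u

def pinn_loss(u, t_data, u_data, lam=1.0):
    """Data misfit plus physics penalty: the extra term that 'informs' the net."""
    data_mse = np.mean((np.interp(t_data, t, u) - u_data) ** 2)
    # Drop a few edge points where the one-sided differences are inaccurate.
    phys_mse = np.mean(physics_residual(u)[2:-2] ** 2)
    return data_mse + lam * phys_mse

# The analytic solution should make both terms (almost) vanish.
omega_d = omega * np.sqrt(1 - zeta**2)
u_true = np.exp(-zeta * omega * t) * np.cos(omega_d * t)

# Sparse "observations" taken from the true trajectory.
t_data = t[::200]
u_data = np.exp(-zeta * omega * t_data) * np.cos(omega_d * t_data)
```

A candidate that fits the data but violates the ODE (say, an undamped cosine) is penalized by the physics term even in regions with no observations at all, which is exactly the extrapolation behavior discussed above.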
jgonagle t1_j8pdyti wrote
Reply to Physics-Informed Neural Networks by vadhavaniyafaijan
In other words, good priors result in good posteriors. Mind blown.
Jesse_marqo t1_j8oy1lc wrote
Reply to comment by extracoffeeplease in [P] From “iron manual” to “Iron Man” — Augmenting GPT for fast editable memory to enable context aware question & answering by skeltzyboiii
Good question! You can use either or both if you want. In the article you can see that switching between embedding based and lexical search is easy.
lambda_matt t1_j8oozo1 wrote
Reply to comment by N3urAlgorithm in GPU comparisons: RTX 6000 ADA vs Hopper h100 by N3urAlgorithm
https://lambdalabs.com/gpu-benchmarks
Now has the RTX 6000 Ada.
canbooo t1_j8onqb4 wrote
Reply to comment by nibbajenkem in Physics-Informed Neural Networks by vadhavaniyafaijan
No, it's a valid question, I just find it difficult to give examples that are easy to understand, but let me try. Yes, OP's example is not a good one to demonstrate the use case. Let us think about a swarm of drones and their physics, specifically the airflow around them. Hypothetically, you may be able to describe the physics for a single drone accurately, although this would probably take quite some time in reality; think days on a simple laptop for a specific configuration if you really want high accuracy. Nevertheless, if you want to model, say, 50 drones, things get more complicated. The airflow of one affects the behavior/airflow of the others, and new turbulence sources and other effects emerge. Actually simulating such a complex system may be infeasible even with supercomputers. Moreover, you are probably interested in many configurations (flight patterns, drone design, etc.) so that you can choose the best one. In this case, doing a functional interpolation is not very helpful due to the interactions and newly emerging effects, as we only know the form of the function for a single drone. Sure, you know the underlying equations, but you still can't really predict the behavior of the whole without solving them, which is, as mentioned, costly. The premise of PINNs in this case is to learn to predict the behaviour of this system, and the inductive bias is expected to decrease the number of samples required for generalization.
nibbajenkem t1_j8ojasp wrote
Reply to comment by canbooo in Physics-Informed Neural Networks by vadhavaniyafaijan
Of course, more inductive biases trivially lead to better generalization. It's just not clear to me why you cannot forego the neural network and all its weaknesses and instead simply optimize the coefficients of the physical model itself. I.e., in the example in OP, why have a physics-based loss with a prior that it's a damped oscillator, instead of just doing regular interpolation on whatever functional class(es) describe the damped oscillators?
I don't have much physics expertise beyond the basics so I might be misunderstanding the true depth of the problem though
canbooo t1_j8ohxf9 wrote
Reply to comment by nibbajenkem in Physics-Informed Neural Networks by vadhavaniyafaijan
Esp. in engineering applications, i.e. with complex systems/physics, the fundamental physical equations are known, but not how they influence each other and the observed data. Alternatively, they are too expensive to compute for all possible states. In those cases, we already build ML models using the data to, e.g., optimize designs or do stuff like predictive maintenance. However, these models often do not generalize well to out-of-domain samples, and producing samples is often very costly, since we either need laboratory experiments or have to actually create designs that are bound to fail (stupid, but for clarity: think planes with rectangular wings, or cranes so thin they could not even pick up a feather; real-world use cases are more complicated to fit in these brackets). In some cases, the only available data may be coming from products in use, and you may want to model failure modes without observing them. In all these cases PINNs could help. However, none of the models I have tested so far are actually robust to real-world data, and they require much more tuning compared to MLPs, RNNs etc., which are already more difficult to tune compared to more conventional approaches. So I am yet to find an actual use case that is not academic.
TLDR; physics (and simulations) may be inefficient/inapplicable in some cases. PINNs allow us to embed our knowledge about the first principles in the form of inductive bias to improve generalization to unseen/unobservable states.
smierdek t1_j8ogszh wrote
Reply to comment by smierdek in Physics-Informed Neural Networks by vadhavaniyafaijan
I mean, pick a value somewhere in the middle between min and max and go home, bahaha.
smierdek t1_j8oggk9 wrote
Reply to Physics-Informed Neural Networks by vadhavaniyafaijan
what is this nonsense?
MrMoussab t1_j8o8b41 wrote
Reply to Physics-Informed Neural Networks by vadhavaniyafaijan
I don't think such a post deserves upvoting, given the redditors' comments. The number of steps is weird. What NN is used? Why use NNs if physical modeling is possible ...
Skoogy_dan t1_j8o0l3j wrote
Reply to comment by Late_Scientist_9344 in Physics-Informed Neural Networks by vadhavaniyafaijan
Yes, you can discover the underlying parameters of the DiffE from a dataset.
extracoffeeplease t1_j8nqdl8 wrote
Reply to [P] From “iron manual” to “Iron Man” — Augmenting GPT for fast editable memory to enable context aware question & answering by skeltzyboiii
So IIUC this searches text first, then adds that to the prompt as input to the LLM. Now for the text search, why do vector search and not Elasticsearch, or both? The reason I'm asking is that I've seen vector search issues pop up when your data is uncommon and hence badly embedded, for example when searching for a unique name or a weird token (for example, P5.22.a.03), whereas classic text search can find that exact token.
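The hybrid approach this comment hints at can be sketched in a few lines. This is a hedged toy illustration, not the article's implementation: the corpus, the bag-of-words "embedding", and the blending weight are all made up, but it shows why a lexical term rescues rare tokens like "P5.22.a.03" that a semantic embedding handles poorly.

```python
import numpy as np

docs = [
    "install bracket P5.22.a.03 before wiring the panel",
    "the panel wiring guide covers standard brackets",
    "routine maintenance of the arc reactor housing",
]

# Toy stand-in for a real embedding model: normalized bag-of-words vectors.
vocab = sorted({w for d in docs for w in d.lower().split()})

def embed(text):
    v = np.array([text.lower().split().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

def hybrid_search(query, alpha=0.5):
    """Blend an exact lexical match score with embedding cosine similarity."""
    q = embed(query)
    scores = []
    for d in docs:
        lexical = 1.0 if query.lower() in d.lower() else 0.0  # exact token/phrase hit
        semantic = float(embed(d) @ q)                        # cosine similarity
        scores.append(alpha * lexical + (1 - alpha) * semantic)
    return int(np.argmax(scores))

best = hybrid_search("P5.22.a.03")
```

In production the lexical leg would be something like Elasticsearch/BM25 and the semantic leg a real embedding index, with scores fused the same way.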
wallynext t1_j8npy97 wrote
Reply to comment by nibbajenkem in Physics-Informed Neural Networks by vadhavaniyafaijan
Exactly my thought, but I think this is the "training and validation set" where you already know the label, and then they can use this neural net to discover new "unseen" mathematical equations.
PaleontologistDue620 t1_j8s9y0j wrote
Reply to comment by [deleted] in My Neural Net is stuck, I've run out of ideas by [deleted]
You can train YOLO on any framework you want and convert the weights later, then load them into your preferred inference framework. (I'm not sure about JS, but in Python you can load YOLO models into OpenCV as well.) Darknet is the original YOLO framework, which gives you scripts for training the model.