Recent comments in /f/deeplearning

suflaj t1_j8vt849 wrote

Likely

One possible implementation path is just using it, lol; it is already capable of doing that.

At this moment, your two bets are:

  • OpenAI watermarks their generated text and you have models which can detect this watermark (see the sketch after this list)
  • a bigger, better model comes out which can detect synthetic text (although then THAT model becomes the problem)
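For the watermarking bet, here's a toy sketch of how detection could work, assuming a Kirchenbauer-style "green list" scheme. Everything here (the hash, GAMMA, the threshold) is invented for illustration; OpenAI hasn't published any such scheme:

    # Toy watermark detector: recompute each token's pseudo-random "green list"
    # membership and z-test the green-token count against chance.
    import hashlib
    import math

    GAMMA = 0.5  # assumed fraction of the vocabulary placed on the green list

    def is_green(prev_token: str, token: str) -> bool:
        """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] / 255.0 < GAMMA

    def watermark_z_score(tokens: list[str]) -> float:
        """z-score of the observed green-token count vs. the unwatermarked expectation."""
        hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
        n = len(tokens) - 1
        return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

    # A large z-score (say > 4) suggests the text was sampled with the green-list
    # bias, i.e. it carries the watermark; unwatermarked text hovers around 0.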

You could also counter misinformation with a fact checking model, but there are two big problems:

  • we are nowhere near developing useful AI that can reason
  • the truth is subjective and full of dogmas, e.g. look at how most countries enforce dogmas regarding the Holocaust; without a severe transformation of society itself, your model would be biased and capable of spreading propaganda in a general sense, with misinformation as a subset of that propaganda

Therefore I believe your question should be: when can we expect to have models that only share the "truth of the victor"? And that's already happening with ChatGPT now, as it seems to be spreading western liberal views.

3

Oceanboi t1_j8uygkt wrote

Why was the neural network stopped at like 1000 steps? Why are we comparing a physics-informed neural network to a plain neural network at a different number of steps, lol?

Also, correct me if I'm wrong, but don't we care about how the model generalizes? I think we can show that some NN will fit any training set perfectly given enough steps, but isn't this already common knowledge?
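As a quick sketch of the "fits any training set" point (assuming PyTorch; the sizes here are arbitrary), here's a small MLP driven to memorize completely random labels:

    # Random inputs, random labels: there is nothing to generalize from,
    # yet an over-parameterized net still reaches ~100% training accuracy.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(128, 16)           # random inputs
    y = torch.randint(0, 2, (128,))    # random labels

    model = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(5000):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

    with torch.no_grad():
        acc = (model(X).argmax(dim=1) == y).float().mean().item()
    print(f"train accuracy on random labels: {acc:.2f}")  # approaches 1.0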

1

crimson1206 t1_j8ts496 wrote

Well, how is it relevant then? I'm happy to be corrected, but I don't see how it's relevant to this post.

It just tells you that, for any given function, there exists an NN that approximates it well. It doesn't tell you how to find such an NN, and it doesn't tell you anything about the extrapolation capabilities of an NN that approximates well on just a subdomain (which is what this post is mainly about).

In practice, the universal approximation theorem just gives a justification for why using NNs as function approximators could be a reasonable thing to do. That's already pretty much the extent of its relevance to practical issues, though.
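To make the extrapolation point concrete, a minimal sketch (assuming PyTorch; the architecture and intervals are arbitrary): fit an MLP to sin(x) on one interval, then evaluate it just outside that interval.

    # The net approximates sin well on [-pi, pi], but the theorem promises
    # nothing beyond that interval, and in practice the fit falls apart there.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x_in = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)   # training subdomain
    y_in = torch.sin(x_in)

    for _ in range(5000):
        opt.zero_grad()
        ((model(x_in) - y_in) ** 2).mean().backward()
        opt.step()

    x_out = torch.linspace(3.14, 9.42, 256).unsqueeze(1)   # outside the subdomain
    with torch.no_grad():
        mse_in = ((model(x_in) - y_in) ** 2).mean().item()
        mse_out = ((model(x_out) - torch.sin(x_out)) ** 2).mean().item()
    print(f"in-domain MSE: {mse_in:.5f}, extrapolation MSE: {mse_out:.5f}")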

1

danja t1_j8tgfpo wrote

I like it. On a meta level, giving the machine a bit of a priori knowledge about the shape of things to come makes a lot of sense.

When a self-driving car hits an obstacle, both car and obstacle will mostly obey Newtonian mechanics.

Effectively embedding that knowledge (the differential equations) might make the system less useful for other applications, but should very cheaply improve its chances on a lot of real-world problems.
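For the curious, a minimal sketch of what "embedding the differential equations" looks like as a physics-informed loss (assuming PyTorch; the toy ODE u' = -u with u(0) = 1 stands in for real equations of motion):

    # Physics-informed training: no labeled data, just the ODE residual
    # at collocation points plus the initial condition.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    t = torch.linspace(0.0, 5.0, 128).unsqueeze(1).requires_grad_(True)
    t0 = torch.zeros(1, 1)  # initial-condition location

    for _ in range(3000):
        opt.zero_grad()
        u = net(t)
        # du/dt via autograd; this is where the physics enters the loss
        du = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                 create_graph=True)[0]
        physics_loss = ((du + u) ** 2).mean()    # residual of u' = -u
        ic_loss = ((net(t0) - 1.0) ** 2).mean()  # u(0) = 1
        (physics_loss + ic_loss).backward()
        opt.step()
    # After training, net(t) should track exp(-t) without any labeled data.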

Robotics is largely done with PID feedback loops. Some more understanding of the behaviour of springs etc. should help a lot. Quite possibly in other domains too; it's hard to know where else such things apply.
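And for reference, the PID loop mentioned above in a few lines (gains and dt are placeholder values; tuning them is the hard part):

    # Classic discrete PID controller: proportional + integral + derivative terms.
    class PID:
        def __init__(self, kp: float, ki: float, kd: float, dt: float):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint: float, measurement: float) -> float:
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # usage: u = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.01).step(target, sensor_reading)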

1

[deleted] OP t1_j8sdmmp wrote

Oh, that helps. I am not sure how true this is, though. For example, TF.Keras and TF.SavedModel can't be converted into one another and have different features. Both can be used to predict, but only one can be retrained, "tweaked", or extended from JS itself. And I am not sure you can convert PyTorch weights to Keras, but I will investigate. Apparently there is ONNX, which can be used to do it. I just don't want to train something that cannot be converted and loaded into a browser.
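If it helps, here's a minimal sketch of the PyTorch-to-ONNX route (the model, input shape, and file name are placeholders); the resulting .onnx file can then be loaded in the browser with onnxruntime-web, without going through Keras at all:

    # Export a trained PyTorch model to ONNX for in-browser inference.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None)  # stand-in for your model
    model.eval()

    dummy_input = torch.randn(1, 3, 224, 224)  # must match your model's input shape
    torch.onnx.export(
        model,
        dummy_input,
        "model.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=17,
    )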

What I've learnt so far is that Sliding Window, Region of Interest, and YOLO are more like ways to prepare your data, and mostly any CNN could do the job with more or less precision; I may be wrong. I am following this series: https://www.youtube.com/watch?v=XXYG5ZWtjj0&list=PLhhyoLH6Ijfw0TpCTVTNk42NN08H6UvNq&index=2&ab_channel=AladdinPersson
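For what it's worth, a toy sketch of the sliding-window idea (window size and stride are arbitrary placeholders; any classifier CNN could score the patches):

    # Carve an image into overlapping patches; a classifier then scores each one.
    import numpy as np

    def sliding_windows(image: np.ndarray, win: int = 64, stride: int = 32):
        """Yield (x, y, patch) for every window position over an HxWxC image."""
        h, w = image.shape[:2]
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                yield x, y, image[y:y + win, x:x + win]

    image = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a real frame
    for x, y, patch in sliding_windows(image):
        pass  # score = cnn.predict(patch) would go here; keep high-scoring boxes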

0