Recent comments in /f/MachineLearning

LeN3rd t1_jcgrxfp wrote

Strongly depends on your constraints. There are ways to get 3D geometry from a photo/video. If you have the geometry of your glasses, you should be able to see whether they fit, though you might have some problems actually adjusting the glasses to the face geometry. But you could also just do what your optician does and take a frontal photo of your face in a controlled environment.

1

LeN3rd t1_jcgrhlm wrote

Please be a little more coherent in your question. No one has any idea about your specific setup unless you tell us what you want to achieve. For example, in the ML community "RF" usually means random forest, not radio frequency. If you want to classify data streams coming from drones, take a look at pattern matching and nearest-neighbour methods before you train up a large neural network.
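A minimal sketch of that nearest-neighbour route, assuming you can turn each stream window into a fixed-length feature vector (the features, labels, and window size here are placeholders):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: each row is a feature vector extracted from one
# stream window (e.g. summary statistics), with one class label per row.
X_train = np.random.rand(200, 16)        # 200 windows, 16 features each
y_train = np.random.randint(0, 3, 200)   # 3 hypothetical stream classes

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

new_window = np.random.rand(1, 16)
print(clf.predict(new_window))  # predicted class of the new window
```

The point is that this gives you a usable baseline in a few lines; if it already works, a large network is probably unnecessary.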

3

LeN3rd t1_jcgqzvo wrote

If you have more variables than datapoints, you will run into problems when your model starts memorizing individual examples. Your model overfits to the training data: https://en.wikipedia.org/wiki/Overfitting

You can either reduce the number of parameters in your model, or apply a prior (a constraint on your model parameters) to improve test dataset performance.

Because neural networks (the standard empirical machine learning tools nowadays) impose structure on their parameters, they can get away with many more parameters than simple linear regression models, but they seem to run into problems when the number of parameters in the network roughly matches the number of datapoints (the interpolation threshold in the "double descent" literature). This has mostly been shown empirically; I do not know of any mathematical proofs for it.
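A minimal sketch of the "apply a prior" option above, using an L2 penalty (ridge regression, equivalent to a Gaussian prior on the weights); the data is synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Synthetic under-determined problem: more variables (50) than datapoints (30).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))
y = X[:, 0] + 0.1 * rng.normal(size=30)   # only the first feature matters

X_test = rng.normal(size=(100, 50))
y_test = X_test[:, 0]

# Plain least squares interpolates the training data and overfits.
ols = LinearRegression().fit(X, y)
# Ridge shrinks the weights toward zero and usually generalizes better here.
ridge = Ridge(alpha=1.0).fit(X, y)

print("OLS test R^2:  ", ols.score(X_test, y_test))
print("Ridge test R^2:", ridge.score(X_test, y_test))
```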

1

LeN3rd t1_jcgq97y wrote

You will need more than a week. If you just want to predict the next word in a sentence, take a look at large language models, ChatGPT being one of them; BERT is a research-oriented alternative, afaik. If you aim to learn the probabilities yourself, you will need at least a few months.

In general, what you want is a generative model that can sample from a conditional probability distribution. For sequences, transformers like BERT and ChatGPT are usually state of the art. You can also take a look at normalizing flows and diffusion models for learning probability distributions, but those need some maths, and I unfortunately do not know which smaller models are used for computational-linguistics applications like this.
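A minimal sketch of querying word probabilities from a pretrained BERT, assuming the Hugging Face transformers library (my choice; the comment doesn't name one):

```python
from transformers import pipeline

# BERT is trained with masked-language modelling, so it scores candidate
# words for a [MASK] position rather than strictly "the next word".
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for candidate in unmasker("The weather today is [MASK]."):
    # Each candidate comes with the model's probability for that token.
    print(f"{candidate['token_str']:>10}  p={candidate['score']:.3f}")
```

For true left-to-right next-word probabilities, a causal model such as GPT-2 (e.g. via pipeline("text-generation")) is the closer fit.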

1

LeN3rd t1_jcgp44s wrote

How big is your dataset? Before you start anything wild, I would look at kernel clustering methods, or even clustering without kernels: just cluster your broken and non-broken images and calculate some distance (which can be done with kernels if it needs to be nonlinear).

Nearest neighbour could also work pretty well in your case: just compare your new image to the closest image (according to some metric) in each of your two datasets, and Bob's your uncle.
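A minimal sketch of that comparison, assuming the images are already loaded as equally sized arrays and using plain Euclidean distance as the metric (any other metric or learned embedding slots in the same way):

```python
import numpy as np

def label_by_nearest(new_img, broken_imgs, ok_imgs):
    """Label a new image by whichever reference set holds its nearest neighbour."""
    # Flatten each image into a vector and use the L2 distance.
    d_broken = min(np.linalg.norm(new_img.ravel() - img.ravel()) for img in broken_imgs)
    d_ok = min(np.linalg.norm(new_img.ravel() - img.ravel()) for img in ok_imgs)
    return "broken" if d_broken < d_ok else "ok"

# Toy usage with random 32x32 "images":
rng = np.random.default_rng(0)
broken = [rng.random((32, 32)) for _ in range(10)]
ok = [rng.random((32, 32)) for _ in range(10)]
print(label_by_nearest(rng.random((32, 32)), broken, ok))
```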

If you need a numerical score, look at simple CNNs. You need more training data for this to work well, though.

2

LeN3rd t1_jcgo5ro wrote

You should take a look at uncertainty estimation in general. What you are trying to do is quantify epistemic uncertainty (google "epistemic vs. aleatoric uncertainty").

One thing that works well is to have a dropout layer that stays active during prediction (in TensorFlow you have to pass training=True into the call to activate it at prediction time). Sample like 100 times and calculate the standard deviation. This gives you a general "I do not know" signal from the network; the idea is known as Monte Carlo dropout. You can achieve something similar by training ~20 models as an ensemble and letting them output 20 different results. Either way, you can assign the 101st "unknown" label when the uncertainty is too high.
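A minimal sketch of that Monte Carlo dropout loop in TensorFlow/Keras (the architecture, 100-class setup, and threshold are placeholders):

```python
import numpy as np
import tensorflow as tf

# Placeholder classifier: any model with Dropout layers works the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(100, activation="softmax"),  # 100 "known" classes
])

def mc_dropout_predict(model, x, n_samples=100):
    # training=True keeps dropout active at prediction time (MC dropout).
    samples = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = tf.random.normal((4, 32))               # a toy batch of 4 inputs
mean_probs, std_probs = mc_dropout_predict(model, x)
# Assign the extra "unknown" label where the spread is too large.
is_unknown = std_probs.max(axis=-1) > 0.2   # threshold is a placeholder
```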

In my experience you should stay away from Bayesian neural networks, since they are extremely hard to train and cannot model multimodal uncertainty (dropout can't either, but it is WAAAAYYY easier to train).

1

LeN3rd t1_jcgn73n wrote

Can anyone recommend a good, maintained, and well-organized MCMC Python package? Everything I found was either unmaintained, had only a single research group behind it, or had too many bugs for me to continue with the project. I want TensorFlow/PyTorch, but for MCMC sampling, please.

2

bohreffect t1_jcgcr96 wrote

Seeing who constitutes the high-profile names in "AI Ethics", and having watched individual debacles unfold in the industry over the last few years, doesn't instill confidence that they're particularly valuable. It's very, very hard not to have cynical takes about the people bubbling to the top.

In general I don't think unaccountable focus groups are the best moral arbiters either. In this instance I feel we have to go with our least-worst option: deferring to the wisdom of crowds.

3

1F9 t1_jcfxc5b wrote

That reason is that they replaced Lua with Python as the high-level language that wrapped Torch's core, and needed to differentiate that from the original Torch. But it seems as though you believe the "py" prefix means the correct design decision for the project is to replace ever more parts of torch with Python. Perhaps you could elaborate more on your thinking there?

2

WikiSummarizerBot t1_jcfub6d wrote

Natural language

>In neuropsychology, linguistics, and philosophy of language, a natural language or ordinary language is any language that has evolved naturally in humans through use and repetition without conscious planning or premeditation. Natural languages can take different forms, such as speech or signing. They are distinguished from constructed and formal languages such as those used to program computers or to study logic.

Formal language

>In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules. The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings of the language. Each string concatenated from symbols of this alphabet is called a word, and the words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules.


1