Recent comments in /f/deeplearning
General-Jaguar-8164 OP t1_jbmeblz wrote
Reply to comment by Nerveregenerator in AskReddit: which MBP Pro is future proof for AI apps development? by General-Jaguar-8164
I don’t have much space in my home office. Should I still go for a desktop?
Nerveregenerator t1_jbmc70k wrote
There's no such thing as future-proof in AI. Focus on present-proofing.
big_ol_tender t1_jbm8s9r wrote
Literally none of them
_rundown_ t1_jbli9rg wrote
Reply to comment by Bielh in this is reality. by Genius_feed
This reminds me of when I was in grade school and we had a sub for the day because my teacher was “taking a class on how to use the World Wide Web”
eugene129 OP t1_jbkfrsr wrote
Reply to comment by rkstgr in what exactly is Variance(Xt) during the Forward Process in Diffusion model ? by eugene129
Hello, thanks for your reply. So... doesn't N(x_t; ..., beta_t I) mean that Var(x_t) = beta_t?
seanv507 t1_jbkc2o1 wrote
Reply to comment by Constant-Cranberry29 in Can feature engineering avoid overfitting? by Constant-Cranberry29
Please just remove the question. Basically, your Stack Overflow question is asking others to debug your code; there are no general principles involved.
Constant-Cranberry29 OP t1_jbjt7m4 wrote
Reply to comment by trajo123 in Can feature engineering avoid overfitting? by Constant-Cranberry29
yes, you can see my problem here https://stackoverflow.com/questions/75672909/why-by-adding-additional-information-as-number-of-sequence-on-dataset-can-avoid
jzaunegger t1_jbjst8q wrote
Here's one paper that I can immediately think of: https://arxiv.org/abs/1409.7495. The authors use a synthetic dataset to select and engineer features of a “real” dataset. Not sure if this is what you are looking for, but it could be a step in the right direction.
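For what it's worth, the core trick in that paper is a gradient reversal layer sitting between the feature extractor and a domain classifier. A rough PyTorch sketch of just that layer (my own naming, not code from the paper):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales gradients by -lambda on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient trains the feature extractor to make
        # synthetic (source) and real (target) features indistinguishable
        # to a domain classifier attached after this layer.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```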
trajo123 t1_jbjpz72 wrote
Have you done any research at all? What have you found so far?
neuralbeans t1_jbjizpw wrote
Reply to comment by BamaDane in Can feature engineering avoid overfitting? by Constant-Cranberry29
It's a degenerate case, not something anyone should do. If you include Y in your input, then overfitting will lead to the best generalisation. This shows that the input does affect overfitting. In fact, the more similar the input is to the output, the simpler the model can be and thus the less it can overfit.
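A minimal sketch of that degenerate case (my own illustration, not anyone's real pipeline): copy the target y into the feature matrix and an ordinary linear regression "overfits" straight to the identity mapping, yet still scores perfectly on held-out data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # ordinary features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=1000)

X_leaky = np.column_stack([X, y])              # degenerate case: y is now an input
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
print(model.coef_.round(2))                    # weight ~1.0 on the leaked column, ~0 elsewhere
print(model.score(X_te, y_te))                 # R^2 ~ 1.0: the "overfit" model generalises perfectly
```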
BamaDane t1_jbjhitr wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
I’m not sure I understand what your method does. If Y is the output, are you saying I should also include Y as an input? And if I manage to design my model so it doesn’t just select the Y input, then I’m not overfitting? It makes sense that this doesn’t overfit, but doesn’t it also mean I am dumbing down my model? Don’t I want my model to preferentially select features that are most similar to the output?
Whispering-Depths t1_jbjatka wrote
Reply to this is reality. by Genius_feed
slow and steady wins the race tho amirite
RoboiosMut t1_jbj01s2 wrote
You can ask ChatGPT this type of question.
Constant-Cranberry29 OP t1_jbiyw8d wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
Because I've read some papers saying that FS and FE are different.
neuralbeans t1_jbiygo0 wrote
Reply to comment by Constant-Cranberry29 in Can feature engineering avoid overfitting? by Constant-Cranberry29
Well, selection is part of engineering, is it not?
Constant-Cranberry29 OP t1_jbixwf1 wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
I think feature selection and feature engineering are different
neuralbeans t1_jbixife wrote
Constant-Cranberry29 OP t1_jbiutcs wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
Can you provide a reference that states that feature engineering can address overfitting?
neuralbeans t1_jbiu3io wrote
Yes, if the features include the model's target output; then overfitting would just result in the model outputting that feature as-is. Of course this is a useless solution, but the more similar the features are to the output, the less of a problem overfitting will be and the less data you will need to generalise.
trajo123 t1_jbits7f wrote
Posting assignment questions to Reddit.
Bielh t1_jbir7wd wrote
Reply to this is reality. by Genius_feed
Udemy course: Learn ChatGPT
rkstgr t1_jbia52h wrote
First of all, beta_t is just a predefined variance schedule (in the literature, often linearly interpolated between 1e-4 and 1e-2), and it defines the variance of the noise added at step t. What you have in (1) is the variance of the sample x_t, which does not have to be beta_t.
What does hold for large t is Var(x_t) = 1, as the sample converges to a standard Gaussian with mean 0 and variance 1.
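A quick numerical sketch of that convergence (my own illustration, assuming the standard DDPM closed form x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps, and data with variance 4 so the convergence is visible):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 2e-2, T)     # the predefined variance schedule beta_t
alpha_bar = np.cumprod(1.0 - betas)    # alpha_bar_t = prod_{s<=t} (1 - beta_s)

# Var(x_t) = alpha_bar_t * Var(x_0) + (1 - alpha_bar_t)
var_x0 = 4.0
var_xt = alpha_bar * var_x0 + (1.0 - alpha_bar)

print(var_xt[0])    # ~4.0: early on, Var(x_t) is close to Var(x_0), not beta_t
print(var_xt[-1])   # ~1.0: for large t, x_t converges to a standard Gaussian
```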
Murky-View-3380 t1_jbi7xvo wrote
Reply to How can i improve my model in order to get more accuray and less loss?? Thanks by Electronic-Clerk868
Bro, ask ChatGPT.
Nerveregenerator t1_jbmem9l wrote
Reply to comment by General-Jaguar-8164 in AskReddit: which MBP Pro is future proof for AI apps development? by General-Jaguar-8164
I think you need to research the basics of AI hardware. Almost all laptops are worthless for this; the exceptions are specially designed machines, and even those only provide mediocre performance.