Recent comments in /f/deeplearning

Nike_Zoldyck OP t1_iuvdvsa wrote

Thanks for your insight. Were you even able to access the link? Turns out it was behind a membership paywall. I updated the link URL, so it should be free now. I couldn't find any helpful solutions to my problem and had to try everything; the last paragraph is what finally solved it, and I had to figure that out through trial and error. So instead of someone new opening 35 tabs next time, I figured I'd consolidate everything I attempted into a post that I can keep editing if I come across anything more, or if someone shares anything useful about their experience with this issue, along with what sort of models they were running.

This was mostly an attempt to collect more info from people who might notice their usual trick isn't mentioned there. I'm glad I could cover everything you already know.

1

Rare_Lingonberry289 OP t1_iuk3uxk wrote

Ok, that makes sense. One more thing, though. According to my research, temporal points during the spring and autumn are more helpful for what I'm trying to do. However, I'm afraid that large time jumps like this will confuse my model, i.e. it will have a hard time detecting features when the time steps are unevenly spaced. Is this a real concern?

1

suflaj t1_iuk2gfj wrote

Yeah, just experiment with it. Like I said, I would start with 4, then go higher or lower depending on your needs. I have personally not seen a temporally sensitive neural network go beyond 6 or 8 time points. As with anything, there are tradeoffs.

Although if you have x, y and c, you will be doing 3D convolutions, not 4D. A 4D convolution on 4D data is essentially a linear layer.
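A minimal sketch of the shape arithmetic behind this (not from the thread; `conv_out_shape` is a hypothetical helper, and the 4-point, 64x64 input is just an illustrative size). Stacking a few time points along a depth axis keeps you at a 3D convolution over (t, y, x) with c as channels, and a kernel that spans the entire input in every axis collapses the output to 1x1x1, i.e. one weighted sum over everything, which is why it degenerates into a linear layer:

```python
def conv_out_shape(in_shape, kernel, stride=1, padding=0):
    """Output spatial shape of a convolution, using the standard
    per-axis formula: out = (in + 2*pad - kernel) // stride + 1."""
    return tuple((s + 2 * padding - k) // stride + 1
                 for s, k in zip(in_shape, kernel))

# 4 time points of 64x64 frames, depth axis = time, 3x3x3 kernel:
print(conv_out_shape((4, 64, 64), (3, 3, 3), padding=1))  # (4, 64, 64)

# Kernel as large as the input in every axis -> output is 1x1x1,
# so the "convolution" is just one linear map over the whole input.
print(conv_out_shape((4, 64, 64), (4, 64, 64)))  # (1, 1, 1)
```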

1

suflaj t1_iuji61k wrote

Probably not.

  • I am almost certain you don't have data that would take advantage of this dimensionality or the resources to process it
  • you can't accumulate so many features and remember all of them in recurrent models
  • I am almost certain you don't have the hardware to house such a large transformer model that could process it
  • I am almost certain you will not get a 365-day history of a sample during inference; 4 days seems more reasonable

1

suflaj t1_iujhefz wrote

I asked for the specific law so I could show you that it cannot apply to end-to-end encrypted systems, which either partly destroy the information, or ensure that the information leaving the premises is comprehensible to nothing but the model, with formal proof that cracking it is infeasible.

These are all long-solved problems; the only hard part is doing the hashing without losing too much information, or making the encryption compact enough to both fit into the model and be comprehensible to it.

2