Recent comments in /f/deeplearning
David202023 t1_iznb1ut wrote
Looks very nice, kudos! Can we expect a similar tool for Pytorch?
boosandy OP t1_izmkhg7 wrote
Reply to comment by incrediblediy in Graphics Card set up for deep learning by boosandy
Oh, nice setup. I have a Thermaltake 850 W 80+ Gold. Judging by your answer, I think my desktop will be okay.
incrediblediy t1_izm4ey7 wrote
Reply to comment by boosandy in Graphics Card set up for deep learning by boosandy
Looks like GTX 980 4 GB = 165 W and RTX 2060 6 GB = 160 W, which would be 325 W. I haven't used Intel K CPUs, so I'm not that familiar with their power usage, but I think 850 W would be more than enough, if it is a proper 850 W PSU, even considering the power draw of other components like the motherboard, RAM, SSD, etc.
You can use this to calculate power requirement https://outervision.com/power-supply-calculator
My power usage is AMD Ryzen 5600X (75 W) + RTX 3060 (170 W) + RTX 3090 (350 W) = 595 W at max; I think the total with other components was 750 W (system power budget: https://outervision.com/b/8XoZwf ).
I have an 850 W, Tier A Deepcool PQ850M, which is a Seasonic-based 80+ Gold. I have power stress tested it with OCCT and it was fine.
komunistbakkal t1_izlqyou wrote
Reply to Does anyone know how to get the NxNx12 from the input image - is it just using reshape function or is there any other function that can be used by Actual-Performer-832
It can also be done with pixel unshuffle. Here is the PyTorch reference: https://pytorch.org/docs/stable/generated/torch.nn.PixelUnshuffle.html
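A minimal sketch of what that looks like (the shapes here are assumed: a 3-channel image halved spatially into 12 channels):

```python
import torch
import torch.nn as nn

# PixelUnshuffle(2) folds each 2x2 spatial block into the channel axis:
# (B, C, 2N, 2N) -> (B, 4C, N, N)
unshuffle = nn.PixelUnshuffle(2)

x = torch.randn(1, 3, 8, 8)  # a single 3-channel 8x8 image
y = unshuffle(x)
print(y.shape)  # torch.Size([1, 12, 4, 4])
```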
trialofmiles t1_izld8ft wrote
Reply to Does anyone know how to get the NxNx12 from the input image - is it just using reshape function or is there any other function that can be used by Actual-Performer-832
This is effectively spaceToDepth: https://www.researchgate.net/figure/Intuitive-examples-of-space-to-depth-During-the-space-to-depth-method-top-we-aim-to_fig3_347156516
To accomplish the specific op here, use spaceToDepth with block_size=2.
https://discuss.pytorch.org/t/is-there-any-layer-like-tensorflows-space-to-depth-function/3487/12
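A hand-rolled spaceToDepth along the lines of that thread might look like this (a sketch, not taken verbatim from the thread):

```python
import torch

def space_to_depth(x, block_size=2):
    # (B, C, H, W) -> (B, C*block_size**2, H//block_size, W//block_size)
    b, c, h, w = x.shape
    # Split each spatial dim into (blocks, within-block) ...
    x = x.reshape(b, c, h // block_size, block_size, w // block_size, block_size)
    # ... move the within-block axes next to the channel axis ...
    x = x.permute(0, 1, 3, 5, 2, 4)
    # ... and merge them into it.
    return x.reshape(b, c * block_size ** 2, h // block_size, w // block_size)

x = torch.arange(16.).reshape(1, 1, 4, 4)
y = space_to_depth(x)
print(y.shape)  # torch.Size([1, 4, 2, 2])
```

Channel 0 of the output holds the top-left pixel of every 2x2 block, channel 1 the top-right, and so on.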
RichardBJ1 t1_izlbf63 wrote
Looks great, defo like to have a go with this. Perhaps show epoch n/total epochs too, though?
Final-Rush759 t1_izkyupf wrote
Reply to Does anyone know how to get the NxNx12 from the input image - is it just using reshape function or is there any other function that can be used by Actual-Performer-832
Use a convnet to transform to the right shape. You may need to use dilation.
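As a sketch of that idea, a stride-2 convolution maps the spatial resolution down while expanding channels; the kernel size and padding here are illustrative, and unlike a pure reshape the rearrangement is learned:

```python
import torch
import torch.nn as nn

# A stride-2 conv maps (B, 3, 2N, 2N) -> (B, 12, N, N).
conv = nn.Conv2d(in_channels=3, out_channels=12,
                 kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 3, 8, 8)
y = conv(x)
print(y.shape)  # torch.Size([1, 12, 4, 4])
```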
NulkStomomocg t1_izktta5 wrote
Reply to the biggest risk with generative AI is not its potential for misinformation but cringe. by hayAbhay
Nice use of violet in this space, mate.
VinnyVeritas t1_izk4aoa wrote
Wow, that's a nice extension/replacement. Very cool work!
saw79 t1_izjv9wa wrote
Reply to Does anyone know how to get the NxNx12 from the input image - is it just using reshape function or is there any other function that can be used by Actual-Performer-832
Sometimes that beautiful one-liner just isn't worth it compared to something like
torch.cat((
    img[:N, :N],
    img[N:, :N],
    img[:N, N:],
    img[N:, N:],
), dim=-1)
webbersknee t1_izjjea6 wrote
Reply to Does anyone know how to get the NxNx12 from the input image - is it just using reshape function or is there any other function that can be used by Actual-Performer-832
Skimage view_as_windows would do this.
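A possible sketch with skimage (the window and step values are assumed, for non-overlapping 2x2 blocks on an 8x8 RGB image):

```python
import numpy as np
from skimage.util import view_as_windows

img = np.arange(8 * 8 * 3).reshape(8, 8, 3)

# Non-overlapping 2x2 windows over the spatial dims, keeping all 3 channels.
# Result shape: (4, 4, 1, 2, 2, 3); then flatten each window into 12 channels.
windows = view_as_windows(img, window_shape=(2, 2, 3), step=(2, 2, 3))
out = windows.reshape(4, 4, 12)
print(out.shape)  # (4, 4, 12)
```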
robbsc t1_iziz6ie wrote
Reply to Does anyone know how to get the NxNx12 from the input image - is it just using reshape function or is there any other function that can be used by Actual-Performer-832
I don't have the time to figure it out, but I'm pretty sure you can do it through some combination of permutations and reshapes. Play around with an NxN numpy array (e.g. np.arange(8**2).reshape(8,8)) and perform various transposes and reshapes and see what comes out. You might have to add and remove an axis at some point too.
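Following that suggestion, one combination that works for 2x2 blocks (a sketch; the exact axis order depends on the channel layout you want):

```python
import numpy as np

a = np.arange(8 ** 2).reshape(8, 8)

# Split each spatial dim into (blocks, within-block), move the within-block
# axes to the end, then flatten them into a depth axis.
b = a.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(4, 4, 4)
print(b.shape)   # (4, 4, 4)
print(b[0, 0])   # [0 1 8 9] -- the top-left 2x2 block of `a`
```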
boosandy OP t1_iziy5do wrote
Reply to comment by incrediblediy in Graphics Card set up for deep learning by boosandy
Depending on my setup, do you think an 850 watt PSU will be good enough? My CPU is an i7 6700K.
computing_professor t1_iziungw wrote
Reply to comment by WhizzleTeabags in Graphics Card set up for deep learning by boosandy
How would a 2x A5000 system differ from a single A6000 in actual use? Are your cards treated as a single card by the software?
computing_professor t1_izitpwa wrote
Reply to comment by WhizzleTeabags in Graphics Card set up for deep learning by boosandy
Here is a thread where we talked about this with GeForce cards. It's not treated as a single GPU and apparently you still need to parallelize. At least that's what I was told in that thread.
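For illustration, the explicit parallelization usually looks something like this in PyTorch (nn.DataParallel shown as the simplest option; the model and sizes are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # placeholder model

# Two cards never show up as one big GPU; you split the work yourself.
# DataParallel scatters each batch across device_ids and gathers the outputs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()
```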
horselover_f4t t1_izik5mz wrote
Reply to comment by suflaj in What framework can I use to quantize a deep learning model to specific bit-widths? by MahmoudAbdAlghany
I will have to check that out, thank you!
suflaj t1_izihg01 wrote
Reply to comment by horselover_f4t in What framework can I use to quantize a deep learning model to specific bit-widths? by MahmoudAbdAlghany
That is only the code license for the open-source portion. TensorRT itself is proprietary software, and you also have to agree to its SDK license: https://docs.nvidia.com/deeplearning/tensorrt/sla/index.html
In there, ownership is phrased so ambiguously that a company's legal team would probably never greenlight using it.
horselover_f4t t1_izibm6r wrote
Reply to comment by suflaj in What framework can I use to quantize a deep learning model to specific bit-widths? by MahmoudAbdAlghany
Can I ask you what you mean by "implicitly prevents"?
https://github.com/NVIDIA/TensorRT/blob/main/LICENSE seems to permit commercial use, do you refer to trademarks?
incrediblediy t1_izi6n58 wrote
Reply to Graphics Card set up for deep learning by boosandy
>Now if I connect my 2060 along with the GTX 980, and connect my display to the 980, will pytorch use the whole VRAM of the 2060?
Yes, I have a similar setup: RTX 3090 - no display (full VRAM for training), RTX 3060 - 2 monitors.
When I play games, I connect one monitor to the RTX 3090 and play on that, with the other monitor on the RTX 3060.
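In PyTorch that just means pinning work to the headless card explicitly; the device index below is an assumption, so check how your GPUs enumerate:

```python
import torch

# Send training tensors/models to the GPU with no displays attached.
# "cuda:0" is assumed to be the headless card; verify with nvidia-smi.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(4, 3, 32, 32).to(device)
print(x.device)
```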
Volhn t1_izhyjsl wrote
Reply to comment by WhizzleTeabags in Graphics Card set up for deep learning by boosandy
Cool, might try that with my 2080s then. Thnx.
WhizzleTeabags t1_izhyccl wrote
Reply to comment by Volhn in Graphics Card set up for deep learning by boosandy
Works fine with my A5000s
Volhn t1_izhxupy wrote
Reply to comment by WhizzleTeabags in Graphics Card set up for deep learning by boosandy
Is that confirmed to work? I've read mixed things about same-model GPUs connected by NVLink.
WhizzleTeabags t1_izhxbho wrote
Reply to comment by Volhn in Graphics Card set up for deep learning by boosandy
Unless he gets nvlink
gahaalt OP t1_izo0w5m wrote
Reply to comment by David202023 in Progress Table - is it better than TQDM for your use case? by gahaalt
Hi! Thanks for the feedback.
Actually, Progress Table is not tied to Keras or any other Deep Learning framework. You can use Progress Table to track any long-running process that produces data. The source code is not neural network specific :)
To help you start out, I've created a markdown file with PyTorch integration example. Check this out: integrations.md. Let me know if it's clear!