Ronny_Jotten t1_j21sqgg wrote
Reply to comment by TheTrueBlueTJ in [P] We finally got Text-to-PowerPoint working!! (Generative AI for Slides ✨) by Mastersulm
I think they meant a personal "breakthrough" in terms of getting their project up and running, not that they consider it an important breakthrough in the world of technology...
Ronny_Jotten t1_j0vkk1z wrote
Reply to comment by Felice_rdt in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
Not just nitpicking about "site" vs "federation". Your whole statement wrongly characterizes what Sigmoid is.
If you go to the front page of Reddit, you'll also see a bunch of stuff about Musk and Twitter, plus Amber Heard, other pointless gossip, clickbait, and videos of people falling down. That tells you literally nothing about the ML community here.
The reason the person to whom you're responding saw the comments about Musk on Sigmoid is because they were looking at the wrong page (maybe Sigmoid should make "Local" the landing page), not because it's a group of angry Twitter expats.
Ronny_Jotten t1_j0tnech wrote
Reply to comment by Felice_rdt in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
It's not a "site" though. There are many different Mastodon servers, and you can join the one you like. They do carry messages from other servers, but you don't have to look at them; you can just stay on the local server.
Look at the sidebar on the right, and click on "local" instead of "federated" or "explore". It's 90% about ML and AI.
Ronny_Jotten t1_j0jkwbj wrote
Yeah, no. That's just tortured and basically incomprehensible to me, sorry. Bottles of pixels, and an "algorithm" that adjusts the sea?
> The takeaway should be that a GAN does not copy parts of images in the training data
Also not so much:
Image-generating AI can copy and paste from training data, raising IP concerns | TechCrunch
Ronny_Jotten t1_j0hgi63 wrote
Reply to comment by happyhammy in [D] Why are there no good generative music AIs? by happyhammy
It depends what you mean by "AI", but there are already generative music systems that produce far better music than that.
Spectral analysis/resynthesis is certainly important. There have long been tools like MetaSynth that let you do image processing of spectrograms. It's interesting that the "riffusion" project works at all, and it's a valuable piece of research. I can imagine the technique being useful for musicians as a way to generate novel sounds to be incorporated in larger compositions.
But it's difficult to see how it can be used successfully on entire, already-mixed-down pieces, to generate a complete piece of music in that way. Although it can produce some interesting and strange loops, it's hard to call the output that riffusion produces "music" in the sense of an overall composition, and I'm skeptical that this basic technique can be tweaked to do so. I could be wrong, but I still think it's a naive approach, and any actually listenable music-generation system will be based on rather different principles.
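The spectrogram-as-image idea above can be sketched in a few lines. This is a toy illustration, not riffusion's actual code: slice the audio into overlapping frames and take magnitude spectra, which is the "image" an image model would work on. Note that the phase is discarded, which is one reason resynthesizing clean audio from a generated spectrogram is hard (it has to be estimated, e.g. with Griffin-Lim). Frame and hop sizes here are arbitrary.

```python
import numpy as np

def magnitude_spectrogram(signal, frame=256, hop=128):
    """Slice a mono signal into windowed frames and return magnitude spectra.

    The result is a 2D array (time x frequency) - the 'image' that
    spectrogram-based generators operate on. Phase is thrown away.
    """
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame), axis=1))

sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 1000 * t)  # one second of a 1 kHz test tone
spec = magnitude_spectrogram(audio)

# The brightest column of the "image" sits at the tone's frequency bin.
peak_hz = spec.mean(axis=0).argmax() * sr / 256
print(round(peak_hz))  # -> 1000
```

Going the other direction (spectrogram back to audio) is where the losses pile up, which is part of why the technique works better for short loops and textures than for full mixed-down compositions.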
Ronny_Jotten t1_j0cizez wrote
Your theories are somewhat naive. Large companies like Google have no problem getting access to all the music they want. And nobody tries to "hide the fact that they trained their model with copyrighted material". The current state of AI training seems to be that copyright is irrelevant, and it's fair use - though we'll see whether that holds up in court. Nearly everything in LAION is copyrighted images scraped from the web, and they are used without permission for training. Furthermore, anyone can use the Million Song Dataset, and get access to the actual tracks through an API.
Million-song dataset: take it, it’s free | Ars Technica
On the other hand, the idea of turning audio into a 2D spectrogram image, and using the same tools as image-generating AIs, is also naive. Music generation requires a very different approach. There are a multitude of AI music-generation projects, some using GANs. So far, the results have not been as astonishing as the image generators. But that's only a matter of degree, and probably a matter of time.
Ronny_Jotten t1_iymyxhf wrote
Reply to comment by igotbigballs in Government Scientists ‘Approaching What is Required for Fusion’ in Breakthrough Energy Research | Magnetic fields tripled the energy output of a fusion experiment at the National Ignition Facility, reports a new study. by mepper
And they only run for a few minutes. Same could be said about fusion reactors, and probably android butlers too... That is not what I was promised!
Ronny_Jotten t1_iymdqle wrote
Reply to comment by mikeymumbelz in Government Scientists ‘Approaching What is Required for Fusion’ in Breakthrough Energy Research | Magnetic fields tripled the energy output of a fusion experiment at the National Ignition Facility, reports a new study. by mepper
It's been 40 years away for the last 70 years.
Still waiting for my jet pack too, my vacation on the moon, and my two-way wrist TV. Ok, the last one I can check off. When Apple finally comes out with their car, it had better be a flying one...
Ronny_Jotten t1_iydddfe wrote
Reply to comment by Exarctus in Does anyone uses Intel Arc A770 GPU for machine learning? [D] by labloke11
> the statement you're making about AMD GPUs only "being fine in limited circumstances" is absolutely false
Sorry, but there are limitations to the circumstances in which AMD cards are "fine". There are many real-world cases where Nvidia/CUDA is currently required for something to work. The comment you replied to was:
> Limited use in neural network applications at present due to many application's CUDA requirements (though the same could be said of AMD)
It was not specifically about "code that is pure PyTorch", nor self-developed systems, but neural network applications in general.
It's fair of you to say that CUDA requirements can be met with HIP and ROCm if the developer supports it, though there are numerous issues and flaws in ROCm itself. But there are still issues and limitations in circumstances where they don't, as you've just described yourself! You can say that's due to the "laziness" of the developer, but it doesn't change the fact that it's broken. At the least it requires extra development time to fix, if you have the skills. I know a lot of people would appreciate it if you converted the bitsandbytes library! Just because it could work doesn't mean it does work.
The idea that there's just no downside to AMD cards for ML, because of the existence of ROCm, is true only in limited circumstances. "Limited" does not mean "very few", it means that ROCm is not a perfect drop-in replacement for CUDA in all circumstances; there are issues and limitations. The fact that Dreambooth doesn't run on AMD proves the point.
Ronny_Jotten t1_iyd6ouv wrote
Reply to comment by Exarctus in Does anyone uses Intel Arc A770 GPU for machine learning? [D] by labloke11
> My comment was aimed more towards ML scientists (the vast majority of whom are linux enthusiasts) who are developing their own architectures.
Your original comment implied that ROCm works "fine" as a drop-in replacement for CUDA. I don't think that's true. I'm not an ML scientist, but nobody develops in a vacuum. There are generally going to be dependencies on various libraries. The issue with Dreambooth I mentioned involves this, for example:
ROCM Support · Issue #47 · TimDettmers/bitsandbytes
While it should be possible to port it, someone has to take the time and effort to do it. Despite the huge popularity of Dreambooth, nobody has. My preference is to use AMD, and I'm happy to see people developing for it, but it's only "fine" in limited circumstances, compared to Nvidia.
Ronny_Jotten t1_iyd43te wrote
Reply to comment by trajo123 in Does anyone uses Intel Arc A770 GPU for machine learning? [D] by labloke11
> People replying with "don't bother, just use Nvidia&CUDA" only make the problem worse
No, they don't "only make it worse". It's good advice to a large proportion of people who just need to get work done. AMD/Intel need to hear that, and step up, by providing real, fully-supported alternatives, not leaving their customers to fool around with half-working CUDA imitations. ML is such an important field right now, and they've dropped the ball.
Ronny_Jotten t1_iyd1p42 wrote
Reply to comment by Exarctus in Does anyone uses Intel Arc A770 GPU for machine learning? [D] by labloke11
There are many issues with ROCm. "AMD cards should be fine" is misleading. For example, you can get Stable Diffusion to work, but not Dreambooth, because it has dependencies on specific CUDA libraries, etc.:
Training memory optimizations not working on AMD hardware · Issue #684 · huggingface/diffusers
Also, you must be running Linux. AMD cards can be useful, especially with 16 GB VRAM starting in the RX 6800, but they currently require extra effort, and just won't work in some cases.
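For what it's worth, the "drop-in" part of ROCm is real at the PyTorch level: ROCm builds of PyTorch reuse the `torch.cuda` API, so pure-PyTorch code often runs unchanged on AMD. A hedged sketch of telling the builds apart (the function name is my own; `torch.version.hip` is set on ROCm builds and `None` on CUDA builds):

```python
def gpu_backend():
    """Report which GPU backend, if any, this PyTorch build targets.

    Returns one of "no-torch", "rocm", "cuda", or "cpu". Note that on a
    ROCm build, torch.cuda.is_available() reports True for AMD GPUs -
    the CUDA-named API is reused, which is why pure-PyTorch code can run
    unchanged. The breakage comes from libraries with hand-written CUDA
    kernels (like bitsandbytes), which this check says nothing about.
    """
    try:
        import torch
    except ImportError:
        return "no-torch"
    if getattr(torch.version, "hip", None):  # set only on ROCm builds
        return "rocm"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(gpu_backend())
```

That last point is the whole dispute above: the base framework working is necessary but not sufficient, because the ecosystem of CUDA-only extensions is what actually breaks.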
Ronny_Jotten t1_iy9e1on wrote
Reply to comment by Mr_ToDo in You're not wrong - websites have way more trackers now by Sorin61
> You can switch IP's every 5 minutes and it won't mean a think if you've got other trackers updating them on who you are.
Exactly. The only thing a VPN offers in terms of trackers is that you won't be tracked by your IP. If that's all you're after, instead of paying for a VPN, most people can just change their IP once in a while and get the same effect.
There are other reasons why a VPN can be useful, like avoiding geographic restrictions, stopping your ISP from recording your browsing history (e.g. in the US), or torrenting in some countries, etc. But avoiding trackers is not one of them, unless maybe you do use tracker blockers but have a static IP.
This ad-disguised-as-news implying that a VPN is a good way to protect from trackers on websites, complete with affiliate links to NordVPN, is complete bullshit. I thought TechRadar was better than that, apparently I was mistaken.
Ronny_Jotten t1_iy81juy wrote
Reply to "Why does electrolux have a sideways butt wearing a thong as their logo?" :D by Folksvaletti
It's a nod to the people who have a "special" relationship with their machines. Nothing sucks like Electrolux!
Ronny_Jotten t1_iy813so wrote
An ad for a VPN, disguised as a news article. Good work.
Please explain how a VPN avoids trackers, cookies, and browser fingerprinting, as the adicle seems to suggest? Particularly if you have a dynamic IP, like most people?
Ronny_Jotten t1_iwl4ai8 wrote
Reply to comment by blunzegg in [D] If I bought a copy of tv series on Youtube (or other platforms), can I use them for training a model? by DarrenTitor
It's true that there are some exemptions for "fair use" of copyrighted material for educational purposes, but there are details to be aware of and rules to follow. There is no difference between a TV series and general YouTube content in terms of requiring permission (or not, if it's fair use), they are both copyrighted.
You are more likely to get away with copyright infringement against some random youtuber than against a commercial TV show, but only because the latter has a much greater economic interest, and money to pay lawyers to stop you.
Ronny_Jotten t1_iwl38kq wrote
Reply to comment by A1-Delta in [D] If I bought a copy of tv series on Youtube (or other platforms), can I use them for training a model? by DarrenTitor
I don't believe there is such a legal precedent as you describe. Regarding your specific example, there is currently a multi-billion dollar class-action lawsuit against Copilot, for commercial copyright infringement damages.
Ronny_Jotten t1_iwl2wqw wrote
Reply to [D] If I bought a copy of tv series on Youtube (or other platforms), can I use them for training a model? by DarrenTitor
Buying a copy of a TV series doesn't give you any additional rights to use it for training a model. It just gives you a right to possess the copy, and to watch it. If you do train a model, and don't break any DRM / technological protection measures (in the US), and you don't distribute the model or anything generated by it, then it's ok. What you do with it at home is your business. If you do distribute it then...??? Nobody knows for sure about the legality, because it hasn't been tested thoroughly in the courts.
If your work is non-commercial, and has no potential impact on the sales of the TV series, or other economic damage to anyone, there is very little trouble you could get into. It may be seen as "fair use", though that's not a guarantee right now. The worst would be a "cease and desist" or DMCA takedown order, from the lawyers of the rights holders of the show. How likely that is to happen, or succeed if you challenged it in court, would depend on the details of your specific case.
Ronny_Jotten t1_iv5me9t wrote
Reply to [D] Sigmoid Social, an alternative to Twitter by and for the AI Community by regalalgorithm
Dunno about the name choice? Although I'm aware of sigmoid functions, my (and maybe the average person's) immediate reaction and association is the second sense of sigmoid "of, relating to, or being the sigmoid colon". In other words, full of shit and hot air...
Ronny_Jotten t1_iv0vo01 wrote
Reply to comment by CapaneusPrime in [N] Class-action lawsuit filed against GitHub, Microsoft, and OpenAI regarding the legality of GitHub Copilot, an AI-using tool for programmers by Wiskkey
That decision wasn't about copyrighted photos. It was about Google creating a books search index, which was allowed as fair use - just like their scanning of books for previews is. That's an entirely different situation than if Google had trained an AI to write books for sale, that contained snippets or passages from the digitized books.
The latter certainly would not be considered fair use under the reasoning given by the judge in the case. He found that the search algorithm maintained:
> consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders
and that its incorporation into the Google Books system works to increase the sales of the copyrighted books by the authors. None of this can be said about Microsoft's product. It would seem to clearly fail the tests for fair use.
Ronny_Jotten t1_itghh1i wrote
Reply to comment by everydayasl in Yes, Zuckerbergberg is right. WhatsApp is more secure than Apple’s iMessage by juptertk
The name game: How it works?
Zuckerbergberg, Zuckerbergberg, Zuckerbergberg, bananabana fo Fuckerbergberg, me my mo Muckerbergberg, Zuckerbergberg!
Ronny_Jotten t1_j2esha7 wrote
Reply to An Open-Source Version of ChatGPT is Coming [News] by lambolifeofficial
This is clickbait, there's nothing to see here. Wang, among others, has been working on putting together some code as a kind of "proof of concept" that could do RLHF on top of PaLM. Actually doing that on the scale of ChatGPT, i.e. implementing a large, trained, working system, is a completely different story.
The readme includes this:
> This repository has gone viral without my permission. Next time, if you are promoting my unfinished repositories (notice the work in progress flag) for twitter engagement or eyeballs, at least (1) do your research or (2) be totally transparent with your readers about the capacity of the repository without resorting to clickbait. (1) I was not the first, CarperAI had been working on RLHF months before, link below. (2) There is no trained model. This is just the ship and overall map. We still need millions of dollars of compute + data to sail to the correct point in high dimensional parameter space. Even then, you need professional sailors (like Robin Rombach of Stable Diffusion fame) to actually guide the ship through turbulent times to that point.