Recent comments in /f/MachineLearning
WH7EVR t1_jce683f wrote
Reply to comment by ReginaldIII in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
Quality > Effort. I welcome the higher-quality comments and content we'll be getting by augmenting human laziness with AI speed and ability.
TheWittyScreenName OP t1_jce65n4 wrote
Reply to comment by Daos-Lies in [D] Is there an expectation that epochs/learning rates should be kept the same between benchmark experiments? by TheWittyScreenName
This was useful! I appreciate your detailed response, thanks!
VelveteenAmbush t1_jce5y2v wrote
Reply to comment by I_will_delete_myself in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
If only it was like paying philosophers. More often it is like paying anti-corporate activists to sit inside the corporation and cause trouble. There's no incentive for them to stay targeted at things that are actually unethical -- nor even any agreement on what those things are. So they have a structural incentive to complain and block, because that is how they demonstrate impact and accrue power.
blablanonymous t1_jce5h65 wrote
I bet you $249.99k it’s just a bunch of dad jokes
professorlust t1_jce4rv6 wrote
Reply to comment by BrotherAmazing in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Check out Arxiv if you think only academic researchers are publishing
nat_friedman OP t1_jce40o0 wrote
Reply to comment by noxiousmomentum in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
I am funding it, together with Daniel Gross.
BrotherAmazing t1_jce3zky wrote
Reply to comment by professorlust in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I would be happy to sign an NDA if Google allowed me to have access to verify, validate, and run some of their most prized models they keep secret and have not released, and it is incredibly rare for an NDA to last forever.
Also, a lot of research goes on behind closed doors among people who have signed NDAs. They still replicate each other’s work and verify and validate it, they just don’t publish it for you to read.
This thread isn’t specifically about “replication research” across the broader international community either, is it? OP did not indicate that, and primary research a company performs and then successfully transitions into a system that empirically outperforms the competition is validation enough; it need not be replicated by their competitors. In fact, the whole point is that you don’t want anyone to replicate it, but it is still valid, useful research if you bring a product to market that everyone demands and finds useful.
When you work for Google or nearly any company and move away from academia, you can’t automatically publish everything you learn about or everything you do at the company. Are you really under that impression? Have you ever worked in the corporate world??
noxiousmomentum t1_jce35h7 wrote
so i can do this. but is the prize real? who funds this?
nat_friedman OP t1_jce2uq5 wrote
Reply to comment by IntelArtiGen in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
It's good feedback to know this wasn't clear! I will edit the scrollprize.org/data page to be even more explicit about this.
rainnz t1_jce240r wrote
Reply to [D] Simple Questions Thread by AutoModerator
I have degree in CS but have not done anything with ML, AI, NN or CV.
I want to create a simple program, which I intend to run on an Nvidia Jetson Nano, that will process a live HDMI video stream from a street video camera. If someone appears in the video feed holding a sign with a specific sports team's symbol, like the Arizona Cardinals, I want this to be detected right away and some action performed, like sending an email.
Is it something I can do with OpenCV's object detection? If not - please let me know what would be the appropriate framework I'd need to use for this.
Thank you.
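For reference, the core of what's being asked (spotting a known logo in a frame) can be sketched with normalized cross-correlation, the same measure behind OpenCV's `cv2.matchTemplate` with the `TM_CCOEFF_NORMED` method. This toy version uses only NumPy so the mechanics are visible; the frame, logo pattern, and detection threshold are all illustrative assumptions, not real camera data:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> float:
    """Best normalized cross-correlation score of `template` over every
    position in `image` (both 2D grayscale float arrays, in [0, 1])."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best = -1.0
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            window = image[y:y + th, x:x + tw]
            w = window - window.mean()
            w_norm = np.sqrt((w ** 2).sum())
            if w_norm == 0 or t_norm == 0:
                continue  # flat patch: correlation undefined, skip
            best = max(best, float((w * t).sum() / (w_norm * t_norm)))
    return best

# Toy "logo" template and a frame containing it.
logo = np.zeros((8, 8))
logo[2:6, 2:6] = 1.0
frame = np.zeros((32, 32))
frame[10:18, 12:20] = logo

score = match_template(frame, logo)
DETECTION_THRESHOLD = 0.9  # illustrative; tune on real footage
if score > DETECTION_THRESHOLD:
    print("logo detected")  # trigger the action (e.g. send an email) here
```

In practice `cv2.matchTemplate` does this in optimized C, and a trained detector (e.g. a small YOLO-style model) would be more robust to perspective and lighting than raw template matching.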
tcho187 t1_jce1you wrote
Reply to comment by nopainnogain5 in [D] To those of you who quit machine learning, what do you do now? by nopainnogain5
I’m happy with the change. MLOps is an exciting field right now. There are a lot of companies that need people who know ML and software engineering.
twilight-actual t1_jce1tou wrote
Reply to comment by VelveteenAmbush in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
The cat's kinda out of the bag at this point. But a non-profit public trust that acted as a patent-store to enforce the public dissemination of any derivative works based on the ideas maintained by the patent-store could make a huge difference ten, twenty years down the road. It would need an initial endowment to get started, retain a lawyer or two to manage it.
And then, publicize the hell out of it; evangelize the foundation on every college campus with a CS department. When students have established a new state of the art with ML, they can toss the design to the foundation in addition to arxiv and wherever else they might publish.
professorlust t1_jce19sb wrote
Reply to comment by BrotherAmazing in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
What researcher is signing an NDA?
That’s literally the opposite of what replication research is supposed to accomplish.
Operating under an NDA is for primary research, not replication
Smallpaul t1_jce114b wrote
Reply to comment by VelveteenAmbush in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Just to be clear, I was just elaborating on /u/twilight-actual’s idea.
Hydreigon92 t1_jce0yhf wrote
Reply to comment by rustlingdown in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
> Ethics teams are only useful if they are actively incorporated with and listened by engineering and business teams
I'm an ML fairness specialist who works on a responsible AI team, and in my experience, the best way to do this is to operate a fully-fledged product team whose "customers" are other teams in the company.
For example, I built an internal Python library that other teams can use to perform fairness audits of recommendation systems, so they can compute and report these fairness metrics alongside traditional rec. system performance metrics during the model training process. Now when the Digital Service Act goes into effect, and we are required to produce yearly algorithmic risk assessments of recommender systems, we already have a lot of this tech infrastructure in place.
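A fairness audit of the kind described above can be sketched as comparing how recommendation exposure is distributed across item groups. This is a minimal toy version; the metric (exposure parity gap), the group labels, and the function names are illustrative assumptions, not the commenter's actual library:

```python
from collections import Counter

def exposure_by_group(top_k_lists, item_group):
    """Share of recommendation slots occupied by each item group
    across all users' top-k lists."""
    counts = Counter()
    total = 0
    for items in top_k_lists:
        for item in items:
            counts[item_group[item]] += 1
            total += 1
    return {g: c / total for g, c in counts.items()}

def exposure_parity_gap(top_k_lists, item_group):
    """Largest difference in exposure share between any two groups
    (0.0 means perfectly equal exposure)."""
    shares = exposure_by_group(top_k_lists, item_group)
    return max(shares.values()) - min(shares.values())

# Hypothetical audit: two users' top-3 lists, items tagged by provider group.
item_group = {"a": "indie", "b": "major", "c": "major", "d": "indie"}
recs = [["a", "b", "c"], ["b", "c", "d"]]
gap = exposure_parity_gap(recs, item_group)  # reported next to rec metrics
```

A team could compute such a gap alongside NDCG or recall during training, which is the "report fairness metrics next to performance metrics" workflow the comment describes.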
-Rizhiy- t1_jce09xx wrote
Reply to comment by 1F9 in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
There is a reason it is called PyTorch :)
IntelArtiGen t1_jce09i7 wrote
Reply to comment by nat_friedman in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
Oh nice! Thanks for the clarification. I thought it was just one big archive, but yeah it makes much more sense that way
zbyte64 t1_jcdzvhh wrote
Reply to comment by Philpax in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
That's why all my ML is done in Objective-C /s. Production looks different for different use cases.
nat_friedman OP t1_jcdzndc wrote
Reply to comment by IntelArtiGen in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
You can download arbitrary subsets of the scroll, and we provide scripts to do so on the download page. Each file is about 120MB and represents an 8µm horizontal slice (stacked from bottom to top). So if you download 125 of these files, that's a millimeter slice through the scroll. A centimeter is about 150GB. Still big, but more manageable.
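The slice arithmetic above can be sanity-checked in a few lines (figures are the approximate ones quoted in the comment):

```python
# Back-of-the-envelope check of the scroll slice arithmetic.
SLICE_UM = 8    # each file is an 8 micrometer horizontal slice
SLICE_MB = 120  # approximate size of one slice file

slices_per_mm = 1000 // SLICE_UM          # 125 files per millimeter
mm_gb = slices_per_mm * SLICE_MB / 1000   # ~15 GB per millimeter slab
cm_gb = 10 * mm_gb                        # ~150 GB per centimeter
print(slices_per_mm, mm_gb, cm_gb)
```

So a contestant can start with a millimeter-scale slab (~15 GB) rather than the full 4.7 TB archive.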
super_deap t1_jcdz573 wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
RIP 💀 scientific progress for "the entire humanity" for the profits of a few. :(
The only way forward is if we as a collective AI community systematically fight against this type of censorship, or we might end up in an AI-dominated Orwellian world.
Ironic that I had read Life 3.0 by Max Tegmark, where he was one of the people raising concerns about the future of AI and trying to build an organization called 'OpenAI' for the benefit of mankind.
CosmosKrew t1_jcdyrd2 wrote
Reply to [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
I really could get into PyTorch if they provided a functional interface like Keras. I find it mathematically pleasing.
programmerChilli t1_jcdykn2 wrote
Reply to comment by Philpax in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
The segregation is that the "ML logic" is moving into Python, but you can still export the model to C++.
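That export path can be sketched with TorchScript, one of PyTorch's supported routes to C++ deployment; the tiny module and file name here are made-up examples:

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()

# Compile the Python module to TorchScript, a Python-free representation...
scripted = torch.jit.script(model)
# ...and serialize it for the C++ runtime.
scripted.save("tiny_model.pt")
```

The saved file can then be loaded from C++ with `torch::jit::load("tiny_model.pt")` using libtorch, so the modeling stays in Python while serving can stay in C++.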
IntelArtiGen t1_jcdxtrb wrote
The challenge looks very cool but also quite hard. However, if it's truly possible to read that ink and unfold these scrolls, I'm sure ML and data processing will be able to do it.
4.7 TB (for two scrolls) seems like a lot, but I also get that it's due to the resolution required to detect ink. I guess people can test their algorithms first on the other datasets and find a way to process these 4.7 TB if they need to. Perhaps the task could be more accessible if people could easily access 1/4~1/8 of one scroll (0.5–1 TB)
VelveteenAmbush t1_jcdxc8v wrote
Reply to comment by Smallpaul in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Maybe you're onto something.
I guess the trick is coming up with foundational patents that can't be traced back to a large tech company that would worry about being countersued. Like if you make these inventions at Google and then Google contributes them to the GPL-esque patent enforcer entity, and then that entity starts suing other tech co's, you can bet that those tech co's will start asserting their patents against Google, and Google (anticipating that) likely wouldn't be willing to contribute the patents in the first place.
Also patent litigation is really expensive, and you have to prove damages.
But maybe I'm just reaching to find problems at this point. It's not a crazy idea.
FigureClassic6675 t1_jce6d2r wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
OpenAI is a research organization that has made significant contributions to the field of artificial intelligence. While the organization has not always released its research findings publicly, it has also collaborated with other research institutions and made some of its research open-source.
Regarding the issue of OpenAI benefiting from others' research, it is important to note that all research builds upon previous work in the field. OpenAI researchers are likely to have cited and built upon the work of others in their research, just as other researchers have likely cited and built upon OpenAI's work.
As for the question of whether Google Meta should enforce its patents against OpenAI, that is ultimately a decision for Google Meta to make based on its own business interests and legal considerations. It is worth noting that many technology companies engage in patent litigation as a means of protecting their intellectual property and asserting their market position, but this is a complex and contentious issue with many different perspectives and implications. Ultimately, the best outcome would be for all parties involved to find a way to collaborate and share knowledge in a way that benefits everyone in the field of AI research.