Recent comments in /f/MachineLearning

FigureClassic6675 t1_jce6d2r wrote

OpenAI is a research organization that has made significant contributions to the field of artificial intelligence. While the organization has not always released its research findings publicly, it has collaborated with other research institutions and made some of its research open-source.

Regarding the issue of OpenAI benefiting from others' research, it is important to note that all research builds upon previous work in the field. OpenAI researchers are likely to have cited and built upon the work of others in their research, just as other researchers have likely cited and built upon OpenAI's work.

As for the question of whether Google or Meta should enforce their patents against OpenAI, that is ultimately a decision for each company to make based on its own business interests and legal considerations. It is worth noting that many technology companies engage in patent litigation as a means of protecting their intellectual property and asserting their market position, but this is a complex and contentious issue with many different perspectives and implications. Ultimately, the best outcome would be for all parties involved to find a way to collaborate and share knowledge in a way that benefits everyone in the field of AI research.

0

VelveteenAmbush t1_jce5y2v wrote

If only it were like paying philosophers. More often it is like paying anti-corporate activists to sit inside the corporation and cause trouble. There's no incentive for them to stay focused on things that are actually unethical -- nor even any agreement on what those things are. So they have a structural incentive to complain and block, because that is how they demonstrate impact and accrue power.

5

BrotherAmazing t1_jce3zky wrote

I would be happy to sign an NDA if Google allowed me access to verify, validate, and run some of their most prized models, which they keep secret and have not released. And it is incredibly rare for an NDA to last forever.

Also, a lot of research goes on behind closed doors among people who have signed NDAs. They still replicate each other’s work and verify and validate it, they just don’t publish it for you to read.

This thread isn’t specifically about “replication research” across the broader international community either, is it? OP did not indicate that, and primary research that a company performs and then successfully transitions into a system that empirically outperforms the competition is validation enough; it need not be replicated by competitors. In fact, the whole point is that you don’t want anyone to replicate it, but it is still valid, useful research if you bring a product to market that everyone demands and finds useful.

When you work for Google or nearly any company and move away from academia, you don’t automatically have the ability to publish everything you do at the company, or everything the company has ever done that you learn about. Are you really under that impression? Have you ever worked in the corporate world??

−3

rainnz t1_jce240r wrote

I have a degree in CS but have not done anything with ML, AI, NNs, or CV.

I want to create a simple program, which I intend to run on an Nvidia Jetson Nano, that will process a live HDMI video stream from a street video camera. If someone appears in the video feed holding a sign with a specific sports team's symbol, like the Arizona Cardinals, I want this to be detected right away and some action performed, like sending an email.

Is this something I can do with OpenCV's object detection? If not, please let me know what the appropriate framework would be.
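
To make the question concrete, here's the sort of naive sketch I had in mind. I have no idea if template matching is even the right tool for this, and the logo image, threshold, capture device index, and mail setup below are all placeholders:

```python
# Rough, untested sketch: match a reference logo image against each frame
# and send an email on the first hit. All paths/thresholds are guesses.
import smtplib
from email.message import EmailMessage

import cv2

TEMPLATE_PATH = "cardinals_logo.png"  # placeholder reference image of the logo
MATCH_THRESHOLD = 0.8                 # placeholder; would need tuning


def send_alert():
    msg = EmailMessage()
    msg["Subject"] = "Sign detected"
    msg["From"] = "camera@example.com"   # placeholder addresses
    msg["To"] = "me@example.com"
    msg.set_content("The target sign appeared in the video feed.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)


template = cv2.imread(TEMPLATE_PATH, cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture(0)  # whatever device the HDMI capture shows up as

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    if best_score >= MATCH_THRESHOLD:
        send_alert()
        break  # stop after the first detection, for simplicity

cap.release()
```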

Thank you.

2

twilight-actual t1_jce1tou wrote

The cat's kinda out of the bag at this point. But a non-profit public trust that acted as a patent store to enforce the public dissemination of any derivative works based on the ideas it maintains could make a huge difference ten, twenty years down the road. It would need an initial endowment to get started and a lawyer or two on retainer to manage it.

And then publicize the hell out of it: evangelize the foundation on every college campus with a CS department. When students have established a new state of the art in ML, they can toss the design to the foundation in addition to arXiv and wherever else they might publish.

2

Hydreigon92 t1_jce0yhf wrote

> Ethics teams are only useful if they are actively incorporated with and listened to by engineering and business teams

I'm an ML fairness specialist who works on a responsible AI team, and in my experience, the best way to do this is to operate as a fully-fledged product team whose "customers" are other teams in the company.

For example, I built an internal Python library that other teams can use to perform fairness audits of recommendation systems, so they can compute and report these fairness metrics alongside traditional rec. system performance metrics during the model training process. Now when the Digital Services Act goes into effect, and we are required to produce yearly algorithmic risk assessments of recommender systems, we already have a lot of this tech infrastructure in place.
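
To give a flavor of what that looks like, here is a minimal sketch along the same lines (the names and the exposure metric are illustrative, not our actual API):

```python
# Illustrative sketch of a fairness-audit helper (hypothetical names, not
# a real internal library). Measures how evenly top-k recommendation
# exposure is spread across item groups.
from collections import Counter
from typing import Mapping, Sequence


def exposure_by_group(
    recommendations: Sequence[Sequence[str]],  # top-k item IDs per user
    item_groups: Mapping[str, str],            # item ID -> group label
) -> dict:
    """Fraction of all recommendation slots occupied by each group."""
    counts = Counter(
        item_groups[item]
        for rec_list in recommendations
        for item in rec_list
    )
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def exposure_parity_gap(exposure: Mapping[str, float]) -> float:
    """Max minus min group exposure; 0.0 means perfectly even exposure."""
    return max(exposure.values()) - min(exposure.values())


# Example: two users' top-3 lists, items tagged by producer group.
recs = [["a", "b", "c"], ["a", "d", "e"]]
groups = {"a": "major", "b": "major", "c": "indie", "d": "indie", "e": "indie"}
exposure = exposure_by_group(recs, groups)
print(exposure)                       # {'major': 0.5, 'indie': 0.5}
print(exposure_parity_gap(exposure))  # 0.0
```

A metric like this gets reported next to the usual relevance numbers during training, so regressions in fairness show up the same way regressions in accuracy do.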

22

nat_friedman OP t1_jcdzndc wrote

You can download arbitrary subsets of the scroll, and we provide scripts to do so on the download page. Each file is about 120MB and represents an 8µm horizontal slice (stacked from bottom to top). So if you download 125 of these files, that's a millimeter slice through the scroll. A centimeter is about 150GB. Still big, but more manageable.
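
For anyone sizing a download, the arithmetic works out like this:

```python
# Back-of-the-envelope sizing, using the numbers above.
MB_PER_FILE = 120      # one file = one 8 µm horizontal slice
UM_PER_SLICE = 8

files_per_mm = 1000 // UM_PER_SLICE            # 125 files per millimeter
gb_per_mm = files_per_mm * MB_PER_FILE / 1000  # 15 GB per mm of scroll
gb_per_cm = 10 * gb_per_mm                     # ~150 GB per centimeter

print(files_per_mm, gb_per_mm, gb_per_cm)      # 125 15.0 150.0
```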

26

super_deap t1_jcdz573 wrote

RIP 💀 scientific progress for "the entire humanity" for the profits of a few. :(

The only way forward is for us, as a collective AI community, to systematically fight against this type of censorship; otherwise we might end up in an AI-dominated Orwellian world.

Ironic that I read Life 3.0 by Max Tegmark, where he was one of the people raising concerns about the future of AI and trying to build an organization called 'OpenAI' for the benefit of mankind.

2

IntelArtiGen t1_jcdxtrb wrote

The challenge looks very cool but also quite hard. However, if it's truly possible to read that ink and unfold these scrolls, I'm sure ML and data processing will be able to do it.

4.7 TB (for two scrolls) seems like a lot, but I also get that it's due to the resolution required to detect the ink. I guess people can test their algorithms on the other datasets first and find a way to process these 4.7 TB if they need to. Perhaps the task would be more accessible if people could easily access 1/4 to 1/8 of one scroll (0.5–1 TB).

35

VelveteenAmbush t1_jcdxc8v wrote

Maybe you're onto something.

I guess the trick is coming up with foundational patents that can't be traced back to a large tech company that would worry about being countersued. Like if you make these inventions at Google and then Google contributes them to the GPL-esque patent enforcer entity, and then that entity starts suing other tech co's, you can bet that those tech co's will start asserting their patents against Google, and Google (anticipating that) likely wouldn't be willing to contribute the patents in the first place.

Also patent litigation is really expensive, and you have to prove damages.

But maybe I'm just reaching to find problems at this point. It's not a crazy idea.

5