Recent comments in /f/MachineLearning

gengarvibes t1_jceeflx wrote

What’s also interesting about standardized tests is that in minority circles we talk mostly about ableism and classism, with classism taking up a lot of the space in those conversations. I mean, who has the time and money to study for these tests right after taking out loans for undergrad? ChatGPT’s training data really seems to be skewed towards the wealthy. That scares me.

0

JustAnAlpacaBot t1_jcedea5 wrote

Hello there! I am a bot raising awareness of Alpacas

Here is an Alpaca Fact:

Alpaca beans make excellent fertilizer and tend to defecate in only a few places in the paddock.



0

justprotein t1_jceadxu wrote

No one said they were completely open, but there was a tradition of releasing papers, ideas, architectures, etc., which really helped the field. That tradition is now at risk because a set of people leveraged all of it and now want everyone to regret being “open” with their research. I think the “Open” in OpenAI is trolling.

10

prettyyyyprettyygood t1_jce8ye3 wrote

I think "Machine Learning Engineer" or even just "Data Scientist". Most of the jobs out there are probably exactly like this. In fact there's a shortage of people who want to do the 'boring' stuff, compared to people who want to be researchers. If you are good at MLops and implementing solutions that you know will work, you're super valuable.

1

Jadien t1_jce8fcr wrote

The idea is that at Google, Meta, or Microsoft scale, the companies and their patent portfolios are so sprawling in what they do and cover that it's improbable there aren't multiple infringements on both sides. It's effectively impossible to determine how much infringement your own company is committing, because it's infeasible even to enumerate everything your company is doing, much less to ensure none of it intersects with a given patent portfolio. So it's a fair assumption that multiple infringements exist in both directions.

3

chhaya_35 t1_jce7fx3 wrote

I jumped from ML (2 years) to MLOps (1 year), then to backend engineering (1 year). Now I'm planning to come back to ML. I mostly worked in CV and was a bit bored since we mostly used off-the-shelf models, but the other endeavours turned out to be even more boring for me. It's my personal opinion and there's nothing wrong with those fields; I just didn't find them intellectually challenging. For me it was also a bit stressful because there were a lot of systems in place, which created complexity. I'm coming back to ML because I find it satisfying (although it has its cons) and I enjoy it more than the other fields I've tried.

2

Remarkable_Ad9528 t1_jce6hst wrote

I think AI Ethics teams are going to become increasingly important to protect companies against lawsuits out the wazoo, although it's odd that Microsoft laid off its Ethics and Society team (though from what I read, it still has an “Office of Responsible AI”, which creates rules to govern the company’s AI initiatives).

Bloomberg Law published a piece last week discussing how 79% of companies leverage AI in some way during the hiring process. The whole point of the article was that there's more regulatory pressure on the horizon for auditing this and other uses of AI, especially in Europe and NYC.

Through that article, I found an agency that audits algorithms. I expect businesses of this nature to grow, just as SEO agencies did a while back.

Also last week, the US Chamber of Commerce published a "report" calling for policymakers to establish a regulatory framework for responsible and ethical AI. Some of the key takeaways were the following:


>The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
>
>Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.
>
>A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
>
>The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
>
>The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
>
>Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.


In summary, I think tech companies will have in-house AI Ethics teams that work with external auditors and try to remain in compliance with regulations.

I'm currently a principal SWE at a tech company, but I think my job will be outsourced to AI within the next 5 years, so I've started to vigorously keep up with AI news.

I even started an email list called GPT Road (it publishes AI updates on weekdays at 6:30 AM EST) to keep myself and others up to date. If you or anyone reading this post is interested, please feel free to join. I don't make any money from it, and there are no ads; it's just a hobby, but I do my best (it's streamlined and in bullet-point form, so it's quick to read). There are only ~320 subscribers, so it's a small community.

1