Recent comments in /f/MachineLearning
pyonsu2 t1_jcen9vb wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Proving this is possible is already valuable.
Soon-ish, open-source communities will figure it out and build something even “better”
Disastrous_Elk_6375 t1_jcem481 wrote
Has there been any attempt to replicate the condition of these scrolls with replicas containing known text? (i.e., take the best papyrus analogue, paint it with the best ink analogue, and burn it in a way that's a good guess at what's actually inside the real scrolls?)
jakderrida t1_jcekh6h wrote
Reply to comment by kizumada in [N] Baidu to Unveil Conversational AI ERNIE Bot on March 16 (Live) by kizumada
I know that ERNIE 3.0 did amazingly on the benchmarks and is allegedly the best on most of the leaderboards. However, it seems inaccessible in English-language form, if at all.
UnusualClimberBear t1_jceked4 wrote
Pure PR. And please do not slow down other projects.
duboispourlhiver t1_jcek81c wrote
Reply to comment by 1F9 in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
IMHO this can only be answered on a case-by-case basis; there is no general rule. If anyone really understands what has been moved to Python and what the consequences are, their insights are welcome.
fnordit t1_jcejm2y wrote
Ahaha, it has the mediocre-high-school-essay format down pat. Solid 4/5, if I'm remembering those grading scales right.
Disastrous_Elk_6375 t1_jceiuks wrote
Reply to comment by gengarvibes in [D] ChatGPT responds to criticisms of GPT-4's high test scores. by spiritus_dei
What does that even mean?!
ProfessionalTheory8 t1_jcegqsu wrote
Are you sure that this sub exists in the first place? The only mentions I could find of it are this thread, a comment from the "panic in NLP orgs" thread, and this comment from a month ago, which says that it doesn't exist.
[deleted] t1_jceg6a8 wrote
Reply to comment by gengarvibes in [D] ChatGPT responds to criticisms of GPT-4's high test scores. by spiritus_dei
[removed]
gengarvibes t1_jceeflx wrote
What’s also interesting about standardized tests is that in minority circles we speak about them largely in terms of ableism and classism, with classism taking up a lot of the space in these collective conversations. I mean, who has the time and money to study for these tests after just taking out loans for undergrad? ChatGPT’s training data really seems to be skewed toward the wealthy. That scares me.
oathbreakerkeeper t1_jcee0rj wrote
Reply to comment by MysteryInc152 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Where/when did they hint that?
JustAnAlpacaBot t1_jcedea5 wrote
Reply to comment by generatorman_ai in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Hello there! I am a bot raising awareness of Alpacas
Here is an Alpaca Fact:
Alpaca beans make excellent fertilizer and tend to defecate in only a few places in the paddock.
You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!
generatorman_ai t1_jceddn2 wrote
Reply to comment by kittenkrazy in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Found this: https://github.com/tloen/alpaca-lora
ML4Bratwurst t1_jced3ae wrote
Reply to comment by 1F9 in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
Because we all know that python can't call c++ code
MaximusPrimus01 t1_jcec11t wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
They can enforce patents all they want. The true power of AI comes from it being open source. Community > corporations
[deleted] OP t1_jcebuo4 wrote
Reply to comment by amhotw in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Not entirely true tbh; I'm willing to bet that most Chinese were supportive of the CCP when it first came to power.
justprotein t1_jceadxu wrote
Reply to comment by BrotherAmazing in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
No one said they were completely open, but there was a tradition of releasing papers, ideas, architectures, etc., which really helped the field and is now at risk because a set of people leveraged all of this and wants others to regret being “open” with their research. I think the "Open" in OpenAI is trolling.
WaterslideOfSuccess t1_jce9fcg wrote
Brent was working on this when I was at UK in 2014. I might waste some time on this since I just lost my job and have disposable time lol
prettyyyyprettyygood t1_jce8ye3 wrote
Reply to comment by nopainnogain5 in [D] To those of you who quit machine learning, what do you do now? by nopainnogain5
I think "Machine Learning Engineer" or even just "Data Scientist". Most of the jobs out there are probably exactly like this. In fact there's a shortage of people who want to do the 'boring' stuff, compared to people who want to be researchers. If you are good at MLops and implementing solutions that you know will work, you're super valuable.
[deleted] t1_jce8ods wrote
[deleted]
Jadien t1_jce8fcr wrote
Reply to comment by BrotherAmazing in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
The idea is that at Google, Meta, or Microsoft scale, the companies and their patent portfolios are so sprawling in what they do and cover that it is improbable that there aren't multiple infringements on both sides. It is in fact impossible to determine how much infringement your company is committing, because it is infeasible even to enumerate everything your company is doing, much less to ensure there is no intersection with a given patent portfolio. So it's a fair assumption that multiple infringements exist in both directions.
chhaya_35 t1_jce7fx3 wrote
I jumped from ML (2 years) to MLOps (1 year), then to backend engineering (1 year). Now I am planning to come back to ML. I mostly worked with CV and was a bit bored since we mostly used off-the-shelf models, but the other endeavours turned out to be even more boring for me. It's my personal opinion; there's nothing wrong with the fields. I just don't find them intellectually challenging. For me it was a bit stressful because there were a lot of systems in place, which created complexities. I'm coming back to ML because I find it satisfying (although it has its cons), and I do enjoy it more than the other fields I have tried.
WH7EVR t1_jce6k91 wrote
Reply to comment by noxiousmomentum in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
Nat Friedman is a multi-millionaire tech entrepreneur, since he uh -- didn't really introduce himself.

/u/nat_friedman, not everyone knows who you are, or that you're loaded, bro.
Remarkable_Ad9528 t1_jce6hst wrote
I think AI Ethics teams are going to become increasingly important for protecting companies against lawsuits out the wazoo, although it's weird that Microsoft laid off its Ethics and Society team (from what I read, though, they still have an “Office of Responsible AI”, which creates rules to govern the company’s AI initiatives).
Bloomberg Law published a piece last week discussing how 79% of companies leverage AI in some way during the hiring process. The whole point of the article was that there's more regulatory pressure on the horizon for auditing this and other forms of AI, especially in Europe and NYC.
From that article, I found an agency that audits algorithms. I expect businesses of this nature to grow, just like SEO agencies did a while back.
Also last week, the US Chamber of Commerce published a "report" that called for policy makers to establish a regulatory framework for responsible and ethical AI. Some of the key takeaways were the following:
>The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
>
>Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.
>
>A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
>
>The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
>
>The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
>
>Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.
In summary, I think that tech companies will have some in-house AI Ethics team that works with external auditors, and tries to remain in compliance with regulations.
I'm currently a principal SWE at a tech company, but I think my job will be outsourced to AI within the next 5 years, so I've started vigorously keeping up with AI news.
I even started an email list called GPT Road (it publishes AI updates weekdays at 6:30 AM EST) to keep myself and others up to date. If you or anyone reading this post is interested, please feel free to join. I don't make any money from it, and there are no ads. It's just a hobby, but I do my best (it's streamlined and in bullet-point form, so quick to read). There are only ~320 subscribers, so it's a small community.
chef1957 t1_jcenkpv wrote
Reply to comment by nopainnogain5 in [D] To those of you who quit machine learning, what do you do now? by nopainnogain5
It is more software engineering: working on the core package and creating educational content (videos, presentations, etc.) about getting from no data to a decent baseline model. Combining both really helps me understand what people struggle with.