Recent comments in /f/MachineLearning
ReginaldIII t1_jcdwasr wrote
Reply to comment by Philpax in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
LPT: copy-pasting the bullet-point change notes uses fewer GPUs. The more you know!
Spziokles t1_jcdw6q7 wrote
Reply to comment by namey-name-name in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
I don't work in the field either, so I just forwarded your question to Bing, lol. I thought maybe it could find key takeaways from that "Practical Guide" (see above) to answer your question:
> According to this article, creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained requires educating and upskilling employees, and empowering them to raise important ethical questions. The article also suggests that the key to successfully creating a data and AI ethics program is using the power and authority of existing infrastructure, such as a data governance board that convenes to discuss privacy [1].
> In addition, a blog post on Amelia.ai suggests that an AI ethics team must effectively communicate the value of a hybrid AI-human workforce to all stakeholders. The team must be persuasive, optimistic and, most importantly, driven by data [2].
> Finally, an article on Salesforce.com suggests that the AI ethics team not only develops its own strategy, but adds to the wider momentum behind a better, more responsible tech industry. With AI growing rapidly across industries, understanding how the practices that develop and implement the technology come together is invaluable [3].
- https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
- https://amelia.ai/blog/build-a-team-of-ai-ethics-experts/
- https://www.salesforce.com/news/stories/salesforce-debuts-ai-ethics-model-how-ethical-practices-further-responsible-artificial-intelligence/
> However, my main concern is whether or not AI ethics teams will be effective at helping promote ethical practices.
That surely depends on the company. Just speculating, but if that team gets fired because the bosses don't like what it (possibly for good reasons) recommends, then I don't see many ways for it to be effective.
IntelArtiGen t1_jcdw6ih wrote
Reply to comment by Username912773 in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
It seems that everything is explained quite clearly on the website. The challenge is a mix of data processing and machine learning; the hardest part is probably the data processing: (1) flatten the scrolls, (2) detect ink. They provide a dataset for the ink-detection task on Kaggle.
janpaul123 t1_jcdw68q wrote
Reply to comment by Username912773 in [N] A $250k contest to read ancient Roman papyrus scrolls with ML by nat_friedman
Yes! We've released the CT scans (model input) and binary ink mask (ground truth) for 3 fragments of scrolls.
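For anyone wondering what that looks like concretely, here is a rough sketch of the ink-detection half. File names, layout, and the number of slices are assumptions rather than the actual Kaggle structure: the idea is just to treat each pixel's stack of CT-slice intensities as a feature vector and predict the binary ink mask.

```python
# Minimal sketch of the ink-detection task (paths and file names are assumed,
# not the real competition layout). Input: a stack of CT slices per fragment.
# Label: a binary ink mask. Goal: predict ink from per-pixel slice intensities.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

slice_paths = [f"fragment1/surface_volume/{i:02d}.tif" for i in range(65)]
volume = np.stack(
    [np.asarray(Image.open(p), dtype=np.float32) for p in slice_paths], axis=-1
)  # shape (H, W, num_slices)
mask = np.asarray(Image.open("fragment1/inklabels.png").convert("L")) > 0  # (H, W)

# Flatten to (num_pixels, num_slices) and fit a per-pixel classifier on a
# subsample; a serious attempt would use a CNN over local patches instead.
X = volume.reshape(-1, volume.shape[-1]) / 65535.0
y = mask.reshape(-1)
idx = np.random.default_rng(0).choice(len(y), size=min(200_000, len(y)), replace=False)
clf = LogisticRegression(max_iter=500).fit(X[idx], y[idx])

# Ink probability map for visual inspection.
pred = clf.predict_proba(X)[:, 1].reshape(mask.shape)
```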
[deleted] OP t1_jcdvsjo wrote
Empty-Revolution7570 OP t1_jcdv1nt wrote
Reply to comment by MysteryInc152 in [P] Multimedia GPT: Can ChatGPT/GPT-4 be used for vision / audio tasks just by prompt engineering? by Empty-Revolution7570
No, it understands images through other models on Hugging Face, and outputs images with diffusers or OpenAI's DALL-E.
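To illustrate the general pattern, here is a hedged sketch (not the repo's actual code; the model names, prompts, and file paths are assumptions): a Hugging Face captioning model turns the input image into text for the chat model, and diffusers turns the chat model's text back into an image.

```python
# Sketch of the "GPT orchestrates vision models" pattern described above.
# Assumes the pre-1.0 openai client and an OPENAI_API_KEY environment variable.
import os
import openai
from transformers import pipeline
from diffusers import StableDiffusionPipeline

openai.api_key = os.environ["OPENAI_API_KEY"]

# 1. "Understand" the input image by captioning it with a Hugging Face model.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner("input.jpg")[0]["generated_text"]

# 2. Let the chat model reason over the caption as plain text.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"The user uploaded an image showing: {caption}. "
                   "Suggest a short prompt for a variation of this image.",
    }],
)["choices"][0]["message"]["content"]

# 3. Generate the output image from the model's text with diffusers.
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
sd(reply).images[0].save("output.png")
```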
MysteryInc152 t1_jcduvhn wrote
Reply to comment by Empty-Revolution7570 in [P] Multimedia GPT: Can ChatGPT/GPT-4 be used for vision / audio tasks just by prompt engineering? by Empty-Revolution7570
I'm sorry, maybe I wasn't clear, but you obviously have API access to GPT-4, right? Does this access include an API call to their vision model? Or are you sending the images straight to BLIP and the like?
Username912773 t1_jcdu408 wrote
Well, is there an existing dataset to actually train a model off of?
Empty-Revolution7570 OP t1_jcdtuff wrote
Reply to comment by MysteryInc152 in [P] Multimedia GPT: Can ChatGPT/GPT-4 be used for vision / audio tasks just by prompt engineering? by Empty-Revolution7570
Yes, I included all the VFMs and added a few more on top of those, such as OpenAI Whisper. Still exploring how to incorporate video models.
Philpax t1_jcdtj6o wrote
Reply to comment by 1F9 in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
Agreed. It also complicates productionising the model if you're reliant on features that are only available in the Python interface. Of course, there are ways around that (like just rewriting the relevant bits), but it's still unfortunate.
MysteryInc152 t1_jcdthob wrote
Reply to [P] Multimedia GPT: Can ChatGPT/GPT-4 be used for vision / audio tasks just by prompt engineering? by Empty-Revolution7570
Are you using GPT-4's vision model? Or are there separate assortments of visual foundation models?
Snoo58061 t1_jcdtg08 wrote
Reply to comment by nopainnogain5 in [D] To those of you who quit machine learning, what do you do now? by nopainnogain5
Well, I started off doing my time in the Data Warehouse. I was hoping I could retire to the Data Lakehouse. Now it's being drained by a Data Pipeline and the rest is slowly floating off into The Cloud.
Amusingly they recently changed my team name to Data Integration and Engineering. The DIE team.
Philpax t1_jcdt8r0 wrote
Reply to comment by ReginaldIII in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
Oh no, someone used a state of the art language model to summarise some text instead of doing it themselves. However will we live with this incalculable slight against norms of discussion on Reddit?
RemarkableGuidance44 t1_jcdsprg wrote
Reply to comment by ivalm in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Fine-tune it yourself for medical... I have it fine-tuned for software and it does a great job.
I_will_delete_myself t1_jcdsoy6 wrote
Reply to comment by jloverich in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
Lol, from the perspective of a corporation, AI ethics probably seems like just paying philosophers. There are already plenty on YouTube and social media.
nopainnogain5 OP t1_jcdshkg wrote
Reply to comment by Snoo58061 in [D] To those of you who quit machine learning, what do you do now? by nopainnogain5
I like how you phrased it "this week"
currentscurrents t1_jcdsf9u wrote
Reply to comment by Alimbiquated in Modern language models refute Chomsky’s approach to language [R] by No_Draft4778
The brain doesn't have any built-in knowledge about language, but it has an advantage: it's trying to communicate with other brains.
It is fundamentally impossible to understand human language without understanding how humans think. Language isn't a structured formal thing; it's more like the fuzzy interactions of two neural networks.
Humans already know how other humans think - plus they have a shared world environment to ground the symbols in. LLMs have to learn to approximate both of those.
Snoo58061 t1_jcdrz12 wrote
Never quite made it to working on ML professionally. This week I'm a 'Data Engineer'.
namey-name-name OP t1_jcdqxpf wrote
Reply to comment by Spziokles in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
I agree that AGI is an important concern. However, my main concern is whether or not AI ethics teams will be effective at helping promote ethical practices. For one thing, if a company can just fire the ethics team whenever they don't like what it's saying, then how would it actually be able to make any difference when it comes to AGI? In addition, I have also heard anecdotes from others that some people in AI ethics are somewhat out of touch with actual ML engineering/research, which makes some of their suggestions inapplicable (admittedly they're just anecdotes, so I take them with a grain of salt, as this may not generally be true, but I think it's a concern worth considering). Is there any way that AI ethics teams can overcome these hurdles to help make AGI safe?
Edit: also wanted to note that I don’t work in the field, if I got anything wrong please let me know!
twilight-actual t1_jcdqi1j wrote
Spziokles t1_jcdq0za wrote
What value do AI ethics teams add?
> Summary.
Artificial intelligence poses a lot of ethical risks to businesses: It may promote bias, lead to invasions of privacy, and in the case of self-driving cars, even cause deadly accidents. Because AI is built to operate at scale, when a problem occurs, the impact is huge. Consider the AI that many health systems were using to spot high-risk patients in need of follow-up care. Researchers found that only 18% of the patients identified by the AI were Black—even though Black people accounted for 46% of the sickest patients. And the discriminatory AI was applied to at least 100 million patients.
> The sources of problems in AI are many. For starters, the data used to train it may reflect historical bias. The health systems’ AI was trained with data showing that Black people received fewer health care resources, leading the algorithm to infer that they needed less help. The data may undersample certain subpopulations. Or the wrong goal may be set for the AI. Such issues aren’t easy to address, and they can’t be remedied with a technical fix. You need a committee—comprising ethicists, lawyers, technologists, business strategists, and bias scouts—to review any AI your firm develops or buys to identify the ethical risks it presents and address how to mitigate them. This article describes how to set up such a committee effectively.
Next door was an article, "A Practical Guide to Building Ethical AI", which I did not read but you might want to.
Another article, "AI Ethics: What It Is And Why It Matters", also mentions bias, privacy, and "mistakes which can lead to anything from loss of revenue to death", as well as environmental impact (AIs as large resource consumers).
I feel these are valid concerns for AI. The stakes become higher the closer we get to AGI. Once we create such a powerful entity which outsmarts us in every way, it's probably too late to apply a safety patch or make sure its goals are aligned with ours. Here's a quick intro: Robert Miles - Intro to AI Safety, Remastered
So we are racing towards ever more powerful A(G)I, and being first or having the strongest model promises profit. Adding safety work may be costly and slow things down, so this part might get neglected. The danger of this scenario is that we might end up with an unleashed, uncontrollable being which might be resistant to late efforts to fix it.
Like the other guy, I hate it when ChatGPT refuses to comply with some requests, and I find some of these rails unnecessary. But overall I'm even more worried that we let our guard down at the last mile. We had better get this right, since, as Miles said, we might only get one shot.
[deleted] OP t1_jcdnva2 wrote
Reply to comment by 1F9 in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
[deleted]
jloverich t1_jcdnq8k wrote
They seem to be punted as soon as you have a good product you want to sell that clashes with the ethics committee. It seems like the ethicists might be a bit too ethical for businesses. Axon, which does AI work (and tasers) for police forces, had a bunch of their ethics team resign, I believe.
Smallpaul t1_jcdnffe wrote
Reply to comment by VelveteenAmbush in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Software patents assigned to a public trust are a different idea than randomly suing people.
It might be set up to only sue companies that are not open.
Covered_in_bees_ t1_jcdwk7t wrote
Reply to comment by dangpzanco in [N] PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever by [deleted]
I think it is a typo and is supposed to say Python 3.7.