Recent comments in /f/MachineLearning
ML4Bratwurst t1_je925ms wrote
Yeah, it's starting to get annoying because all the cool subs I joined are now full of annoying ChatGPT kids
disbeam t1_je920cv wrote
What some people have done is use Azure Cognitive Search as a precursor to the LLM.
You use Cognitive Search to extract information from your organisation's own documentation and ask the LLM to only provide an answer from the details found in the search, and otherwise to respond that it doesn't know. It then answers complete with references. Having seen it in action with one of our customers, I've been quite impressed.
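Roughly, the pattern looks like this (a minimal sketch, assuming the azure-search-documents and openai Python packages; the endpoint, index name, and "content" field are placeholders):

```python
# A minimal sketch of the pattern: retrieve passages with Azure Cognitive
# Search, then ask the LLM to answer only from those passages.
# Assumes the azure-search-documents and openai packages; the endpoint,
# index name, and "content" field are placeholders.
import openai
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

openai.api_key = "<your-openai-key>"
search = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="org-docs",  # hypothetical index over internal documentation
    credential=AzureKeyCredential("<search-api-key>"),
)

def grounded_answer(question: str) -> str:
    # Pull the top-ranked passages for the question.
    hits = search.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in hits)

    # Instruct the model to answer from the context or admit it doesn't know.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "Answer only using the provided context, and cite it. "
                "If the context doesn't contain the answer, say you don't know."
            )},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

The system prompt is what does the grounding: the model is told to refuse rather than guess when the retrieved passages don't contain the answer.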
Swordfish418 t1_je91o14 wrote
For once, a topic so important that no amount of hype feels too much.
cc-test t1_je919kc wrote
Reply to comment by PussyDoctor19 in [D] What do you think about all this hype for ChatGPT? by Dear-Vehicle-3215
5 Ways You Can Run a SIDE HUSTLE To Make PASSIVE INCOME With AI....
TheAdvisorZabeth t1_je90pzt wrote
Reply to comment by DigThatData in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
Hi!~ <3
umm... I am just an uneducated idiot, but I've been having a lot of ideas lately and I think some of them might be real Science too.
But I have no credentials or anyone to discuss ideas with or to help fact-check me about stuff I don't have nearly the Time to Learn.
You seem like a kind person, (and like you might have more Time than Puzzles with which to fruitfully spend that Time on.), do you think you might care to chat about my ideas? Or possibly offer any sincere advice that is a bit more useful to an autistic puppy than: "That is not a real Theory."?
I never used Tumblr before, but Neil Gaiman made a point to explicitly state that he hangs out there a lot; and since there's no living Author who I have more respect for, I recently began posting my ideas there.
From my perspective I am writing 100% Non-Fiction.
From my perspective I am just a very strange Harmless-Holistic-Aberrant; who managed to dumb-luck their way into figuring out how to gain "Coherent-Root-Access-To-My-Own-Brain".
I am being fully sincere.
I would just ask that if you (or anyone else) thinks that I am just being a stupid Fool, that you please tell me gently, I am pretty sensitive.
Love ya either way!~
Keep on doin your awesome Science stuff no matter what! Cause it's just the coolest thing! (hehe, I wonder if they got that joke?)
hugs!~~~
bye!^!^(for, now...)
OH! I almost forgot to actually Hand You One End Of The Thread lol~
PussyDoctor19 t1_je90fas wrote
I feel like all the crypto-influencers moved on to LLMs. They don't care about accuracy or value, just views and how that translates into cash.
PussyDoctor19 t1_je90bxq wrote
Reply to comment by SFDeltas in [D] What do you think about all this hype for ChatGPT? by Dear-Vehicle-3215
Isn't that a different sub from this one?
icedrift t1_je8zqd2 wrote
Reply to [P] Imaginary programming: implementation-free TypeScript functions for GPT-powered web development by xander76
This is really cool! Maybe I'm lacking creativity, but why bother generating imaginary functions and introducing risk that they aren't deterministic when you could just hit OpenAI's API for the data? For example in your docs you present a feature for recommending column names for a given table. Why is the whole function generated? Wouldn't it be more reliable to write out the function and use OAI's API to get the recommended column names?
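Concretely, something like this (a rough sketch, assuming the openai package; the function name, model, and prompt are all made up for illustration):

```python
# A rough sketch of the alternative described above: a hand-written function
# that asks the OpenAI API for column names instead of being generated itself.
# Assumes the openai package; model, prompt, and function name are illustrative.
import json
import openai

openai.api_key = "<your-api-key>"

def recommend_column_names(table_description: str) -> list[str]:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # as close to deterministic as the API gets
        messages=[{
            "role": "user",
            "content": "Suggest column names for this table. Reply with only "
                       f"a JSON array of strings. Table: {table_description}",
        }],
    )
    # Raises if the model doesn't return valid JSON; real code would validate.
    return json.loads(response["choices"][0]["message"]["content"])
```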
Dear-Vehicle-3215 OP t1_je8zlsv wrote
Reply to comment by Maximus-CZ in [D] What do you think about all this hype for ChatGPT? by Dear-Vehicle-3215
Touché
dampflokfreund t1_je8zlkp wrote
Reply to comment by Purplekeyboard in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
We don't have a proper definition of consciousness nor a way to test it either, by the way.
Deep-Station-1746 t1_je8yb60 wrote
nausea, noun. a feeling of loathing or disgust.
Maximus-CZ t1_je8ya68 wrote
>Aren’t you tired about read ChatGPT/GPT everywhere
so your solution is to start a discussion about it?
SFDeltas t1_je8y9n7 wrote
I consulted, ahem, a Magic 8 Ball for some responses to your discussion topic.
"Ah, the weekly ChatGPT hype thread. It's become a ritual at this point. 🙄"
"Not another ChatGPT discussion! Can we please just focus on other ML advancements?"
"Honestly, it's not the technology that's the problem, it's people overhyping it. ChatGPT has its uses, but it's not going to replace every job out there."
"It's frustrating that people outside the field make such strong statements without understanding the limitations of current AI systems."
"You know it's bad when your mom starts asking you about ChatGPT and how it's going to change the world."
"Hype is just part of the game. Remember the craze around deep learning a few years back? This too shall pass."
"I can't wait for the next big thing in ML to come along so we can finally move on from ChatGPT."
"The hype is annoying, but you have to admit that ChatGPT is a major milestone in NLP. Let's not completely dismiss its achievements."
"ChatGPT has its fair share of fans and critics on this sub, but it's important to stay grounded and remember that it's just one tool among many."
"I'm just waiting for the day when the ChatGPT hype dies down and we can go back to our regular, insightful discussions on r/machine_learning."
dubbitywap t1_je8xewl wrote
Do you have a git repository that we can take a look at?
tysam_and_co t1_je8wno6 wrote
Reply to comment by WildlifePhysics in [D] FOMO on the rapid pace of LLMs by 00001746
This will be one of the key things that I think will keep people up and going in their respective subfields. Perhaps not ironically at all, DougDoug, a popular YT creator, has a great video on how to be a content creator that I find pretty exceptional and well-targeted to the current state of ML research. It includes some unique strategies for picking a fusion of topics that only the person doing the research can compete well in (while still contributing somewhat to the field), if one assumes that the research or software created is content that people might want.
It's helped me as a researcher in producing research, I've recommended it to a number of people, and he just won a decently-sized award recognizing the way he's doing things. It may not seem related at all, but it has been one of the best guidebooks for me so far, and it has not failed me quite yet.
9182763498761234 t1_je8wfs4 wrote
Reply to comment by dreaming_geometry in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
Work in a niche field instead! There are hundreds of smaller topics in ML that are still unexplored, with only a couple of people working on each. I’m working on one of those and it is awesome. The field is progressing, but slowly enough that I can make a valuable contribution without getting scooped all the time.
MrLeylo t1_je8weob wrote
Reply to "[D]" Is wandb.ai worth using? by frodo_mavinchotil
It is.
valjestir t1_je8w22c wrote
Reply to comment by LetGoAndBeReal in [D] The best way to train an LLM on company data by jaxolingo
RAG is exactly what OP needs here. I don’t think Langchain has any way to connect to Azure or Snowflake though, so they still need some way to extract that data.
A project I’m working on helps with ETL for retrieval augmented generation: https://github.com/ai-sidekick/sidekick
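For the Snowflake side, the extraction could start as simply as this (a minimal sketch, assuming snowflake-connector-python; the connection values and the support_tickets table are placeholders):

```python
# A minimal sketch of the extraction step: pull rows out of Snowflake into
# plain-text documents that a retrieval pipeline can chunk, embed, and index.
# Assumes snowflake-connector-python; connection values and the
# support_tickets table are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()
try:
    cur.execute("SELECT id, body FROM support_tickets")  # hypothetical table
    # One retrievable document per row; a real pipeline would chunk and
    # embed these before indexing them for the LLM to search over.
    docs = [{"id": row_id, "text": body} for row_id, body in cur]
finally:
    cur.close()
    conn.close()
```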
utopiah t1_je8vryb wrote
Reply to [P] Imaginary programming: implementation-free TypeScript functions for GPT-powered web development by xander76
It's pretty cool, and thanks for providing the playground; I wouldn't have bothered without it. I think it's very valuable, but it's also quite costly, both economically and computationally, while creating privacy risks (all your data going through OpenAI). So in some situations I can imagine it being quite powerful, but in others it's an absolute no. That being said, there are other models, e.g. Alpaca or SantaCoder or BLOOM (I just posted on /r/selfhosted minutes ago about the HuggingFace/Docker announcement enabling us to run Spaces locally), that might enable us to follow the same principle, arguably with different quality, without the privacy risks. Have you considered relying on another "runtime"?
drizel t1_je8v9cj wrote
Reply to comment by idontcareaboutthenam in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
GPT-4 can help parse millions of papers and uncover new optimizations or other improvements much faster than we could without it. Not only that, but you can brainstorm ideas with it.
sparkpuppy t1_je8v49k wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hello! Super-n00b question, but I couldn't find an answer on Google. When an image generation model has "48M parameters", what does the term "parameter" mean in this sentence? Tags, concepts, image-word pairs? Does the meaning of "parameter" vary from model to model (in the context of image generation)?
braindead_in t1_je8ucis wrote
Reply to comment by currentscurrents in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
In Nonduality, understanding or knowledge is the nature of pure consciousness, along with existence and bliss. I think of it as an if-then statement in programming. Once a program enters into an if condition, it understands and knows what has to be done next.