Recent comments in /f/MachineLearning
ThePogromist OP t1_jdvay6c wrote
Reply to comment by i_am__not_a_robot in My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
>Do you know that words have more than a single meaning? You somehow managed to write this comment, so I'm pretty sure you can use Google, DeepL, or whatever other translator.
>
>> complete disorder, havoc ◆ "And all of this, it should be noted, she somehow managed without any racket or *pogrom*, and always cheerfully." D.V. Grigorovich, «Недолгое счастье» ("Brief Happiness"), 1884 [Russian National Corpus]
>
>> (figuratively) mayhem, chaos, disorder
>
>> https://en.wiktionary.org/wiki/%D0%BF%D0%BE%D0%B3%D1%80%D0%BE%D0%BC
>
>> computing jargon, humorous: the same as «программист» ("programmer") ◆ No usage example available (see the guidelines).
>
>> https://ru.wiktionary.org/wiki/%D0%BF%D0%BE%D0%B3%D1%80%D0%BE%D0%BC%D0%B8%D1%81%D1%82
>
>https://letmegooglethat.com/?q=jargonism
ThePogromist OP t1_jdvakpe wrote
Reply to comment by sEi_ in My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
Do you know that words have more than a single meaning? You somehow managed to write this comment, so I'm pretty sure you can use Google, DeepL, or whatever other translator.

> complete disorder, havoc ◆ "And all of this, it should be noted, she somehow managed without any racket or *pogrom*, and always cheerfully." D.V. Grigorovich, «Недолгое счастье» ("Brief Happiness"), 1884 [Russian National Corpus]
> (figuratively) mayhem, chaos, disorder
> https://en.wiktionary.org/wiki/%D0%BF%D0%BE%D0%B3%D1%80%D0%BE%D0%BC
> computing jargon, humorous: the same as «программист» ("programmer") ◆ No usage example available (see the guidelines).
> https://ru.wiktionary.org/wiki/%D0%BF%D0%BE%D0%B3%D1%80%D0%BE%D0%BC%D0%B8%D1%81%D1%82
muskoxnotverydirty t1_jdvak20 wrote
Reply to comment by Borrowedshorts in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
We've already seen similar prompts, such as telling it to say "I don't know" when it doesn't know, and then priming it with examples of it saying "I don't know" in response to nonsense. Maybe there's something to the added work of getting an output and then iteratively self-critiquing it to reach a better final output.
I wonder if they could be using this idea to automatically and iteratively generate and improve their training dataset at scale, which would create a sort of virtuous cycle of improve dataset -> improve LLM -> repeat.
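A minimal sketch of what such a critique-and-revise loop could look like, assuming a generic `llm()` helper that wraps whatever chat-completion API is in use (the helper and the prompts here are hypothetical, not from any specific paper):

```python
def llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion API."""
    raise NotImplementedError

def answer_with_self_critique(question: str, rounds: int = 2) -> str:
    # First pass: answer normally.
    answer = llm(question)
    for _ in range(rounds):
        # Ask the model to critique its own answer...
        critique = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "List any factual errors or unsupported claims in this answer."
        )
        # ...then revise the answer in light of that critique.
        answer = llm(
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer, fixing the issues above."
        )
    return answer
```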
pale2hall t1_jdvaify wrote
Reply to comment by CormacMccarthy91 in [D] Simple Questions Thread by AutoModerator
Data In -> Data Out
I don't think any religion is being reinforced on them, but think of it this way:
You know how mad some super-religious extremists get when you even use words that imply gay people are normal, or that trans people exist (and aren't just mentally ill)?
Imagine if people got that mad every time someone said "oh my god" or "JFC", etc. This imaginary group would be claiming "micro-religious-aggression" all. day. long.
I think Abrahamic religions are soooo ubiquitous in the training set that the AI is likely to just go with the flow on it.
[deleted] t1_jdvab8v wrote
[removed]
pale2hall t1_jdva40w wrote
Reply to comment by fishybird in [D] Simple Questions Thread by AutoModerator
Great point! I actually really enjoy AIExplained's videos on this. There are a bunch of different ways to measure 'consciousness', and many of them are passed by GPT-4, which really just means we need new tests / definitions for AI models.
aozorahime t1_jdv9sd2 wrote
Reply to [D] Keeping track of ML advancements by Anis_Mekacher
I only get updates from Twitter, since I follow some prominent AI people there. If I see an interesting topic related to my research interests, I read the papers (mostly skimming) and also star their repos. Whenever I have free time (mostly weekends), I focus on studying their findings.
muskoxnotverydirty t1_jdv9m5v wrote
Reply to comment by was_der_Fall_ist in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
"Temperature" governs this behavior, doesn't it? I was under the impression that when you set temperature to zero, you get a deterministic output because it always selects the most probable token.
pale2hall t1_jdv97t1 wrote
That's helpful. I built a music-rec prompt / prompt generator, and I had to use Spotify's API and some fuzzy matching to double-check whether it hallucinated.
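A minimal sketch of that kind of check using difflib from the standard library (the threshold and example names are made up; a real version would compare against whatever the Spotify search endpoint returns):

```python
from difflib import SequenceMatcher

def looks_real(suggested: str, catalog_names: list[str], threshold: float = 0.85) -> bool:
    """Return True if a suggested track fuzzily matches anything the API returned."""
    return any(
        SequenceMatcher(None, suggested.lower(), name.lower()).ratio() >= threshold
        for name in catalog_names
    )

# Hypothetical: names returned by a Spotify search for the suggested track.
search_results = ["Bohemian Rhapsody - Remastered 2011", "Bohemian Rhapsody"]
print(looks_real("Bohemian Rhapsody", search_results))  # True -> probably not hallucinated
```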
darkbluetwilight OP t1_jdv9560 wrote
Reply to comment by supreethrao in [D]Suggestions on keeping Llama index cost down by darkbluetwilight
You are a gentleman! There doesn't appear to be any documentation in the llama-index docs yet, but there is support added via the langchain module. It looks like I can `from langchain.llms import OpenAIChat` and then use this to build a new index with the "gpt-3.5-turbo" model. I'll give this a go and see if it works. I'll look into TreeIndex too; reading the docs on these different indexing tools was getting a bit too complex for me.
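A rough sketch of what that might look like, based on the import mentioned above and assuming the llama-index / langchain APIs as they existed at the time (both libraries have since renamed these, so treat the exact names as assumptions):

```python
from langchain.llms import OpenAIChat  # later renamed ChatOpenAI in langchain
from llama_index import GPTSimpleVectorIndex, LLMPredictor, SimpleDirectoryReader

# Point llama-index at the cheaper chat model instead of the davinci default.
llm_predictor = LLMPredictor(llm=OpenAIChat(model_name="gpt-3.5-turbo"))

documents = SimpleDirectoryReader("./docs").load_data()  # hypothetical docs folder
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)

response = index.query("What does this document cover?")  # hypothetical query
print(response)
```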
1azytux OP t1_jdv913f wrote
Reply to comment by aozorahime in Recent advances in multimodal models: What are your thoughts on chain of thoughts models? [D] by 1azytux
I'm actually not using Discord for the time being, but maybe Reddit messaging will work :) I can DM you.
nonotan t1_jdv8hy1 wrote
Reply to comment by vintergroena in [D] Can we train a decompiler? by vintergroena
I can't speak for GPT-4, but in my experience with ChatGPT, I would definitely not say it is better with code. It's just absurdly, terribly, unbelievably bad at math. It's a bit better at dealing with code, but that doesn't mean it's good; you're just comparing it against its weakest area. It's not really capable of generating code that does anything even a little complex without heavy guidance: pointing out its mistakes and getting it to make revision after revision (and even that is non-trivial, since it tends to just start generating completely different programs with completely different problems instead).
That being said, I can definitely believe it could do okay at decompilation. It's a comparatively easy task in general, and the "trickiest" bit (interpreting what the program is supposed to be doing, so as to have the context to name variables, etc.) feels like the kind of thing it'd perform surprisingly well at. Getting a general "vibe" and sticking with it, and translating A to B, it tends to do okay at. It's when it needs to generate entirely novel output that must fulfill multiple requirements at once that it starts failing miserably.
enn_nafnlaus t1_jdv8gdn wrote
Reply to comment by yaosio in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
If you want to make life hard on an LLM, give it a spelling task ;)
The public seems to think these tasks should be easy for them; after all, they're "language models", right?
People forget that these models don't see letters but rather tokens, and there can be a variable number of tokens per word. Tokens can even include the spaces between words. The model has to learn the letters of every single token, in order, and how tokens combine on spelling tasks. And it's not as if humans write that information out much (since we just look at the letters).
It's sort of like giving a vocal task to a deaf person or a visual task to a blind person.
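To make the token point concrete, here's a small sketch using the tiktoken library to show how words split into tokens (the exact splits depend on the encoding; `cl100k_base` is the one used by the GPT-3.5/GPT-4 family):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["spelling", " spelling", "antidisestablishmentarianism"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r} -> {pieces}")

# A word may be one token or several, and a leading space is part of the token,
# so the model never "sees" individual letters unless a token happens to be one.
```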
aozorahime t1_jdv86p7 wrote
Reply to comment by 1azytux in Recent advances in multimodal models: What are your thoughts on chain of thoughts models? [D] by 1azytux
Yes, I'm interested in multimodal models; I think I want to use this topic for my Ph.D. plan. I'm actually still a master's student :)) I don't know why, but with all the various new models I get confused about what I should do or improve for CoT, probably because I've only read a few papers, I guess.
Sure, let's talk via DM or Discord(?). I'm interested in hearing about your experience in this area.
nonotan t1_jdv7m7j wrote
Reply to comment by matthkamis in [D] Can we train a decompiler? by vintergroena
You could certainly do that to some extent, but I suspect it wouldn't generalize very well to programs that do things significantly different from anything in the training set. Transforming the syntax alone would probably be straightforward enough, but the parts that need more "interpretation" of what's going on (such as assigning plausible variable/function/class names, never mind writing comments) are something I just can't see a standard supervised model handling particularly gracefully, whereas that's one of the areas where LLMs excel.
pm_me_your_pay_slips t1_jdv748e wrote
Reply to comment by yaosio in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
This is literally what gdb (Greg Brockman) did during the GPT-4 launch livestream.
Frumpagumpus t1_jdv6yrq wrote
Reply to [D] Can we train a decompiler? by vintergroena
A decompiler is child's play; train a model that reconstructs servers and databases from their API endpoints.
[deleted] t1_jdv6uhe wrote
Reply to [D] ICML 2023 Reviewer-Author Discussion by zy415
[deleted]
pm_me_your_pay_slips t1_jdv6l50 wrote
Reply to comment by ellev3n11 in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
While the paper doesn't mention any code, there is no practical difference: replace the RL environment with a compiler/interpreter, and action selection with prompt engineering.
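A toy sketch of that substitution (the `llm()` helper and prompts are hypothetical): the interpreter plays the role of the environment, and its error messages play the role of the feedback signal.

```python
def llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion API."""
    raise NotImplementedError

def generate_with_interpreter_feedback(task: str, max_tries: int = 3) -> str:
    code = llm(f"Write a Python script for this task:\n{task}")
    for _ in range(max_tries):
        try:
            # The interpreter acts as the "environment": run the candidate code.
            exec(compile(code, "<candidate>", "exec"), {})
            return code  # ran without raising; accept it
        except Exception as err:
            # The error acts as feedback for the next "action": a revised prompt.
            code = llm(
                f"Task: {task}\nYour code:\n{code}\n"
                f"It failed with: {err!r}\nReturn a corrected version."
            )
    return code
```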
MjrK t1_jdv6h8l wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
[Submitted on 24 Feb 2023 (v1), last revised 8 Mar 2023 (this version, v3)]...
> LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses.
sEi_ t1_jdv5u13 wrote
Reply to My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
OP username is pretty offensive imo.
"A pogrom is a violent riot incited with the aim of massacring or expelling an ethnic or religious group, particularly Jews."
OP, any clarification?
IDe- t1_jdv5f5b wrote
Reply to comment by bpooqd in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I mean, it is a (higher-order) Markov chain.
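In the standard sense, yes: a model with a finite context window of k tokens conditions each next-token distribution only on the previous k tokens, so the sequence probability factorizes as (a textbook formulation, not from the thread):

```latex
P(x_1, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_{t-k}, \dots, x_{t-1})
```

which is exactly the defining property of an order-k Markov chain, with the "state" being the window of the last k tokens.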
NoLifeGamer2 t1_jdv52o0 wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
This is basically bootstrapping for LLMs, right?
lhenault OP t1_jdv515a wrote
Reply to comment by Educational_Ice151 in [P] SimpleAI : A self-hosted alternative to OpenAI API by lhenault
Thank you!
[deleted] t1_jdvbnkx wrote
Reply to comment by pale2hall in [D] Simple Questions Thread by AutoModerator
[removed]