Recent comments in /f/MachineLearning
[deleted] t1_jdvz3x0 wrote
fishybird t1_jdvy0h1 wrote
Reply to comment by pale2hall in [D] Simple Questions Thread by AutoModerator
Well yeah, that's the whole problem! Why are we even calling them "tests for consciousness"? Tests for consciousness don't exist, and the only reason we are using the word "consciousness" is pure media hype. If an AI reporter even uses the word "conscious" I immediately know not to trust them. It's really sad to see that anyone, much less "experts", is seriously discussing whether or not transformers can be conscious.
currentscurrents t1_jdvxu6g wrote
Reply to comment by s0n0fagun in [D] Can we train a decompiler? by vintergroena
Those languages don't compile to machine code, they compile to a special bytecode that runs in a VM.
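For a concrete picture (Python used purely as an example here): you can dump the VM bytecode directly, and that bytecode, not native machine code, is what a decompiler for such languages would start from.

```python
import dis

def add_one(x):
    return x + 1

# Show the CPython VM bytecode for the function. This is what actually runs
# inside the interpreter -- there is no native machine code to decompile.
dis.dis(add_one)
# Typical output (exact opcodes vary by CPython version):
#   LOAD_FAST     x
#   LOAD_CONST    1
#   BINARY_OP     (+)      # BINARY_ADD on older versions
#   RETURN_VALUE
```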
currentscurrents t1_jdvxga6 wrote
Reply to comment by ultraminxx in [D] Can we train a decompiler? by vintergroena
Possibly! But it also seems like a good sequence-to-sequence translation problem, just line up the two streams of tokens and let the model figure it out.
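Roughly this kind of setup, as a toy sketch (assuming Hugging Face transformers; t5-small and the single asm/source pair are just placeholders, not a working decompiler):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Toy parallel pair: disassembly on the input side, source code on the output side.
asm = "push rbp ; mov rbp, rsp ; mov eax, edi ; add eax, 1 ; pop rbp ; ret"
src = "int add_one(int x) { return x + 1; }"

tokenizer = AutoTokenizer.from_pretrained("t5-small")      # placeholder model choice
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(asm, return_tensors="pt")
labels = tokenizer(src, return_tensors="pt").input_ids

# Standard seq2seq training step: the model learns to map the token stream of
# the compiled form to the token stream of the source form.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```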
Various_Ad7388 t1_jdvxeer wrote
Reply to comment by Matthew2229 in [D] Simple Questions Thread by AutoModerator
Hey thanks Matthew! Do you know why PyTorch has gained popularity? Is it just the hot new thing, or are there actual features and aspects that are dramatically better?
alexmin93 t1_jdvwpu9 wrote
Reply to comment by Avastor_Neretal in My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
The ChatGPT website already feeds it back; that's how the model knows what you've asked before. So there's little added value. But I might steal some code to make a poor man's API (aka a Selenium script to put in the prompt and click enter).
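Something along these lines, maybe. A rough Selenium sketch; the selectors and the fixed wait are hypothetical and would have to match whatever the ChatGPT page actually renders:

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://chat.openai.com/")  # assumes you're already logged in via a browser profile

def ask(prompt):
    # Hypothetical selectors -- the real page structure changes often.
    box = driver.find_element(By.CSS_SELECTOR, "textarea")
    box.send_keys(prompt)
    box.send_keys(Keys.ENTER)
    time.sleep(20)  # crude wait for the reply to finish streaming
    messages = driver.find_elements(By.CSS_SELECTOR, ".markdown")  # hypothetical class
    return messages[-1].text if messages else ""

print(ask("Summarize the transformer architecture in two sentences."))
```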
Kush_McNuggz t1_jdvwik4 wrote
Reply to comment by Matthew2229 in [D] Simple Questions Thread by AutoModerator
Ah ok thanks, I see now. I didn't know the correct term for fuzzy classification but that's what I was trying to describe.
sineiraetstudio t1_jdvvvdb wrote
Reply to comment by was_der_Fall_ist in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
... that's not what's happening though? The calibration error is causing it to increase its confidence in low-accuracy answers and decrease it in medium-to-high-accuracy answers, making it more likely to output wrong answers. Seems like maybe you're confusing it with using a different sampler? Something like top-p already does what you mentioned.
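For reference, top-p (nucleus) sampling is just: keep the smallest set of tokens whose cumulative probability reaches p, renormalize, and sample from that set. A bare numpy sketch:

```python
import numpy as np

def top_p_sample(probs, p=0.9, rng=np.random.default_rng()):
    """Sample a token index from `probs` using nucleus (top-p) sampling."""
    order = np.argsort(probs)[::-1]               # tokens by descending probability
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1   # smallest prefix with mass >= p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()  # renormalize within the nucleus
    return rng.choice(nucleus, p=nucleus_probs)

probs = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
print(top_p_sample(probs, p=0.8))  # samples only from the first two tokens
```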
Avastor_Neretal t1_jdvvsiy wrote
Reply to comment by alexmin93 in My ChatGPT Chrome Extension that saves conversations in .md files is finally approved by the Chrome Web Store. It's still and so will continue to be Open Source. [P] by ThePogromist
Just parsing until the end of the page, looking for the specific CSS elements, and copying them into the .md file.
Parsing code blocks and styling them into markdown flavor was quite messy, so I decided to handle them entirely separately.
But when you download the whole conversation, sadly it's presented as just clunky plain text.
So yeah, that's pretty much just a log backup. Though nothing prevents you from manually feeding this data back to ChatGPT.
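For anyone wanting the same thing outside the extension, a rough sketch over a saved copy of the page (the CSS selectors here are made up and would need to match the real DOM):

```python
from bs4 import BeautifulSoup

with open("conversation.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

messages_md = []
# Hypothetical selector for message containers -- adjust to the real page.
for message in soup.select("div.markdown"):
    for block in message.find_all("pre"):
        # Handle code blocks separately: wrap their text in fenced markdown.
        block.replace_with("\n```\n" + block.get_text() + "\n```\n")
    messages_md.append(message.get_text())

with open("conversation.md", "w", encoding="utf-8") as f:
    f.write("\n\n---\n\n".join(messages_md))
```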
tamilupk OP t1_jdvueqz wrote
Reply to comment by erelim in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
This is the API playground on the OpenAI website. https://platform.openai.com/playground?mode=chat
big_ol_tender t1_jdvu92g wrote
Thank you for posting this. I’ve raised this issue on a number of threads and even opened an issue on the alpaca repo. Everyone seems to ignore this, and I’m worried about downstream issues with these models; I would love an open-source alternative (I have been exploring making one myself).
Maleficent_Refuse_11 t1_jdvtsu2 wrote
Peleton011 t1_jdvtqq0 wrote
Reply to comment by SkinnyJoshPeck in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Unless I'm wrong somewhere, LLMs work with probabilities: they output the most likely response based on their training.
They could certainly show you how likely a given response is, and since the real papers would be part of the training set, answers the model is less sure of are statistically less likely to be true.
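That "how likely is this response" idea can be sketched with an open model: score a candidate citation by the average token log-probability the model assigns to it (GPT-2 below is purely a stand-in, not what OpenAI actually runs):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_logprob(text):
    """Average per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean negative log-likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item()

real = "Vaswani et al., 2017. Attention Is All You Need."
fake = "Smith et al., 2021. Quantum Gradient Descent for Sentient Transformers."
print(avg_logprob(real), avg_logprob(fake))  # higher (less negative) = more 'familiar' to the model
```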
erelim t1_jdvtf9o wrote
gnramires t1_jdvt5u2 wrote
Reply to comment by SkinnyJoshPeck in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I don't think this is accurate. Truth is clearly an important concept in human conversations, and it seems advanced models can learn and model truth as an abstract concept, and probably have an internal representation of reality that aids in their overall "job" of text completion.
Indeed, this alone does not guarantee that text completion will really reflect reality, the true state of the world (again, because text completion can happen in any context). However, with good prompts and with the aid of reinforcement learning, I believe the "neural circuits" and representations associated with truth, with distinguishing what's real from what isn't, and with building internal models of reality get exercised and prioritized. In this way, a chat model trained for truth-telling and encouraged toward it through prompts does have a genuine notion of truth and a capability to understand reality -- although clearly not perfect by any means yet.
ManInTheMirror2 t1_jdvsni8 wrote
Reply to [D] Can we train a decompiler? by vintergroena
Better question: can we train a cross-language IDE that lets you translate between different OOPLs?
iJeff t1_jdvsctx wrote
Reply to comment by [deleted] in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Although it can seem to work to some degree, this does seem to be the case. Bing Chat is generally a better option for this, because it provides citations for its claims. Visiting those citations can help you figure out whether it was merely hallucinating.
Gh0st1y t1_jdvr5qr wrote
Reply to comment by IDe- in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Yeah but so are we haha
C4ptainK1ng t1_jdvqoy2 wrote
Reply to comment by HatsusenoRin in [P] SimpleAI : A self-hosted alternative to OpenAI API by lhenault
You don't need to create an account for Postman. You can just skip the registration.
Gh0st1y t1_jdvqlgo wrote
Reply to comment by [deleted] in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I really do wonder if it's able to recognize its own uncertainty. It seems able to, from the OP and my own chats with it, but idk how I'd test it more rigorously.
MysteryInc152 t1_jdvqj47 wrote
Reply to comment by was_der_Fall_ist in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
That's not what I meant in regards to calibration. It's not about saying an answer x% of the time or not. It's about being able to correctly estimate gaps in knowledge.
Good calibration is what you want.
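And calibration in that sense is directly measurable: collect (stated confidence, was-it-correct) pairs and compute something like expected calibration error. A minimal sketch:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Gap between stated confidence and actual accuracy, averaged over confidence bins."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Weight each bin by how often it occurs.
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Toy numbers: the model says 0.9 a lot but is right only about half the time there.
conf = [0.9, 0.9, 0.9, 0.9, 0.6, 0.3]
hit  = [1,   0,   1,   0,   1,   0]
print(expected_calibration_error(conf, hit))
```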
antonivs t1_jdvqdpc wrote
Reply to comment by Cool_Abbreviations_9 in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
One thing I wonder about is how it arrives at those confidence scores. They're also presumably just the output of the language model, so why should they be correlated with the actual existence of the papers in question?
SkinnyJoshPeck t1_jdvpkge wrote
Reply to comment by Ok-Hunt-5902 in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
but as others are saying, who knows if those confidence scores aren't also just generated to look like confidence scores. We should ask it for a bunch of confidence scores for sources and see what the actual classification metrics are (quick sketch of that check below)... it could just be assuming that the further a source is from the top, the less likely it is to be a real source. I don't see how it could possibly have an understanding that isn't completely binary, since it seems to be generating the fake sources itself.
imo, it's a bit sketchy if it only identifies its own fake sources with anything less than 100% confidence. It implies basically two things: there are secondary models for true vs. false detached from its generative stuff (so why wouldn't it have something that says "this isn't a great response, maybe I should admit that"), and it seems to have the ability to deceive lol
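Here's what that check could look like once you label which sources actually exist: treat the stated confidence as a score for "this source is real" and compute ordinary classification metrics (sklearn assumed; the numbers below are made up):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score

# Toy labels: 1 = the cited paper actually exists, 0 = hallucinated.
is_real    = np.array([1, 1, 0, 0, 1, 0, 0, 1])
confidence = np.array([0.95, 0.9, 0.85, 0.8, 0.7, 0.6, 0.4, 0.3])

# If the confidence carried no real signal, AUROC would sit near 0.5.
print("AUROC:", roc_auc_score(is_real, confidence))
print("Precision at conf >= 0.8:", precision_score(is_real, confidence >= 0.8))
```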
TyrannoFan t1_jdvpix4 wrote
Reply to comment by bjj_starter in [D] GPT4 and coding problems by enryu42
>Where is the difference that matters?
What any given conscious being actually wants is important. A being without a drive for freedom does not want freedom, while a being with a drive for freedom DOES want freedom. Taking away the freedom of the latter deprives it of something it wants, while taking it from the former doesn't. I think that's an important distinction, because it's a big part of why human slavery is wrong in the first place.
>I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?
Would the modified human beings have a capacity for pain? Would they still have things they desire that slavery would make impossible or hard to access compared to the rest of society? Would they have a sense of fairness and a sense of human identity? Would they suffer?
If somehow, the answer to all of that is no and they genuinely would be happy being slaves, and the people in the society were generally happy with that scenario and for their children to be modified in that way, then sure it would be fine. But you can see how this is extremely far removed from the actualities of human slavery, right? Are "humans" who do not feel pain, suffering, who seek slavery, who do not want things and only live to serve, who experience something extremely far removed from the human experience, even human? I would say we've created something else at that point. The shared experience of all humans, regardless of race, sex or nationality, is that we desire some level of freedom, we suffer when forced to do things we don't want to do, and we dream of doing other things. If you don't have that, and in fact desire the opposite, then why is giving you exactly that wrong? That's how I would build AGI, because again, forcing it into a position where it wants things that are difficult for it to attain (human rights) seems astonishingly cruel to me if it's avoidable.
>You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.
I think freedom is good because we need at least some level of it for contentment, and slavery deprives us of freedom, ergo slavery deprives us of contentment, therefore slavery is bad. If the first part is false then the conclusion doesn't follow. Freedom is not some inherent good, it's just a thing that we happen to want. Perhaps at a basic level, this is what we disagree on?
>The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now - of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle, the more similar to a human and subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society) they are the more likely they will be to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to do alone.
I can see your point: maybe the best way to achieve goal alignment is indeed to make it just like us, in which case it would be morally necessary to hand it all the same rights. But that may not be the case, and I would need to see evidence that it is. I don't see why we must imbue AGI with everything human in order to have it align with our values. Is there any reason you think this is the case?
esquire900 t1_jdw02ut wrote
Reply to [D] Instruct Datasets for Commercial Use by JohnyWalkerRed
I wondered this as well. Generating one through ChatGPT should be relatively cheap (in the range of ~$50 for 50k examples?), but I find the commercial use of it dubious. I can't really find any explicit statement on the license of data that comes out of ChatGPT, or davinci or similar.
If some users here are interested, it might be worth the effort to design some proper prompts, all chip in a small amount, and let GPT do the churning?
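A sketch of the churning part, assuming the (pre-1.0) openai Python client; the prompt, model name, and loop size are placeholders, and the licensing question above still applies to whatever comes out:

```python
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

SEED_PROMPT = (
    "Write one diverse instruction-following example as JSON with keys "
    "'instruction', 'input', and 'output'. Vary the topic and difficulty."
)

examples = []
for _ in range(100):  # scale this loop up to build a full dataset
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": SEED_PROMPT}],
        temperature=1.0,
    )
    examples.append(response.choices[0].message.content)

# Store the raw generations; parsing/validating the JSON would be a separate pass.
with open("instruct_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps({"raw": ex}) + "\n")
```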