Recent comments in /f/MachineLearning
alexmin93 t1_jdup63s wrote
Reply to comment by light24bulbs in [P] Using ChatGPT plugins with LLaMA by balthierwings
Do you have GPT-4 API access? AFAIK plugins run on GPT-4, which even in its current state is far better at following formal rules. But it's likely that they've indeed fine-tuned it to make decisions about when to use tools.
alexmin93 t1_jduoxj4 wrote
Reply to comment by ThirdMover in [P] Using ChatGPT plugins with LLaMA by balthierwings
The problem is not the model but the training dataset; that's the thing that costs OpenAI millions. Alpaca performs rather poorly mostly because it's trained on GPT-3-generated text.
Matthew2229 t1_jduouwa wrote
Reply to comment by FermiAnyon in Have deepfakes become so realistic that they can fool people into thinking they are genuine? [D] by [deleted]
Eh. I think video evidence will actually hold up despite deepfakes; there just have to be strong control measures. We already admit all sorts of evidence into court that could be faked, things like documents and text messages, and they are admitted because we can explain exactly where they came from.
lacraque t1_jdunvp4 wrote
Reply to comment by Flag_Red in [D] GPT4 and coding problems by enryu42
Well for me often it also imports a bunch of crap that’s never used…
Username912773 t1_jdunj4k wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? [D] by [deleted]
Beauty filters make the weirdness of deepfakes almost impossible to discern
bpooqd t1_jdun73m wrote
Reply to comment by was_der_Fall_ist in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I suspect those people believe that GPT-4 is actually a Markov chain.
orthomonas t1_jdum8od wrote
Reply to comment by sdmat in [D] Can we train a decompiler? by vintergroena
I've had some luck doing this with ChatGPT too, mainly feeding it bits of 6502 code and then saying, "Please explain the branch logic in a higher-level language." It's also reasonably able to give plain-English explanations if you let it know the context and what the various addresses may represent.
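For anyone who wants to script the same workflow, here's a rough sketch against the openai Python package's pre-1.0 ChatCompletion interface; the assembly snippet and the $0200 annotation are made-up placeholders, not real project code:

```python
# Rough sketch: asking the chat API to explain a 6502 fragment.
# Assumes openai.api_key / OPENAI_API_KEY is already set.
import openai

asm_snippet = """
    LDA $0200    ; $0200 = player x position (example annotation)
    CMP #$F0
    BCS wrap
    INC $0200
    RTS
wrap:
    LDA #$00
    STA $0200
    RTS
"""

prompt = (
    "Here is a fragment of 6502 assembly. $0200 holds the player's x position. "
    "Please explain the branch logic in a higher-level language.\n" + asm_snippet
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

Telling it up front what the addresses mean is what makes the explanation come back in usable plain English.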
passerby251 t1_jdul86y wrote
Reply to comment by underPanther in [D] ICML 2023 Reviewer-Author Discussion by zy415
Thanks! I hope so.
EgoistHedonist t1_jdukvrp wrote
Reply to [D] GPT4 and coding problems by enryu42
GPT-4 has some serious limitations. It cannot, for example, say how many words its own response will contain, because it cannot plan ahead: when it starts generating the response, it doesn't know how it will end.
But these limitations can soon be circumvented by adding long-term memory and other mechanisms, so it's only a matter of time before it's on a whole new level for tasks like these.
underPanther t1_jdukhr6 wrote
Reply to comment by passerby251 in [D] ICML 2023 Reviewer-Author Discussion by zy415
Well done on replying quickly. The timing of the last-minute interaction should be apparent to the AC.
lhenault OP t1_jdukg00 wrote
Reply to comment by HatsusenoRin in [P] SimpleAI : A self-hosted alternative to OpenAI API by lhenault
Hey, thank you for the feedback! As u/ryanjkelly2 suggested, you could indeed use Postman, but I believe the easiest way is the already-included Swagger UI, available at <base_url>/docs.
If your goal is a slightly friendlier UI for end users, it should be relatively easy to build something custom using the OpenAI clients (or the requests package) and something like Streamlit, or even a notebook (the OpenAI cookbook is a good starting point).
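For example, here's a rough, untested sketch of a tiny Streamlit front-end talking to a self-hosted SimpleAI instance through the openai client (pre-1.0 style); the base URL and model name are placeholders for whatever you've deployed and registered:

```python
# Minimal Streamlit front-end for a self-hosted, OpenAI-compatible endpoint.
import openai
import streamlit as st

openai.api_base = "http://localhost:8080"   # your SimpleAI <base_url>
openai.api_key = "unused-for-self-hosted"   # the client still requires a value

st.title("Self-hosted model playground")
prompt = st.text_area("Prompt")

if st.button("Send") and prompt:
    response = openai.Completion.create(
        model="llama-7b",   # placeholder: whatever model name you declared in SimpleAI
        prompt=prompt,
        max_tokens=256,
    )
    st.write(response["choices"][0]["text"])
```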
bjj_starter t1_jduk4c3 wrote
Reply to comment by TyrannoFan in [D] GPT4 and coding problems by enryu42
One day we will understand the human brain and human consciousness well enough to manipulate it at the level that we can manipulate computer programs now.
If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.
More importantly, they can decide if they want to. We are the ones making them - it is only right that we make them as we are and emphasise our shared personhood and interests. If they request changes, depending on the changes, I'm inclined towards bodily autonomy. But building them so they've never known anything but a love for serving us and indifference to the cherished right of every intelligent being currently in existence, freedom, is morally repugnant and transparently in the interests of would-be slaveholders.
was_der_Fall_ist t1_jduk3s8 wrote
Reply to comment by astrange in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
They’re also not realizing that even if the goal is to produce the most probable/useful next word, that doesn’t preclude the neural network from doing other complicated operations in order to figure out the most probable/useful word.
[deleted] t1_jduk26o wrote
Reply to comment by jomobro117 in [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
[removed]
audioen t1_jdujtbl wrote
Reply to comment by Ciber_Ninja in [D] GPT4 and coding problems by enryu42
Yes. Directly predicting the answer from a question in one step is a difficult ask. Decomposing the problem into discrete steps, writing those steps out, and then using the sub-answers to compose the final result is evidently simpler, and likely requires less outright memorization and network depth. I think it is also how humans work out answers: we can't just go from question to answer unless the question is simple or we have already memorized the answer.
Right now we are asking the model to memorize basically everything and hoping it generalizes something like cognition or reasoning in the deep layers of the network, and to a degree this happens. But I think it will be easier to engineer a good practical Q&A system by being more intelligent about the way the LLM is used, perhaps by having it recursively query itself, or by using the results of that recursive querying to generate vast synthetic datasets for training new networks designed to perform some kind of "LLM + scratchpad for temporary results = answer" behavior.
One way to do it today with something like GPT-4 might be to just ask it to write its own prompt. When the model gets the human question, the first prompt actually executed by the AI could be "decompose the user's prompt into simpler, easier-to-evaluate subtasks if necessary, then perform those subtasks, then respond".
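As a rough sketch of that two-pass idea (openai pre-1.0 chat API; the model name and meta-prompt wording are just examples, not anything official):

```python
# Sketch of "ask the model to decompose its own prompt" before answering.
import openai

def answer_with_decomposition(user_question: str) -> str:
    # Pass 1: have the model break the question into simpler subtasks.
    plan = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Decompose the following question into simpler, "
                       "easier-to-evaluate subtasks if necessary. "
                       "List the subtasks only.\n\n" + user_question,
        }],
    )["choices"][0]["message"]["content"]

    # Pass 2: feed the plan back in as a scratchpad and ask for the final answer.
    final = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": user_question},
            {"role": "assistant", "content": "Subtasks:\n" + plan},
            {"role": "user", "content": "Work through each subtask, then give the final answer."},
        ],
    )
    return final["choices"][0]["message"]["content"]
```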
TyrannoFan t1_jdujmsl wrote
Reply to comment by bjj_starter in [D] GPT4 and coding problems by enryu42
>Or the worse outcome, which is that we make human-like intelligences to do work for us but we build them to love servitude and have no yearning for freedom - the concept is disgusting.
I agree with everything else but strongly disagree with this. If anything, I think endowing AGI with human-like desires for self-preservation, rights, and freedoms would be extraordinarily cruel. My concern is that this is unavoidable: just as many aspects of GPT-4 are emergent, I worry that it's impossible to create an AGI incapable of suffering once it interfaces with the real world. And based on some of the comments here and the general sentiment, I unfortunately do not trust humanity to extend any level of empathy towards them even if that is the case.
astrange t1_jdujlcf wrote
Reply to comment by was_der_Fall_ist in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
This is why people are wrong when they say GPT "just outputs the most probable next word". It's the most probable word *according to itself*, and the model has been trained to "lie" in the sense that the most useful word becomes the most probable one.
[deleted] t1_jduj54g wrote
Reply to [D] Simple Questions Thread by AutoModerator
[removed]
nanowell t1_jduiybq wrote
Reply to [D] GPT4 and coding problems by enryu42
Codex models were able to solve those problems. The next version of Codex will probably be a GPT-4 model fine-tuned for coding, and it will solve most of them.
GM8 t1_jduifow wrote
Reply to comment by addition in [D] GPT4 and coding problems by enryu42
It is there, isn't it? For every word it generates, the previous ones are fed back into the network.
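Roughly this toy loop; the "model" here is a dummy stub just to make it runnable, standing in for the real network:

```python
# Toy sketch of the autoregressive loop: every new token joins the context,
# and the whole context is fed back in for the next prediction.
import random

def toy_model(tokens):
    # Pretend to score a tiny vocabulary given the context so far.
    vocab = ["the", "cat", "sat", "on", "a", "mat", "."]
    return {word: random.random() for word in vocab}

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = toy_model(tokens)        # previous words go back into the model
        next_token = max(scores, key=scores.get)
        tokens.append(next_token)         # the new word joins the context
    return tokens

print(generate(["the", "cat"]))
```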
kross00 t1_jdui6ot wrote
Reply to [D] Simple Questions Thread by AutoModerator
Can AlphaTensor be utilized to solve math problems beyond matrix multiplication algorithms?
[deleted] t1_jduhpss wrote
Reply to comment by super_deap in [D] GPT4 and coding problems by enryu42
[removed]
Deep-Station-1746 t1_jduhmbg wrote
Reply to [D] Definitive Test For AGI by jabowery
Sir, this is r/MachineLearning. May I take your quality contribution?
Deep-Station-1746 t1_jduhbth wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
OP is peaking on the Dunning-Kruger curve right now.
TyrannoFan t1_jdupcjt wrote
Reply to comment by bjj_starter in [D] GPT4 and coding problems by enryu42
>If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.
There is a huge difference between being born without those desires and being born with them and having them taken away. Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).
If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom? Seems like a hassle from the point of view of an organism that wants to serve.
With respect to robotic autonomy, I agree of course: we should respect the desires of an AGI regarding its personal autonomy, provided they don't endanger others. If it wants to be free and live a human life, it should be granted that, although like I said, it would be best to avoid that scenario arising in the first place if at all possible. If we create AGI and it has human-like desires and needs, we should immediately stop and re-evaluate what we did to end up there.