Recent comments in /f/MachineLearning
SWESWESWEh t1_jdt1ly6 wrote
Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42
So my guess is that whoever ran the test did a bad job with the prompting. In one of the examples they share, https://github.com/enryu43/llm_coding/blob/main/atcoder_eval/gpt4_solutions/abc294_c.py, the author complains that the algorithm is inefficient. So I just gave it a shot with ChatGPT:
Me: What does this code do?
def main():
    N, M = map(int, input().split())
    A = list(map(int, input().split()))
    B = list(map(int, input().split()))
    C = sorted(A + B)
    pos_A = [str(C.index(a) + 1) for a in A]
    pos_B = [str(C.index(b) + 1) for b in B]
    print(" ".join(pos_A))
    print(" ".join(pos_B))

if __name__ == "__main__":
    main()
ChatGPT: This code takes two lists of integers as input, merges them, sorts the merged list, and then prints the positions of the elements from the original lists in the sorted merged list.
Here's a step-by-step explanation of the code:
1. The main function is defined.
2. Two integers, N and M, are read from the input and converted to int using map(int, input().split()).
3. Two lists, A and B, are read from the input, with each element converted to int using map(int, input().split()).
4. A new list C is created by merging lists A and B and sorting the result using sorted(A + B).
5. Two new lists, pos_A and pos_B, are created. These store the positions of the elements from lists A and B in the sorted list C. The positions are found using the index() method and are 1-indexed.
6. The elements in pos_A and pos_B are joined into space-separated strings and printed.
7. The if __name__ == "__main__": statement checks whether the script is being run directly (not imported as a module) and, if so, calls the main function.
Here's an example to illustrate how the code works:
Input:
4 3
2 1 4 3
5 6 7
Output:
2 1 4 3
5 6 7
The sorted merged list C is [1, 2, 3, 4, 5, 6, 7]. The positions of the elements from the original lists in the sorted list are 2, 1, 4, 3 for list A and 5, 6, 7 for list B.
Me: Could this code be refactored to be more algorithmically efficient?
ChatGPT: Yes, the code can be refactored to be more algorithmically efficient. The current implementation uses the index() method to find the positions of elements from lists A and B in the sorted list C. This approach has a time complexity of O(n^2) because index() is called for every element in both A and B.
To improve the algorithmic efficiency, you can use a dictionary to store the positions of the elements in the sorted list C. This will reduce the time complexity of finding the positions to O(n). Here's a refactored version of the code:
def main():
    N, M = map(int, input().split())
    A = list(map(int, input().split()))
    B = list(map(int, input().split()))
    C = sorted(A + B)
    positions = {value: index + 1 for index, value in enumerate(C)}
    pos_A = [str(positions[a]) for a in A]
    pos_B = [str(positions[b]) for b in B]
    print(" ".join(pos_A))
    print(" ".join(pos_B))

if __name__ == "__main__":
    main()
The main changes in the refactored code are:
1. A positions dictionary is created that maps the values in the sorted list C to their 1-indexed positions.
2. The pos_A and pos_B lists are created by directly looking up the positions of the elements in the positions dictionary.
This refactored version of the code improves the algorithmic efficiency by reducing the time complexity of finding the positions of the elements from O(n^2) to O(n).
So it accurately determines that this code solves the problem, without knowing the problem definition, and it spots the inefficiency when asked to write algorithmically efficient code. Seems fine to me.
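For what it's worth, here's a quick sanity check (my own sketch, not from the thread) that the dict-based lookup matches the original index()-based one on the sample input. One subtlety ChatGPT glossed over: with duplicate values, C.index() returns the first position while a plain dict comprehension keeps the last; using setdefault keeps first occurrences and so matches the original behavior exactly.

```python
# Compare the O(n^2) index() approach with an O(n) dict lookup
# on the sample input from the thread.
A = [2, 1, 4, 3]
B = [5, 6, 7]
C = sorted(A + B)

# Original approach: linear scan per element.
slow = [C.index(x) + 1 for x in A + B]

# Dict-based approach; setdefault keeps the FIRST occurrence of each
# value, matching what C.index() would return if there were duplicates.
positions = {}
for i, v in enumerate(C):
    positions.setdefault(v, i + 1)
fast = [positions[x] for x in A + B]

print(slow == fast)  # True
```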
ephemeralentity t1_jdt1krp wrote
Reply to comment by machineko in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Thanks a lot! To be honest, I need to spend a bit more time familiarising myself with pytorch / this package. I'll see if I can figure it out from here.
Calamero t1_jdt1h8v wrote
Reply to comment by uspmm2 in [D] GPT4 and coding problems by enryu42
Also the prompt seems messed up? What are they doing there? Why not give it the original question?
rya794 t1_jdt0dxe wrote
Reply to comment by sweatierorc in [P] Using ChatGPT plugins with LLaMA by balthierwings
That’s a really good counter argument. You may have moved me over to the other side.
sweatierorc t1_jdszzh4 wrote
Reply to comment by rya794 in [P] Using ChatGPT plugins with LLaMA by balthierwings
Firefox did, they only lost to another "open-source" project
LanchestersLaw t1_jdszbjk wrote
Reply to comment by addition in [D] GPT4 and coding problems by enryu42
What I think is most amazing is that GPT got this far while only trying to predict the very next word. The fact that it can generate essays by considering only one token at a time is mind-boggling.
With all the feedback from ChatGPT, it should be easy to build a supervisor that looks at the entire final output of GPT, predicts what the user would say in response, and then feeds that back to GPT to revise the output recursively until it converges. That should be relatively easy to do, but would be very powerful.
ThePhantomPhoton t1_jdsyzhn wrote
Reply to [D] GPT4 and coding problems by enryu42
It’s easier to gauge the effectiveness of these large language models within the context of what they are actually doing, and that is repeating language they’ve learned elsewhere, predicated on some prompt provided by the user. They are not “reasoning,” although the language they use can lead us to believe that is the case. If you’re disappointed by their coding, you will certainly be disappointed by their mathematics.
K9ZAZ t1_jdsyxgb wrote
Reply to [D] GPT4 and coding problems by enryu42
People got way, way, way out over their skis on the whole "this is AGI" thing, and I would love to hear some of their responses to this.
BeautifulLazy5257 t1_jdsyfbc wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
I'd start by going through the Hugging Face courseware.
You'll learn in the first chapter of their courses that it is just better for people to fine-tune pre-trained models. That's what they are there for.
It costs a lot, and produces a lot of wasted energy and heat, to train an LLM from scratch.
Hamoodzstyle t1_jdsy1cu wrote
Reply to comment by CobaltAlchemist in [D] GPT4 and coding problems by enryu42
That's mind blowing holy moly
LastVariation t1_jdsx10i wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
Start by training a GPT-2, then add 2, then use reinforcement to keep it from being crazy by feeding it ChatGPT responses.
[deleted] t1_jdswzfc wrote
Reply to comment by BeautifulLazy5257 in [D] GPT4 and coding problems by enryu42
[deleted]
FermiAnyon t1_jdswreq wrote
Reply to Have deepfakes become so realistic that they can fool people into thinking they are genuine? [D] by [deleted]
Yeah, even if it's not literally the case now, give it another year or two. I reckon video evidence in court has maybe another decade of legs.
W_O_H t1_jdswox3 wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
Yeah, it will be easy if you have a budget of $200+ million.
bjj_starter t1_jdswame wrote
Reply to comment by night81 in [D] GPT4 and coding problems by enryu42
It's probably worth noting that the hard problem of consciousness is considered by most to be fundamentally unsolvable, and that it is currently just as good an argument that any given human isn't conscious as it is that any given AI isn't conscious.
robobub t1_jdswai9 wrote
Reply to comment by ngildea in [D] GPT4 and coding problems by enryu42
Indeed, it just made them more efficient, so we need fewer of them and/or can pay them less.
Hands0L0 t1_jdsvv9h wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
You and everyone else here
big_ol_tender t1_jdsvods wrote
Reply to [D] Build a ChatGPT from zero by manuelfraile
Lol
[deleted] t1_jdsvjjy wrote
Reply to [D] GPT4 and coding problems by enryu42
[removed]
sleeplessinseattle00 t1_jdsvfhg wrote
Reply to comment by zy415 in [D] ICML 2023 Reviewer-Author Discussion by zy415
NeurIPS reviewers ghost as well, but at ICLR I believe they do get involved.
light24bulbs t1_jdsulyn wrote
Reply to comment by endless_sea_of_stars in [P] Using ChatGPT plugins with LLaMA by balthierwings
By "in-context learning" I take it you mean zero-shot.
Yes, you can hot-swap. I'd be unsurprised if what OpenAI did is fine-tune on how to use plugins in general, using some examples combined with a little bit of zero-shot priming.
Something trained with Toolformer's technique and then told it can use a new, but similar, plugin is IMO going to generalize far better than something that has never used a plugin before.
[deleted] t1_jdstnqd wrote
Reply to comment by robobub in [D] GPT4 and coding problems by enryu42
[deleted]
Yardanico t1_jdstn3v wrote
Reply to [P] Using ChatGPT plugins with LLaMA by balthierwings
Has the author seen https://github.com/hwchase17/langchain? I think that's exactly the problem they're trying to solve.
robobub t1_jdst1oo wrote
Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42
> Moreover, I doubt any human programmer will have troubles with the "Beginner" problems, regardless of their specialization.
Have you not heard about how many fail to pass FizzBuzz interview questions?
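For anyone who hasn't seen it, FizzBuzz really is this small; a minimal Python version:

```python
def fizzbuzz(n: int) -> list[str]:
    # For 1..n: "Fizz" for multiples of 3, "Buzz" for multiples of 5,
    # "FizzBuzz" for multiples of both, the number itself otherwise.
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out

print(" ".join(fizzbuzz(15)))
# 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```

And yet it famously filters out a surprising fraction of interviewees.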
nixed9 t1_jdt1xyp wrote
Reply to comment by farmingvillein in [D] GPT4 and coding problems by enryu42
Ok, my bad, but that's how I've been using Reflexion-style prompting.