Recent comments in /f/MachineLearning

SWESWESWEh t1_jdt1ly6 wrote

Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42

So my guess is whoever ran the test did a bad job with the prompting. For one of the examples they share, https://github.com/enryu43/llm_coding/blob/main/atcoder_eval/gpt4_solutions/abc294_c.py, the author complains that it's not an efficient algorithm. So I just gave it a shot with ChatGPT:

Me: What does this code do?

def main():
    N, M = map(int, input().split())
    A = list(map(int, input().split()))
    B = list(map(int, input().split()))

    C = sorted(A + B)
    pos_A = [str(C.index(a) + 1) for a in A]
    pos_B = [str(C.index(b) + 1) for b in B]

    print(" ".join(pos_A))
    print(" ".join(pos_B))

if __name__ == "__main__":
    main()

ChatGPT: This code takes two lists of integers as input, merges them, sorts the merged list, and then prints the positions of the elements from the original lists in the sorted merged list.

Here's a step-by-step explanation of the code:

1. The main function is defined.
2. Two integers, N and M, are read from the input and mapped to int using map(int, input().split()).
3. Two lists, A and B, are read from the input, with each element being mapped to int using map(int, input().split()).
4. A new list C is created by merging lists A and B and then sorting the result using sorted(A + B).
5. Two new lists, pos_A and pos_B, are created. These store the positions of the elements from lists A and B in the sorted list C. The positions are calculated using the index() method and are 1-indexed.
6. The elements in pos_A and pos_B are joined into strings and printed, separated by spaces.
7. The if __name__ == "__main__": statement checks whether the script is being run directly (not imported as a module), and if so, calls the main function.

Here's an example to illustrate how the code works:

Input:

4 3
2 1 4 3
5 6 7

Output:

1 2 4 3
5 6 7

The sorted merged list C is [1, 2, 3, 4, 5, 6, 7]. The positions of the elements from the original lists in the sorted list are 1, 2, 4, 3 for list A and 5, 6, 7 for list B.

Me: Could this code be refactored to be more algorithmically efficient?

Yes, the code can be refactored to be more algorithmically efficient. The current implementation uses the index() method to find the positions of elements from lists A and B in the sorted list C. This approach has a time complexity of O(n^2) because index() is called for every element in both A and B.

To improve the algorithmic efficiency, you can use a dictionary to store the positions of the elements in the sorted list C. This will reduce the time complexity of finding the positions to O(n). Here's a refactored version of the code:

def main():
    N, M = map(int, input().split())
    A = list(map(int, input().split()))
    B = list(map(int, input().split()))

    C = sorted(A + B)
    positions = {value: index + 1 for index, value in enumerate(C)}

    pos_A = [str(positions[a]) for a in A]
    pos_B = [str(positions[b]) for b in B]

    print(" ".join(pos_A))
    print(" ".join(pos_B))

if __name__ == "__main__":
    main()

The main changes in the refactored code are:

1. A positions dictionary is created that maps the values in the sorted list C to their 1-indexed positions.
2. The pos_A and pos_B lists are created by directly looking up the positions of the elements in the positions dictionary.

This refactored version of the code improves the algorithmic efficiency by reducing the time complexity of finding the positions of the elements from O(n^2) to O(n).
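As a quick sanity check (my addition, not part of the original exchange), the two lookup strategies can be compared on the example input above. This assumes all values are distinct, since index() returns the first match while a dict comprehension keeps the last:

```python
# Compare the O(n^2) index() approach with the O(n) dictionary approach
A = [2, 1, 4, 3]
B = [5, 6, 7]
C = sorted(A + B)

slow = [C.index(a) + 1 for a in A]               # repeated linear scans over C
positions = {v: i + 1 for i, v in enumerate(C)}  # single pass, then O(1) lookups
fast = [positions[a] for a in A]

assert slow == fast == [2, 1, 4, 3]
```

Both produce identical positions; only the cost of each lookup changes.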


So it accurately analyzes that this code solves the problem without knowing the problem definition, and it will spot the inefficiency if asked to actually write algorithmically efficient code. Seems fine to me.

9

LanchestersLaw t1_jdszbjk wrote

Reply to comment by addition in [D] GPT4 and coding problems by enryu42

What I think is the most amazing thing is that GPT got this far while only trying to predict the very next word, one token at a time. The fact that it can generate essays by only considering one token at a time is mind-boggling.
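The one-token-at-a-time loop can be illustrated with a toy sketch. This is a deliberately crude stand-in, not how GPT actually works: a real model conditions on the entire preceding context and samples from a learned distribution, whereas this toy "model" only looks at the last word:

```python
# Toy greedy generation: emit one token per step, each step looking only
# at the most recent token (a bigram table stands in for a real model).
bigram = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(start, steps):
    out = [start]
    for _ in range(steps):
        nxt = bigram.get(out[-1])  # "predict" the next token
        if nxt is None:
            break
        out.append(nxt)  # the new token becomes part of the context
    return " ".join(out)

print(generate("the", 4))  # "the cat sat on the"
```

Even in this trivial form, coherent-looking sequences fall out of nothing but repeated next-token prediction.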

With all the feedback from ChatGPT, it should be easy to program a supervisor that looks at GPT's entire final output and predicts what the user would say in response; it then feeds that prediction back to GPT to revise the output recursively until it converges. That should be relatively easy to do, but it would be very powerful.
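The supervisor loop described here can be sketched as follows. This is a hypothetical outline of the idea, with stub functions standing in for real model calls (`predict_user_critique` and `revise` are names I made up for illustration):

```python
def predict_user_critique(answer):
    # Stub: a real version would ask the model to predict the user's reaction.
    return "LGTM" if answer.endswith("(revised)") else "Please be more specific."

def revise(answer, critique):
    # Stub: a real version would ask the model to rewrite the answer
    # so that it addresses the predicted critique.
    return answer + " (revised)"

def refine(draft, max_rounds=3):
    # Generate -> predict critique -> revise, until the predicted
    # user response converges to approval (or we hit a round limit).
    for _ in range(max_rounds):
        critique = predict_user_critique(draft)
        if critique == "LGTM":
            break
        draft = revise(draft, critique)
    return draft

print(refine("First answer"))  # "First answer (revised)"
```

The round limit matters in practice: without it, a model that never "approves" its own output would loop forever.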

28

ThePhantomPhoton t1_jdsyzhn wrote

It’s easier to gauge the effectiveness of these large language models within the context of what they are actually doing, and that is repeating language they’ve learned elsewhere, predicated on some prompt provided by the user. They are not “reasoning,” although the language they use can lead us to believe that is the case. If you’re disappointed by their coding, you will certainly be disappointed by their mathematics.

2

K9ZAZ t1_jdsyxgb wrote

People got way, way, way out over their skis on the whole "this is AGI" thing, and I would love to hear some of their responses to this.

2

BeautifulLazy5257 t1_jdsyfbc wrote

I'd start by going through the Hugging Face courseware.

You'll learn in the first chapter of their courses that it is just better for people to fine-tune pre-trained models. That's what they are there for.

It costs a lot, and training an LLM from scratch consumes a great deal of energy and produces a lot of waste heat.

8

bjj_starter t1_jdswame wrote

Reply to comment by night81 in [D] GPT4 and coding problems by enryu42

It's probably worth noting that the hard problem of consciousness is considered by most to be fundamentally unsolvable, and that it is currently just as good an argument that any given human isn't conscious as it is an argument that any given AI isn't conscious.

25

light24bulbs t1_jdsulyn wrote

By "in-context learning" I take it you mean zero-shot.

Yes, you can hot-swap. I'd be unsurprised if what OpenAI did is fine-tune on how to use plugins in general, using some examples combined with a little zero-shot priming.

Something trained with Toolformer's technique and then told it can use a new, but similar, plugin is IMO going to generalize far better than something that's never used a plugin before.

2

robobub t1_jdst1oo wrote

Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42

> Moreover, I doubt any human programmer will have troubles with the "Beginner" problems, regardless of their specialization.

Have you not heard about how many fail to pass FizzBuzz interview questions?
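For reference, the FizzBuzz question mentioned here is the classic screening exercise: print the numbers 1 through N, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz". A minimal standard solution:

```python
def fizzbuzz(n):
    # Multiples of 15 must be checked first, or they would
    # match the "Fizz" branch and never print "FizzBuzz".
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(" ".join(fizzbuzz(15)))
```

The ordering of the branches is the usual stumbling block in interviews, not the arithmetic.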

3