Recent comments in /f/MachineLearning

yaosio t1_jdtc4jf wrote

I think it's unsolvable because we're missing key information. Let's use an analogy.

Imagine an ancient astronomer trying to explain why celestial bodies sometimes move backwards, while believing that the Earth is the center of the universe. They can spend their entire life on the problem and make no progress so long as they don't know the sun is the center of the solar system. They will never realize that the celestial bodies aren't actually traveling backwards at all.

If they start with the sun at the center of the solar system, an impossible question becomes so trivial that even children can understand it. This happens again and again: an impossible question becomes trivial once an important piece of information is discovered.

Edit: I'm worried that somebody is going to accuse me of saying things I haven't said because that happens a lot. I am saying we don't know what consciousness is because we're missing information and we don't know what information we're missing. If anybody thinks I'm saying anything else, I'm not.

4

yaosio t1_jdtbh6i wrote

Reply to comment by E_Snap in [D] GPT4 and coding problems by enryu42

Arthur C. Clarke wrote a book called Profiles of the Future. In it he wrote:

>Too great a burden of knowledge can clog the wheels of imagination; I have tried to embody this fact of observation in Clarke’s Law, which may be formulated as follows:
>
>When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

13

m0ushinderu t1_jdtbee7 wrote

GPT's model architecture has not exactly been shrouded in mystery anyways. It is all about the training data and training methodology. That's why projects that crowdsource training data, such as OpenAssistant, are so important rn. You can check them out!

5

Abikdig t1_jdtail3 wrote

I ask ChatGPT to optimize my Leetcode solutions every day. It rarely manages to optimize one without breaking the code.

Sometimes the only optimization I get from it is the suggestion to use data structure X instead of Y because X is better suited to this kind of problem.

1

cegras t1_jdta9mj wrote

Reply to comment by pengo in [D] GPT4 and coding problems by enryu42

More like, the ability to know that 'reversing a linked list' and 'linked list cycle and traversal problems' are the same concept but different problems, and to separate those into train/test. Clearly they haven't figured that out, because ChatGPT is contaminated, and their (opaquely disclosed) ways of addressing that issue don't seem adequate at all.
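
To make that concrete: one common approximation of that train/test separation is to embed each problem statement and flag pairs whose cosine similarity crosses a threshold. A toy sketch in numpy; the random vectors and the 0.9 cutoff are made up for illustration, and real embeddings would come from a sentence-encoder, not a random generator.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = same direction, 0.0 = orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for embeddings of two problem statements.
rng = np.random.default_rng(0)
reverse_list = rng.normal(size=384)
cycle_traversal = 0.9 * reverse_list + 0.1 * rng.normal(size=384)

# Same concept, different problem: keep both on the same side of the split.
if cosine_similarity(reverse_list, cycle_traversal) > 0.9:
    print("near-duplicate: don't split these across train/test")
```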

3

sdmat t1_jdt9ik9 wrote

Reply to comment by Jeffy29 in [D] GPT4 and coding problems by enryu42

> There would need to be some time limit imposed so it can't brute force the solution after guessing for a few days

Not exactly unheard of for junior programmers, to be fair.

3

[deleted] t1_jdt9g2f wrote

A reviewer suddenly lowered their score and raised a new concern just a few hours before the end of the discussion period. Since the reviewer revised the original official review, I didn't get notified.

The reason for lowering the score is that we did not apply our method to other algorithms and other datasets. If that was the concern, I wonder why the reviewer didn't raise it earlier and give us a chance to respond during the rebuttal or discussion period.

3

Trotskyist t1_jdt8tx6 wrote

Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42

It's still an extremely useful tool if you accept its limitations, and I think it's reductive to say it can only solve "dumb" problems or suggest boilerplate code.

I used GPT-4 the other day to refactor/optimize an extremely bespoke, fairly complicated geoprocessing script that we use at work, written by a former employee who's no longer with the organization. Yes, it got some things wrong that had to be corrected (sometimes all it took was feeding it a stack trace; other times that wasn't enough and I had to figure out the issue myself).

But at the end of the day (literally: this was over the course of an afternoon), I'd managed to cut the runtime by more than half, using libraries I'd never touched before. It probably would have taken a week to implement otherwise.

11

WarAndGeese t1_jdt8f3u wrote

Arguments against solipsism make it reasonable to assume that other humans, and by extension other animals, are conscious. One knows that one is conscious. One, even if not completely understanding how it works, understands that it developed historically and materially somehow. One knows that other humans act like one does, and that they have gone through the same developmental process, evolutionarily, biologically, and so on. It's reasonable to assume that whatever inner workings produced consciousness in one's own mind would also have developed in others' minds, through the same biological processes. Hence it's reasonable to assume that other humans are conscious; indeed, that their being conscious is the most likely situation. This thinking can be expanded to include animals, even if they have higher or lower levels of consciousness and understanding than we do.

With machines you have a fundamentally different 'brain structure', and one that was pretty fundamentally designed to mimic. Human consciousness arose independently and spontaneously, whereas a machine's behavior was engineered to imitate it. So the claim that any given AI isn't conscious is a much stronger argument than the claim that any given human isn't conscious.

1

sdmat t1_jdt85pr wrote

Yes, it's amazing to see something as simple as "Assess the quality of your answer and fix any errors" actually work.

Or, for more subjective output such as poetry, "Rate each line in the preceding poem" followed by "Rewrite the worst lines".
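
If you want to script it, the loop is trivial. A rough sketch below; `chat()` is a hypothetical placeholder standing in for whatever chat-completion API you're using, and the critique prompt is just the one quoted above.

```python
# Self-critique loop sketch. `chat` is a hypothetical wrapper around your
# LLM API of choice: it takes a message list and returns the reply text.
def chat(messages: list[dict]) -> str:
    raise NotImplementedError("wrap your chat-completion API here")

def answer_with_self_critique(question: str, rounds: int = 2) -> str:
    messages = [{"role": "user", "content": question}]
    answer = chat(messages)
    for _ in range(rounds):
        messages.append({"role": "assistant", "content": answer})
        # The whole trick: ask the model to grade and repair its own output.
        messages.append({"role": "user",
                         "content": "Assess the quality of your answer and fix any errors."})
        answer = chat(messages)
    return answer
```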

6

pengo t1_jdt6iv2 wrote

Reply to comment by cegras in [D] GPT4 and coding problems by enryu42

> Then what you have is something that can separate content into logically similar, but orthogonal realizations.

Like a word vector? The thing every language model is based on?

1

SWESWESWEh t1_jdt2m8y wrote

Reply to comment by lambertb in [D] GPT4 and coding problems by enryu42

It often has errors, but if you just paste the errors into the chat it will generally fix them. In the early versions of ChatGPT, I had issues with stuff like writing networking code in C++, but it still got me a lot of the way there.

I recently worked through writing a high-throughput async data pipeline in Java, and it did a great job of writing the code and even taught me a new design pattern. I had to make a few small changes here and there, but it basically turned a week of work into a couple of hours. With the written code in context, I also had it write unit tests and documentation for me, and based on my feedback it added more unit tests and integration tests as well.

I'm fine with people underestimating how good ChatGPT is as a coding assistant, it just makes me look better because of how productive it makes me.

11

fiftyfourseventeen t1_jdt29u3 wrote

I've wasted too much time trying to do basic tasks with it as well. For example, I argued with it over many messages about something that was blatantly wrong, and it insisted it wasn't (in that case it was trying to call an order-by-similarity function with an argument to sort by Euclidean distance or cosine similarity, but it really didn't want to accept that cosine similarity isn't a distance metric and therefore has to be treated differently when sorting).
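
For anyone who hasn't hit this: a distance sorts ascending (smaller means closer), while a similarity sorts descending (bigger means closer); treat one like the other and you silently get the worst matches first. A toy example with made-up vectors:

```python
import numpy as np

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0])
items = [np.array([0.9, 0.1]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]

# Distance metric: smaller is closer, so sort ascending (the default).
by_distance = sorted(items, key=lambda v: euclidean_distance(query, v))

# Similarity: larger is closer, so you must sort DESCENDING instead.
by_similarity = sorted(items, key=lambda v: cosine_sim(query, v), reverse=True)

print(by_distance[0], by_similarity[0])  # both give the best match, [0.9, 0.1]
```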

My most recent one was when I wasted an hour on something that was literally just one line of code. I had videos at all different framerates, and I wanted to make them all 16 fps while affecting length and speed as little as possible. It gave me a couple of solutions that just straight up didn't work, then I had to manually fix a ton of things, and I finally ended up with a scuffed, horrible solution. It wouldn't give me a better algorithm, so I was trying to make one on my own when I thought, "I should Google whether there's a simpler solution." From that Google search I learned: oh, there's literally just a .set_fps() method.
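
In case it saves someone else the hour: that method matches moviepy's API, if I remember right, so assuming that's the library, the entire fix is something like this.

```python
# Sketch assuming moviepy: set_fps() returns a copy of the clip whose
# frames will be resampled to the new framerate when written out.
from moviepy.editor import VideoFileClip

clip = VideoFileClip("input.mp4")
clip.set_fps(16).write_videofile("output_16fps.mp4")
```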

Anyways, from using it I feel like it's helpful, but not as much as people make it out to be. Honestly, GitHub Copilot has been way more helpful, because it can autocomplete things that are common but take forever to write, like command-line args and descriptions, or repetitive pieces of code.

2