Recent comments in /f/MachineLearning

Haycart t1_jdu7hlp wrote

Reply to comment by visarga in [D] GPT4 and coding problems by enryu42

Oh, you are probably correct. So it'd be O(n^2) overall for autoregressive decoding, which still exceeds the O(n log n) that the linked post says is required for multiplication.
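A back-of-the-envelope sketch of where the O(n^2) comes from, assuming one attention pass over the existing context per generated token and ignoring constants, with the O(n log n) multiplication figure taken from the linked post:

```python
# Generating n tokens autoregressively: producing token i attends over the
# i tokens already in context, so the total is 1 + 2 + ... + n = n(n+1)/2,
# i.e. O(n^2), versus the O(n log n) cited for n-digit multiplication.
import math

def decoding_ops(n: int) -> int:
    return n * (n + 1) // 2  # one attention pass per decoding step

for n in (10, 100, 1000):
    print(n, decoding_ops(n), round(n * math.log2(n)))
```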

1

blose1 t1_jdu4cln wrote

Reply to comment by visarga in [D] GPT4 and coding problems by enryu42

What? Where exactly am I mistaken? Both of my statements are true. There is a 0% chance you can pass an olympiad task without knowledge; a human with all the relevant knowledge WILL reason and come up with a solution BASED on that knowledge AND on the experience of others that is part of it. If that weren't true, no human would solve any olympiad. Sorry, but what you wrote in the context of my comment is just ridiculous, and it looks like a reply to something I didn't write.

4

super_deap t1_jdu3zan wrote

It is fine if you disagree, and I believe a lot more people will disagree with this philosophical position, as it is not very popular these days.

Near-death experiences, out-of-body experiences, contact with 'immaterial entities', and so on hint towards an existence beyond our material reality. The fact that there is no way to 'scientifically' test these things does not mean they simply do not exist.

Testimony, a widely used yet mostly dismissed method of knowledge acquisition, establishes all of the above:

A patient who was operated on while in a complete medical coma describes, after the operation and in clear detail, events that happened in a nearby room that there is no way they could have known about. One such testimony from a reliable person is sufficient to establish that our current understanding of the world is insufficient. And there are so many of these accounts.

I am not saying you have to change your worldview just because I say so. Do your research. The world is much bigger than what is out there on the internet (pun intended).

−1

lvvy t1_jdu3qg4 wrote

It would be interesting to see whether ChatGPT can solve these problems not with code but with text instructions that would let a human solve them. If you force it to write a giant wall of step-by-step actions, could a human with a calculator follow them and solve these problems reliably? Also, can the code it generates not be corrected at all through discussion, or would discussing it just take too long?

1

sdmat t1_jdu3imb wrote

GPT4 will do this to an extent out of the box: feed it some assembly and it will hypothesise a corresponding program in the language of your choice. For me the output still has that disassembler character of over-specificity, but I didn't try very hard to get an idiomatic result.

It can give a detailed analysis of assembly too, including assessing what it does at a high level in plain English. Useful!

Edit: Of course it's going to fail hopelessly for large/complex programs.
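For anyone who wants to try it, here's a minimal sketch of the kind of prompt involved. The assembly is a toy x86-64 example, not real disassembly, and it assumes the openai Python client as it existed around the GPT-4 launch:

```python
# Minimal sketch: ask GPT-4 to hypothesise source code from assembly.
# The snippet is a toy x86-64 (System V ABI) function, not real output
# from a disassembler.
import openai

ASSEMBLY = """
f:
    mov eax, edi      ; eax = first integer argument
    imul eax, esi     ; eax *= second integer argument
    ret               ; return the product in eax
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a reverse engineer. Translate assembly into "
                    "idiomatic C, not a literal line-by-line rendering."},
        {"role": "user",
         "content": f"What C function does this implement?\n{ASSEMBLY}"},
    ],
)
print(response.choices[0].message.content)
```

Nudging it toward "idiomatic" in the system prompt is exactly the kind of thing that helps with the over-specificity.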

14

Majestic_Food_4190 t1_jdu3fod wrote

Reply to comment by cegras in [D] GPT4 and coding problems by enryu42

It amuses me that people always mention things of this nature. If the answer is simply yes... then it's still doing it far faster than you are, making it a better developer than most others.

It's like Watson beating the top players at Jeopardy. Was it just searching the internet? Pretty much. Did it win Jeopardy anyway? Yes.

So does the how matter?

7

artsybashev t1_jdu2hjs wrote

Reply to comment by super_deap in [D] GPT4 and coding problems by enryu42

The physical world that we know is very different from the virtual twin that we see. The human mind lives in a virtual existence created by the material human brain. This virtual world creates nonexistent things like pain, colors, and feelings, as well as the feeling of existence itself.

The virtual world that each of our brains creates is the wonderful place where a soul can emerge. Virtual worlds can also be created by computers. There is no third, magical place besides these two in my view.

0

visarga t1_jdu1fgf wrote

> Does this mean developers/humans don't have AGI?

The intellect of our species isn't universal, we're merely experts at self-preservation and propagation. Take, for instance, chess – it isn't our forte, and even a small calculator could outperform us. Our minds are incapable of 5-D visualization, and we struggle to maintain over 10 unrelated items in our immediate memory. Generally, we falter when addressing problems where the initial move relies on the final steps, or situations that don't allow for linear progression, such as chess or mathematical quandaries. It took us centuries to decipher many of these enigmas. Our specialization lies in tackling human-centric challenges, rather than all-encompassing ones. Evolution simply hasn't had sufficient time to adapt our cerebral cortex for mathematical prowess.

1

super_deap t1_jdu0w8f wrote

Hard disagree with Materialism. I know I might get a lot of negative votes, but this has to be said:

A large portion of the world (especially outside of the West) does not believe that consciousness 'emerges' from the electrical impulses of the brain. While the West has progressed a lot materially, bringing us to modernity (and now post-modernity), people outside of it believe in an immaterial soul that by definition cannot be captured by the scientific method and that transcends our material body.

While I believe we will reach general human-level intelligence (and may go beyond this) because intelligence has a purely material component that we can replicate in computers, consciousness will never ever arise in these systems. There are very strong philosophical arguments to support this case.

−5

vintergroena OP t1_jdtzyuo wrote

Yeah, GPT sucks at tasks that require actual thinking, and personally I am kind of skeptical about its actual usefulness, tbh. But my impression is that despite being primarily built to work with natural language, it actually works better with computer code, probably because code has a much simpler structure. This got me thinking that building something more specialized that only has to work with computer code would be an easier task: more similar to automated translation, perhaps, which already works pretty well using ML.

4

sdmat t1_jdtytyy wrote

Reply to comment by yaosio in [D] GPT4 and coding problems by enryu42

> Like coding, even if you use chain of thought and self reflection GPT-4 will try to write the entire program in one go. Once something is written it can't go back and change it if it turns out to be a bad idea, it is forced to incorporate it. It would be amazing if a model can predict how difficult a task will be and then break it up into manageable pieces rather than trying to do everything at once.

I've had some success leading it through this in coding with careful prompting: have it give a high-level outline, check its work, implement each part, check its work, then put the thing together. It will even revise the high-level idea if you ask it to and update the corresponding implementation in the context window.
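Something like this drives that loop by hand. A rough sketch only: the model name, prompts, and fixed part count are illustrative, and it assumes the openai client API as of early 2023:

```python
# Rough sketch of the outline -> check -> implement -> check -> assemble
# loop, with the human-side logic hardcoded. Prompts and the part count
# are placeholders; real use would parse the outline instead.
import openai

history = [{"role": "system", "content": "You are a careful senior developer."}]

def ask(prompt: str) -> str:
    """Send a user turn, record the reply, and keep both in the context."""
    history.append({"role": "user", "content": prompt})
    reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

task = "a CLI tool that deduplicates lines across many large files"
ask(f"Give a high-level outline, as numbered parts, for {task}.")
ask("Review that outline for design problems and revise it if needed.")
for part in (1, 2, 3):  # assume the revised outline has three parts
    ask(f"Implement part {part} of the revised outline.")
    ask(f"Check part {part} for bugs and fix any you find.")
print(ask("Combine the parts into one final program."))
```

Because the whole exchange stays in `history`, a revision to the outline naturally propagates to later implementation steps, which is the context-window behaviour described above.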

But it definitely can't do so natively. Intuitively it seems unlikely that we can get similar results to GPT4+human with GPT4+GPT4 regardless of how clever the prompting scheme is. But the emergent capabilities seen already are highly surprising, so who knows.

Really looking forward to trying these schemes with a 32K context window.

Add code execution to check results and browsing to get library usage right, and it seems all the pieces are there for an incredible level of capability, even if it still needs human input in some areas.

5