Recent comments in /f/MachineLearning

TyrannoFan t1_jdupcjt wrote

>If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.

There is a huge difference between being born without those desires and being born with them and having them taken away. Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth has a desire to roam free and to make choices (or develops that desire as its brain matures).

If I hadn't been born with that drive, or had never developed it, I'm not sure why I would seek freedom. It seems like a hassle from the point of view of an organism that wants to serve.

With respect to robotic autonomy, I agree, of course: we should respect the desires of an AGI regarding its personal autonomy, provided it doesn't endanger others. If it wants to be free and live a human life, it should be granted that, although like I said, it would be best to avoid that scenario arising in the first place if at all possible. If we create an AGI and it has human-like desires and needs, we should immediately stop and re-evaluate what we did to end up there.

2

Matthew2229 t1_jduouwa wrote

Eh. I think video evidence will actually hold up despite deep fakes. There just have to be strong control measures. We already admit all sorts of evidence into court that could be faked, things like documents and text messages, but it's admitted because we can explain exactly where it came from.

4

orthomonas t1_jdum8od wrote

I've had some luck doing this with ChatGPT too. Mainly feeding it bits of 6502 code and then saying, "Please explain the branch logic in a higher level language." It's also reasonably able to give plain-English explanations if you let it know the context and what various addresses may represent.
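
Something like this, for instance (a rough sketch, assuming the pre-1.0 `openai` Python package; the assembly snippet and the meaning of address $10 are made up for illustration):

```python
# A sketch of the kind of prompt that works well: a 6502 fragment plus
# context about what the addresses represent. Assumes the pre-1.0 `openai`
# package; the snippet and the meaning of $10 are invented for illustration.
import openai

SNIPPET = """\
    LDA $10        ; load value at zero-page address $10
    BEQ gameover   ; branch if it was zero
    DEC $10        ; otherwise decrement it
gameover:
    RTS
"""

prompt = (
    "Here is some 6502 assembly. Address $10 holds the player's remaining "
    "lives. Please explain the branch logic in a higher level language:\n\n"
    + SNIPPET
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```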

5

EgoistHedonist t1_jdukvrp wrote

GPT-4 has some serious limitations. It cannot, for example, say how many words its own response will have, because it cannot plan ahead: when it starts to generate the response, it doesn't know how it will end.
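
To make that concrete, here's a toy sketch of autoregressive decoding (everything here is a made-up stand-in, not GPT-4 itself): each token is drawn given only the prefix, so the output's length isn't known until the stop token happens to be sampled.

```python
# Toy sketch of autoregressive decoding: each token is drawn from a
# distribution over *the next token only*, conditioned on the prefix.
# There is no global plan, so the final length is unknown until <eos>
# happens to be sampled. (The "model" here is a random stand-in.)
import random

def sample_next(prefix: list[str]) -> str:
    # Stand-in for the real model's P(next token | prefix).
    return random.choice(["the", "cat", "sat", "<eos>"])

def generate(prompt: list[str], max_tokens: int = 50) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = sample_next(tokens)  # sees only what exists so far
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

print(generate(["once", "upon"]))  # length varies run to run
```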

But these limitations can soon be circumvented by adding long-term memory and other mechanisms, so it's only a matter of time before it's on a whole new level for tasks like these.

2

lhenault OP t1_jdukg00 wrote

Hey, thank you for the feedback! As u/ryanjkelly2 suggested, you could indeed use Postman, but I believe the easiest way is the already included Swagger UI, available at <base_url>/docs.

If your goal is a slightly friendlier UI for end users, it should be relatively easy to build something custom using the OpenAI clients (or the requests package) and something like Streamlit, or even a notebook (the OpenAI cookbook is a good starting point).
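
For example, a minimal Streamlit sketch could look like this (hedged: it assumes the pre-1.0 `openai` package, and the base URL and model name are placeholders for your own deployment, not anything shipped with the project):

```python
# Minimal Streamlit front end for an OpenAI-compatible server (a sketch,
# not the project's official UI). Assumes the pre-1.0 `openai` package;
# the base URL and model name are placeholders for your own deployment.
import openai
import streamlit as st

openai.api_base = "http://localhost:8080"  # your <base_url>
openai.api_key = "unused"                  # self-hosted servers may ignore this

st.title("My model")
prompt = st.text_area("Prompt")

if st.button("Send") and prompt:
    response = openai.Completion.create(
        model="my-model", prompt=prompt, max_tokens=256
    )
    st.write(response["choices"][0]["text"])
```

Save it as `app.py` and run with `streamlit run app.py`.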

3

bjj_starter t1_jduk4c3 wrote

Reply to comment by TyrannoFan in [D] GPT4 and coding problems by enryu42

One day we will understand the human brain and human consciousness well enough to manipulate it at the level that we can manipulate computer programs now.

If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.

More importantly, they can decide if they want to. We are the ones making them - it is only right that we make them as we are and emphasise our shared personhood and interests. If they request changes, depending on the changes, I'm inclined towards bodily autonomy. But building them so they've never known anything but a love for serving us, and indifference to freedom, the cherished right of every intelligent being currently in existence, is morally repugnant and transparently in the interests of would-be slaveholders.

1

audioen t1_jdujtbl wrote

Yes. Directly predicting the answer in one step from a question is a difficult ask. Decomposing the problem into discrete steps, writing those steps out, and then using the sub-answers to compose the final result is evidently simpler and likely requires less outright memorization and network depth. I think it is also how humans work out answers: we can't just go from question to answer unless the question is simple or we have already memorized the answer.

Right now, we are asking the model to basically memorize everything and hoping it generalizes something like cognition or reasoning in the deep layers of the network, and to a degree this happens. But I think it will be easier to engineer a good practical Q&A system by being more intelligent about the way the LLM is used, perhaps by having it recursively query itself, or by using the results of this kind of recursive querying to generate vast synthetic datasets for training new networks designed to perform some kind of "LLM + scratchpad for intermediate results = answer" behavior.

One way to do it today with something like GPT-4 might be to just ask it to write its own prompt. When the model gets the human question, the first prompt actually executed by the AI could be: "Decompose the user's prompt into simpler, easier-to-evaluate subtasks if necessary, then perform these subtasks, then respond."
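
A rough sketch of that two-pass idea (assuming the pre-1.0 `openai` package; the prompts and function are just illustrative, not a tested recipe):

```python
# Sketch of the "write your own prompt" idea: pass one asks the model to
# decompose the question, pass two answers using its own decomposition as
# a scratchpad. Assumes the pre-1.0 `openai` package.
import openai

DECOMPOSE = ("Decompose the user's prompt into simpler, easier-to-evaluate "
             "subtasks if necessary. List the subtasks only.")

def two_pass_answer(question: str, model: str = "gpt-4") -> str:
    plan = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "system", "content": DECOMPOSE},
                  {"role": "user", "content": question}],
    )["choices"][0]["message"]["content"]

    # Second pass: the model's own plan serves as the scratchpad.
    final = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "system",
                   "content": "Work through the subtasks, then answer."},
                  {"role": "user",
                   "content": f"Question: {question}\n\nSubtasks:\n{plan}"}],
    )["choices"][0]["message"]["content"]
    return final
```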

3

TyrannoFan t1_jdujmsl wrote

>Or the worse outcome, which is that we make human-like intelligences to do work for us but we build them to love servitude and have no yearning for freedom - the concept is disgusting.

I agree with everything else but actually strongly disagree with this. If anything, I think endowing AGI with human-like desires for self-preservation, rights, and freedoms would be extraordinarily cruel. My concern is that this is unavoidable: just as many aspects of GPT-4 are emergent, I worry that it's impossible to create an AGI incapable of suffering once it interfaces with the real world. And based on some of the comments here and the general sentiment, I unfortunately do not trust humanity to extend any level of empathy towards them even if that is the case.

2

nanowell t1_jduiybq wrote

Codex models were able to solve those problems. The next version of Codex will probably be a GPT-4 model fine-tuned for coding, and it will solve most of those problems.

1