Recent comments in /f/MachineLearning

paulgavrikov t1_jdguvmk wrote

Same here, 3 out of 4 reviewers didn't respond. And with all due respect, there is no excuse for that. If you agree to serve as a reviewer, you also agree to do the job faithfully. Unfortunately, that last part is often overlooked.

5

kross00 t1_jdgutr0 wrote

Is it feasible to train Llama 65B (or one of the smaller models) to hold casual chit-chat in a way that would not readily reveal whether one is conversing with an AI or a human? The AI would not need to answer highly complex questions and could decline them much as a human would.
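
If someone wanted to try this, one plausible route is parameter-efficient fine-tuning on casual-dialogue data. Below is a rough sketch using Hugging Face transformers and peft (LoRA); the local weights path, the `chitchat.jsonl` dataset, and all hyperparameters are placeholders I'm assuming, not a tested recipe, and a 65B model would need more careful memory handling (e.g. quantization) than shown here.

```python
# Hypothetical LoRA fine-tune of a LLaMA checkpoint on casual dialogue.
# Paths, dataset format, and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_path = "./llama-65b"  # assumed local copy of the weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto")

# Attach low-rank adapters so only a small fraction of weights is trained.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Assumed data format: one JSON object per line with a "text" field holding
# a single exchange, e.g. "User: ...\nAssistant: ..."
data = load_dataset("json", data_files="chitchat.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chitchat-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1,
                           learning_rate=1e-4,
                           logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```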

1

utopiah t1_jdgu9aa wrote

Does ChatGPT actually do that currently, namely keep track of your past prompts and build a model of your tastes or values, so that "me" here is meaningful?

PS: not sure why the downvote. Is it an offensive or idiotic question?

1

dotnethero t1_jdgqv13 wrote

Hey everyone, I'm trying to figure out which parts of my code run on the CPU and which run on the GPU. During training, I've noticed that GPU utilization sits at only about 5% while CPU usage is high. Any tips on how I can better understand what's going on in my code? Thanks in advance!
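
If this is PyTorch (an assumption, since the framework isn't stated), the built-in profiler will break the time down by operator and device; `train_step`, `model`, and `batch` below stand in for whatever the real loop does.

```python
# Minimal profiling sketch with torch.profiler (assumes PyTorch).
# `model`, `batch`, and `train_step` are placeholders for the real code.
import torch
from torch.profiler import profile, record_function, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    with record_function("train_step"):
        train_step(model, batch)  # one iteration of the actual training loop

# Ops dominated by CPU time rather than CUDA time are what keep the GPU idle,
# often data loading or host-side preprocessing.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))
```

At ~5% GPU utilization the table usually points at host-side work (DataLoader workers, preprocessing, or tensors never moved to the GPU) rather than the model itself.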

1

willer t1_jdgps4b wrote

I read through the docs, and in this release ChatGPT only calls the /query API. So you can't implement long-term memory of your chats yourself, as it won't send your messages and the responses to this service. Your retrieval API acts, in effect, as a read-only store of external memories, like a document library.
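
For anyone curious what that /query call looks like when hit directly, here's a rough sketch; the base URL and bearer token are placeholders, and the request/response shape follows my reading of the retrieval plugin's docs, so treat it as illustrative rather than authoritative.

```python
# Hedged sketch of calling the retrieval plugin's /query endpoint directly.
# URL, token, and schema are assumptions based on the plugin docs; this is
# the service ChatGPT queries, not something ChatGPT itself exposes.
import requests

resp = requests.post(
    "http://localhost:8000/query",                     # placeholder URL
    headers={"Authorization": "Bearer <YOUR_TOKEN>"},  # placeholder token
    json={"queries": [{"query": "notes about project X", "top_k": 3}]},
)
resp.raise_for_status()

# Assumed response shape: one result set per query, each containing scored chunks.
for result in resp.json()["results"]:
    for chunk in result["results"]:
        print(chunk["score"], chunk["text"][:80])
```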

3