MysteryInc152
MysteryInc152 t1_j4lv0d5 wrote
Codex and ChatGPT can understand more than just functions. The issue with them is the limited token window.
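A rough way to see that limit in practice is to count tokens before sending anything. Here's a minimal sketch using OpenAI's tiktoken tokenizer; the 4,096-token limit and the file name are assumptions for illustration, not documented figures for ChatGPT itself.

```python
# Minimal sketch: check whether a source file fits in the model's
# context window before sending it as a prompt.
import tiktoken

CONTEXT_LIMIT = 4096  # assumed window size, not an official figure

def fits_in_window(text: str, limit: int = CONTEXT_LIMIT) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens (limit {limit})")
    return n_tokens <= limit

# "my_module.py" is a hypothetical file standing in for whatever code
# you want the model to reason about.
with open("my_module.py") as f:
    fits_in_window(f.read())
```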
MysteryInc152 t1_j4l8fwz wrote
Reply to comment by [deleted] in [D] Can ChatGPT flag it's own writings? by MrSpotgold
Yeah well, that's not really how these models work. There's no pulling from a database and no external searching; the model was trained and then frozen.
While it's possible future models could access some external database, that's not going to happen for previous chat entries you have no right or access to. That's a privacy can of worms no corporation with any sense will open, and it would be prohibitively expensive for no real gain.
MysteryInc152 t1_j4l38t9 wrote
Reply to [D] Can ChatGPT flag it's own writings? by MrSpotgold
No and no
MysteryInc152 t1_iyots1p wrote
Reply to comment by Ribak145 in Have you updated your timelines following ChatGPT? by EntireContext
What it can do with code is pretty astounding.
MysteryInc152 t1_j50pkxw wrote
Reply to comment by Daos-Lies in [D] Inner workings of the chatgpt memory by terserterseness
With embeddings, it should theoretically have no hard limit at all, but the experiments here suggest a sliding context window of 8,096 tokens:
https://mobile.twitter.com/goodside/status/1598874674204618753?t=70_OKsoGYAx8MY38ydXMAA&s=19
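To make the embeddings idea concrete, here's a minimal sketch of retrieval-based chat memory: each turn is embedded and stored, and only the most relevant past turns are pulled back into the prompt, so memory isn't bounded by the context window. sentence-transformers is a stand-in for whatever embedding model a real system would use, and remember/recall are illustrative names, not an actual ChatGPT mechanism.

```python
# Minimal sketch of embedding-based chat memory: store an embedding per
# turn, then retrieve only the most relevant past turns when building
# the next prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

history: list[str] = []          # past chat turns
vectors: list[np.ndarray] = []   # one embedding per turn

def remember(turn: str) -> None:
    history.append(turn)
    vectors.append(model.encode(turn, normalize_embeddings=True))

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k past turns most similar to the new message."""
    if not history:
        return []
    q = model.encode(query, normalize_embeddings=True)
    sims = np.array(vectors) @ q  # cosine similarity (embeddings are normalized)
    top = np.argsort(sims)[::-1][:k]
    return [history[i] for i in top]

remember("User asked how transformers handle long inputs.")
remember("Assistant explained the fixed context window.")
print(recall("What did we say about context length?"))
```

Only the turns returned by recall go back into the prompt, which keeps the prompt small no matter how long the conversation gets.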