Submitted by fangfried t3_11alcys in singularity
Nukemouse t1_j9thvgk wrote
Reply to comment by beezlebub33 in What are the big flaws with LLMs right now? by fangfried
Pardon me, but isn't the life-long learning one intentional, since they limit its ability to learn? My understanding was that after the initial training it doesn't simply use all of its conversations as training data, to prevent a new Tay.
beezlebub33 t1_j9ue4ov wrote
Slightly different things. That's more the episodic memory.
For life-long learning: no system gets everything right; if the model makes a mistake, say misclassifying a penguin as a fish (not a mistake it actually makes), there is no built-in way for that error to get fixed. Similarly, countries, organizations, and the news change constantly, so the model quickly goes out of date.
It can't do incremental training. There are ways around this: some AI/ML systems do incremental training (there was a whole DARPA program about it). Alternatively, the AI/ML system (which stays stable) can reason over a dynamic dataset/database or go fetch new information; this is the Bing Chat approach. That works better, but anything embedded in the model's own logic is stuck there until retraining.
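The "stable model over dynamic data" idea can be sketched in a few lines. This is a toy illustration, not how Bing Chat is actually built: the dictionaries and the stale fact are hypothetical stand-ins for parametric weights and an updatable store.

```python
# Toy sketch of "frozen model + dynamic data": knowledge baked in at
# training time is fixed, while facts that change over time live in an
# updatable store that is consulted first. All names are hypothetical.

FROZEN_KNOWLEDGE = {                 # stands in for weights fixed at training
    "capital of france": "Paris",
    "uk prime minister": "Boris Johnson",   # stale: frozen at training cutoff
}

dynamic_store = {                    # can be updated without any retraining
    "uk prime minister": "Rishi Sunak",
}

def answer(query: str) -> str:
    key = query.lower()
    # Prefer the dynamic store; fall back to the frozen knowledge.
    if key in dynamic_store:
        return dynamic_store[key]
    return FROZEN_KNOWLEDGE.get(key, "unknown")
```

Updating `dynamic_store` keeps time-sensitive answers fresh, but note the limitation from above: a mistake baked into `FROZEN_KNOWLEDGE` (or into the model's reasoning) can only be fixed by retraining, not by editing the store it reasons over.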