Researchers from Huawei and University College London have developed EM-LLM, an architecture that emulates human episodic memory in large language models (LLMs). The approach lets LLMs handle effectively infinite context lengths without a corresponding increase in compute. EM-LLM segments incoming tokens into discrete events using Bayesian surprise and then refines the event boundaries with graph-theoretic methods, improving LLM performance on long-context tasks, where it outperforms comparable approaches by significant margins.
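The surprise-based segmentation idea can be illustrated with a minimal sketch. This is a hypothetical helper, not the EM-LLM implementation: it treats each token's surprise as the negative log-probability the model assigned to it, and places an event boundary wherever surprise exceeds the mean plus a tunable number of standard deviations.

```python
import numpy as np

def segment_by_surprise(token_logprobs, gamma=1.0):
    """Split a token stream into 'events' at high-surprise points.

    A minimal sketch of surprise-based segmentation (hypothetical
    helper, not the EM-LLM implementation). Surprise is the negative
    log-probability of each token; a boundary is placed wherever
    surprise exceeds mean + gamma * std over the sequence.
    """
    surprise = -np.asarray(token_logprobs, dtype=float)
    threshold = surprise.mean() + gamma * surprise.std()
    boundaries = [0]
    for i, s in enumerate(surprise):
        if i > 0 and s > threshold:
            boundaries.append(i)
    # Return (start, end) spans covering the whole sequence
    return list(zip(boundaries, boundaries[1:] + [len(surprise)]))

# Two unexpected tokens (log-probs -5.0 and -6.0) split the
# sequence into three events.
spans = segment_by_surprise([-0.1, -0.2, -5.0, -0.1, -0.3, -6.0, -0.2])
print(spans)  # → [(0, 2), (2, 5), (5, 7)]
```

In the full method the initial boundaries would then be refined (the graph-theoretic step mentioned above) rather than used directly; this sketch shows only the first stage.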