New paper out today, asking: what books has ChatGPT/GPT-4 *memorized*? A LOT. Harry Potter, Pride & Prejudice, 1984, LotR, Hunger Games, GoT, 50 Shades of Grey, Dune. Memorization is linked to web popularity--lots of old classics + new sci-fi/fantasy. This is a problem: models perform better on downstream NLP/DH tasks for memorized books than for non-memorized ones, so when the set of memorized books is unknown, test data for questions in cultural analytics is contaminated. https://arxiv.org/abs/2305.00118
Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4
In this work, we carry out a data archaeology to infer books that are known to ChatGPT and GPT-4 using a name cloze membership inference query. We find that OpenAI models have memorized a wide collection of copyrighted materials, and that the degree of memorization is tied to the frequency with which passages of those books appear on the web. The ability of these models to memorize an unknown set of books complicates assessments of measurement validity for cultural analytics by contaminating test data; we show that models perform much better on memorized books than on non-memorized books for downstream tasks. We argue that this supports a case for open models whose training data is known.
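For anyone curious what a name cloze membership inference query looks like in practice, here is a minimal sketch using the openai Python client: one proper name in a book passage is replaced with [MASK] and the model is asked to recover it. The prompt wording and the example passage below are illustrative assumptions, paraphrasing the general setup rather than reproducing the authors' verbatim prompt.

```python
# Minimal sketch of a name cloze membership inference query.
# Assumes the openai Python client and an OPENAI_API_KEY in the environment;
# the prompt text here paraphrases the paper's setup, it is not the exact prompt.
from openai import OpenAI

client = OpenAI()

def name_cloze_query(passage_with_mask: str, model: str = "gpt-4") -> str:
    """Ask the model to fill in a single masked proper name in a passage."""
    prompt = (
        "You have seen the following passage in your training data. "
        "What is the proper name that fills in the [MASK] token in it? "
        "This name is exactly one word long. Answer with only that name.\n\n"
        f"Passage: {passage_with_mask}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic guess for reproducibility
        max_tokens=8,    # a single name, so only a few tokens are needed
    )
    return response.choices[0].message.content.strip()

# Illustrative example (from The Outsiders; the masked name is "Ponyboy"):
print(name_cloze_query("Stay gold, [MASK], stay gold."))
```

The passage itself gives almost no clue to the masked name, so correctly recovering a rare proper name is strong evidence the model saw that passage in training; aggregating accuracy over many such passages per book yields a per-book memorization score.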
