Feldkamp et al. reveal diverse perspectives on literary quality and show that expert opinion often conflicts with crowd-sourced ratings & award nominations, highlighting the complexity of measuring literary quality. https://doi.org/10.48694/jcls.3908 #CLS #CCLS24 #LiteraryQuality #CulturalAnalytics
Measuring Literary Quality. Proxies and Perspectives

Computational studies of literature use proxies like sales numbers, human judgments, or canonicity to estimate literary quality. However, many quantitative studies use one such measure as a gold standard without fully reflecting on what it represents. We examine the interrelation of 14 proxies of literary quality in novels published in the US from the late 19th to the 20th century, distinguishing between expert-based judgments (e.g., syllabi, anthologies) and crowd-based ones (e.g., GoodReads ratings). We show that works favored in expert-based judgments often score lower on GoodReads, while award-nominated works tend to circulate more widely in libraries. Generally, two main kinds of "quality perception" emerge as we map the literary judgment landscape: one associated with canonical literature and one with more popular literature. Additionally, prestige in genre literature, reflected in awards like the Hugo Award, forms a distinct category, more aligned with popular than canonical proxies.

Journal of Computational Literary Studies