#DigitalHumanities #LiteraryComputing
Her stay concluded with a workshop at the AI Futures Lab as part of the #CulturalAnalytics series, where she introduced the new add-on #Flexicon for the concordance web application #CLiC.
Dear Michaela, thank you so much for traveling to the I School and for these productive and collegial days of exchange. 💻✍️📚
This paper explores the computational analysis of sound in English-language literary fiction, building on Guhr's (2026) operationalisation of fictional sound events as sound-word-bearing verbal phrases annotated with loudness levels. Originally developed for German prose, the method is adapted here to 19th-century British fiction, using the Dickens Novel Corpus (DNov) as a case study. Rather than relying exclusively on manual annotation, German-language training texts were automatically translated into English with the DeepL API, preserving their XML-based annotation spans. Combined with a single manually annotated English text, these translations were used to fine-tune a pre-trained English BERT model. The results show surprisingly strong performance compared to similar adaptations to other genres in the same target language. The paper discusses the benefits of using translated annotations and examines sound-related patterns across Dickens's novels using a scalable reading approach to DNov.
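The core idea of the translation step — carrying annotation spans through machine translation intact — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the `translate` stub stands in for the real DeepL call (DeepL's `tag_handling="xml"` option is what keeps markup intact in practice), and the `<sound loudness="…">` element is an assumed shape for the annotation spans.

```python
import xml.etree.ElementTree as ET

def translate(text):
    # Placeholder for the DeepL API call. The real pipeline would use
    # something like:
    #   deepl.Translator(key).translate_text(s, target_lang="EN-GB",
    #                                        tag_handling="xml")
    # Here we just upper-case the text as a visible stand-in "translation".
    return text.upper()

def translate_annotated(xml_string):
    """Translate the running text of an annotated sentence while leaving
    the XML annotation spans (e.g. loudness labels) untouched."""
    root = ET.fromstring(xml_string)
    for elem in root.iter():
        # Translate text inside each element and the tail text after it,
        # but never the tags or attributes themselves.
        if elem.text and elem.text.strip():
            elem.text = translate(elem.text)
        if elem.tail and elem.tail.strip():
            elem.tail = translate(elem.tail)
    return ET.tostring(root, encoding="unicode")

src = '<s>Er <sound loudness="loud">schrie</sound> laut.</s>'
print(translate_annotated(src))
# → <s>ER <sound loudness="loud">SCHRIE</sound> LAUT.</s>
```

The point of the sketch is that the annotation boundaries (here, the `<sound>` span) survive the transformation, so loudness labels assigned to the German source can be reused as training data for the English model.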