🇩🇪 Congratulations to Sophie Kergaßner, Nina Doerr, Markus Wieland, Martin Fuchs, Michael Sedlmair, Lesley-Ann Mathis, Carla Bernadette Bubeck, Matthias Peissner, Martin Kocur, Thomas Noack, Valentin Schwind, Johanna Bogon, Niels Henze, Heiko Drewes, Yara Fanger, and Sven Mayer on the Honorable Mention Paper Award! 🎉

https://muc2024.mensch-und-computer.de/de/awards/

#muc2024 #awards #fullpaper

Awards – Mensch und Computer 2024

🇬🇧 Congratulations to Sophie Kergaßner, Nina Doerr, Markus Wieland, Martin Fuchs, Michael Sedlmair, Lesley-Ann Mathis, Carla Bernadette Bubeck, Matthias Peissner, Martin Kocur, Thomas Noack, Valentin Schwind, Johanna Bogon, Niels Henze, Heiko Drewes, Yara Fanger, and Sven Mayer! 🎉 Your paper received an Honorable Mention Paper Award for its outstanding work!

https://muc2024.mensch-und-computer.de/en/awards/
#muc2024 #awards #fullpaper


🇩🇪 Congratulations to Zelun Tony Zhang, Felicitas Buchner, Yuanting Liu, and Andreas Butz on winning the Best Paper Award at MUC 2024 for their paper "You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones". 🎉 Learn more about their outstanding research on Monday (02.09.) in MCI-Paper Session 03.

https://muc2024.mensch-und-computer.de/de/awards/
#muc2024 #awards #fullpaper


🇬🇧 Big congratulations to Zelun Tony Zhang, Felicitas Buchner, Yuanting Liu, and Andreas Butz for winning the Best Paper Award at MUC 2024 for their paper "You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones." 🎉 Hear more about their outstanding research on Monday (02.09.) in the MCI-Paper Session 03.

https://muc2024.mensch-und-computer.de/en/awards/
#muc2024 #awards #fullpaper


🇬🇧 Don't forget, the deadline for full paper submissions is April 11th, 2024 (AoE). Please note that there will be no extensions this year. Best of luck as you finalize your manuscripts! For information about the process, please visit: https://muc2024.mensch-und-computer.de/en/call-for-papers/hci/full-paper/ 📝✨

#FullPaper #MuC24 #cfp

Full Paper – Mensch und Computer 2024

🇬🇧 We are looking for you!

https://docs.google.com/forms/d/e/1FAIpQLSeDjyQ12RsA_8TC-MgBy3rrwtgqJBaiz2usxhRZBdAg9Qah6Q/viewform

If you would like to volunteer as an Associate Chair or Reviewer in the Full Paper Track, please fill in the form by 1 March. We are looking forward to meeting you 🫱🏻‍🫲🏾

#muc24 #callforvolunteers #volunteers #fullpaper

MUC 24 - AC Recruitment - Full Paper Track

On behalf of Katta Spiel, Max Birk, and Jasmin Niess, we are inviting you to volunteer as an Associate Chair (AC) for the Mensch und Computer (MuC) 2024 Program Committee! Please use the form below to express your interest in serving as an AC for the MuC 2024 paper track.

MuC Program Committee (PC) members oversee around 8 submissions through a formal review process. The process will begin after the 11th of April, and the main reviewing tasks will be finished before the 9th of May. However, the PC meeting is scheduled for mid-May, so we might come back to you to revise your meta-reviews before then. As an AC, you are expected to recruit and support 2 reviewers for about 4 submissions (hence, you will have to recruit approximately 8 external reviewers) and act as 2nd AC/reviewer for another 4 submissions.

More details about the MuC 2024 paper track can be found at: https://muc2024.mensch-und-computer.de/en/call-for-papers/hci/full-paper/

Deadline for completing this application form: 1 March 2024

For additional details, you are also welcome to contact the Full Paper Chairs (Katta, Jasmin, and Max) at [email protected].


🇬🇧 The PCS is now online for full papers! Reminder: The deadline is April 11th 2024 (AoE). For information about the process, please see https://muc2024.mensch-und-computer.de/en/call-for-papers/hci/full-paper/. We look forward to your submissions!

🇩🇪 PCS is now open for full papers. Reminder: the deadline is April 11th, 2024 (AoE). All information about the process can be found on our website: https://muc2024.mensch-und-computer.de/de/call-for-papers/hci/full-paper/. We look forward to your submissions!

#callforpapers #fullpaper #muc24


Thematic Analysis of Student Reflections on ProNaja X2 – a Board Game for Learning Programming Fundamentals

Abstract, Introduction, Literature Review, Methodology, Findings, Discussion, References: all done, alhamdulillah. Onward to Merdeka³!!!

#fullpaper #ICTE2023 #UPSI

📜 We compared three event sequence visualization tools via an insight-based crowdsourced study.
✍️ Kazi Tasnim Zinat, Jinhua Yang, Arjun Gandhi, Nistha Mitra, Zhicheng Liu
👉 http://arxiv.org/abs/2306.02489
#Fullpaper #EuroVis #Eurovis2023
A Comparative Evaluation of Visual Summarization Techniques for Event Sequences

Real-world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality as rated by participants and completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest-quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings on developing and evaluating new visual summarization techniques.

📜 GO-Compass: How can we compare lists of apples and oranges, or more specifically, lists of GO terms?
✍️ Theresa Harbig, Mathias Witte-Paz, Kay Nieselt
👉 https://tuevis.cs.uni-tuebingen.de/go-compass/
#Fullpaper #EuroVis #Eurovis2023
GO-Compass – TueVis