CMU Data Interaction Group

@cmudig@hci.social
102 Followers
6 Following
12 Posts
We are a research group at the Human-Computer Interaction Institute (@cmuhcii) at Carnegie Mellon University. Our mission is to empower everyone to analyze and communicate data with interactive systems.
Webpage: https://dig.cmu.edu

I’m at @chi 🌸 next week so come and say hi 👋.

My group, the @cmudig, has a few presentations you don’t want to miss: https://dig.cmu.edu/2025/03/21/chi2025.html.

We will also have awesome folks from Apple 🍎 in Yokohama. Check out https://machinelearning.apple.com/updates/apple-at-chi-2025.

DIG Lab at CHI 2025

DIG lab members are part of papers that will be presented at ACM CHI 2025. Come say hi to Venkat, Will, Yanna, Arpit, and Dominik in Yokohama!


Do you work with text data for your job or research? Join our user study!

✅ Who? Programmers, researchers, or anyone working with text-based datasets

📅 What? 2–3 sessions where you’ll share how you work with text data and try out our prototype tool (up to $60 compensation)

📊 Data? Any text dataset—social media, news, research abstracts, etc.

Your feedback will help our research and shape our open-source visualization tool.

Learn more & sign up here 👉 https://forms.gle/beUBzERwiPQ9iVLEA

Texture User Study Interest Form

We are researchers at Carnegie Mellon University working on a new tool for visualizing text data while programming. We are recruiting participants for a user study to better understand how people program with text data and to gather feedback on our new tool, Texture.

This user study consists of 2–3 sessions:

Session 1: Baseline interview [1 hour]. You will bring a text dataset you have worked with before, show what you were trying to accomplish with the data, and demonstrate how you programmed to achieve this goal.

Session 2: Tool use [1 hour]. We will load your data into our text visualization tool, Texture. You will use the tool to examine and explore the dataset and share your perceptions of it.

Session 3: Independent tool use [Optional]. After the second session, you can optionally continue to use Texture on your own and report on your experience through a survey or an additional interview. Participants who complete this independent tool use will be eligible for additional compensation.

You will be compensated with a $20 USD gift card per session. If you complete all three sessions, you can earn $60 in total!

Eligibility: You are eligible to participate in this study if:

✅ You are 18 years or older

✅ You have experience working with text data for your work AND are able to share a dataset you have used recently in this study

✅ You have experience with Python programming

If you are interested in participating, please fill out the form below and we will reach out. Thanks so much! Other questions? Reach out to Will Epperson at willepp@cmu.edu

We are (virtually) at @ieeevis and have a few cool papers and an award: https://dig.cmu.edu/2024/10/15/vis2024.html.
DIG Lab at VIS 2024

VIS is virtual this year but the Data Interaction Group is still well represented. Please say hi on Discord.

We’re now easier to find in Newell Simon Hall at CMU.

Linked data visualization with Mosaic and #DuckDB

New blog!

@uwdata
@cmudig

https://calcwithdec.dev/posts/linked-visuals/

Calc with Dec 🧮 🤓 - Linked visuals for data exploration 🔍🎯

I'll be at Quack and Code today with @jeffrey_heer discussing #Mosaic, #DuckDB, and fast interactive visualizations on the web. You can be there, too: https://www.linkedin.com/events/duckdb-mosaic-interactiveinsigh7170740303287582720/.
DuckDB & Mosaic: Interactive Insights on Large Datasets | LinkedIn

Dominik Moritz (Carnegie Mellon University / Apple) & Jeffrey Heer (University of Washington) are both professors. They research and develop data visualization tools used by thousands of people around the world! We’ll dive specifically into Mosaic, which leverages the power of #duckdb for snappy, interactive visualization both server-side and in your browser via WebAssembly.

We're hosting a virtual event November 12-14 for prospective human-computer interaction PhD students.

Drop in to meet our faculty, talk to current students, and learn more about admissions (our HCI PhD app is open until December).

Register: https://bit.ly/hci-phd-preview

#CarnegieMellon #cmuhcii #HumanComputerInteraction #HCI #hciphd

HCI PhD Virtual Recruiting Event Registration

The Human-Computer Interaction Institute at Carnegie Mellon University is hosting a preview of our PhD program. This virtual recruitment event will be held on Sunday 11/12, Monday 11/13, and Tuesday 11/14. Please fill out the form below to register. You will receive Zoom information and details soon!


Frank, Will, Catalina, and Dominik will be representing the DIG lab at #ieeevis2023 @ieeevis in Melbourne.

Learn more about our papers and the panels we will be on at https://dig.cmu.edu/2023/10/19/vis2023.html.

DIG Lab at VIS 2023

Frank, Will, Catalina, and Dominik will be representing the DIG lab in Melbourne.

Venkat Sivaraman gave an excellent talk at #chi2023 on how clinicians negotiate with AI when making high-stakes clinical decisions https://dig.cmu.edu/publications/2023-sepsis-ai.html @cmudig @cmuhcii
Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care

Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases. We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term “negotiation.” These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.

Alex Cabrera ( @cabreraalex ) speaks at #chi2023 about an amazing framework describing how data scientists develop mental models of AI behavior https://dig.cmu.edu/publications/2022-aiffinity.html @cmudig @cmuhcii
What Did My AI Learn? How Data Scientists Make Sense of Model Behavior

Data scientists require rich mental models of how AI systems behave to effectively train, debug, and work with them. Despite the prevalence of AI analysis tools, there is no general theory describing how people make sense of what their models have learned. We frame this process as a form of sensemaking and derive a framework describing how data scientists develop mental models of AI behavior. To evaluate the framework, we show how existing AI analysis tools fit into this sensemaking process and use it to design AIFinnity, a system for analyzing image-and-text models. Lastly, we explored how data scientists use a tool developed with the framework through a think-aloud study with 10 data scientists tasked with using AIFinnity to pick an image captioning model. We found that AIFinnity’s sensemaking workflow reflected participants’ mental processes and enabled them to discover and validate diverse AI behaviors.
