After years of building writing support tools, I've always wondered why some people love them (even when they're bad!) while others dislike them no matter what.

That's why I ran this study, now out at #CHI2023:

"Social Dynamics of AI Support in Creative Writing"

🧵

I wanted to go back to basics, and think about computational support from the perspective of existing kinds of support writers tend to get.

When and why might a creative writer turn to a computer versus a peer or mentor to provide support?

I was hoping that this would help me understand why some people love computer help, and others disdain it.

I interviewed 20 writers from a variety of writing genres and experience levels. This included 6 writers currently using SudoWrite, a commercial writing support tool.

I came up with a taxonomy that outlines writers' *desires* for their writing projects, their *perceptions* of support actors (human or computer), and their *values* about what kinds of support interactions are meaningful and appropriate to them.

For instance, writers don't just ask for help. They consider the *availability* of different support actors. Have they asked for help from this person too often? Will their friend get back to them quickly, or in days or weeks? In contrast, computers are often available to help whenever asked, and no one feels like they're asking for help too often.

But writers also think about the individual characteristics of who (or what) is helping them. What's their level of expertise? What lived experience do they bring?

We have sophisticated mental models of people. We are worse at understanding computers. How good is a computer at something? What unique perspective does it bring?

Since there's no such thing as a "perfect" or "universal" perspective, writers wondered how to make sense of a computational perspective.

Writers also talked about the difficulty of communicating their *intentions*, even to other writers. Many writers don't want help early on in a project, when their ideas are too nascent and may be trampled by over-eager feedback or ideas.

But writers worried that computers couldn't understand or respect their intention, especially when it's hard to explain even to other writers. (Writers also said they didn't think computers brought their own intention, which may be a good thing!)

Finally, I'll touch on some concerns about authenticity. Writers worried about how even viewing suggestions can impact their writing. Human help can feel less threatening: because you have a relationship with the person, their help can feel like there's more of *you* in it. On the other hand, computers can feel more private than a person, perhaps more like "talking to yourself."
But writers also had different ideas about where authenticity lies. Some would never get help crafting their plotline, while others thought that was fine but would never get help rewriting their sentences.

Overall, writers develop rich understandings of different 'support actors', and have different ideas about the kind of help they want.

These results point towards some big confounding factors when studying writing support tools!

When we study writing support tools, we need to understand the variety of perspectives writers bring to the very idea of getting help. And we need to acknowledge that writers simply cannot (yet?) understand computational help in the way they understand human help.

This work answered some of my questions, but opened up so many more! As more and more people start to engage with language models as writing support tools, I think we can start asking more sophisticated questions about these interactions.

[end thread]