Unconscious incompetence with technology

I really like this concept, which I was introduced to by Terry Hanley, writing about AI and psychotherapy:

When it comes to artificial intelligence and therapy, I’m increasingly struck by how many of us may be operating in a place of unconscious incompetence. Not through negligence or lack of care, but through familiarity. Therapy has always absorbed new tools, new forms of language, new contexts for relating. Technology, in that sense, can feel like just more background noise – something that sits “over there” in admin systems, appointment booking, outcome measures, or risk protocols.

But, and this is quite a big but, AI is arguably not just another tool. It is quietly reshaping how information is produced, filtered, summarised, and interpreted – including information about people’s distress, identities, and lives. And when something becomes woven into the fabric of everyday systems, it becomes easy not to notice what we don’t yet understand.

Unconscious incompetence is a surprisingly comfortable place to be. If we don’t quite see where AI is operating, or we assume it is neutral, peripheral, or someone else’s responsibility, then there is little immediate pressure to engage. The risk, however, is that decisions about therapeutic work – ethical, relational, and practical – are being shaped in ways we haven’t fully thought through.

https://counselling.substack.com/p/a-new-years-resolution-for-therapy

This is exactly how I’ve always seen the challenge of digital scholarship. What I call technological reflexivity is an antidote to unconscious incompetence: the deliberate practice of a reflective orientation towards the use of technology in your work. Competence can often result from that process, but it isn’t a necessary outcome – what matters is the reflection itself. This maps onto what Terry says here about therapists and AI:

None of this requires perfect knowledge. What it requires is attention, humility, and a willingness to say, “I need to know more about this and understand this better.” This list is of course not comprehensive, but these are some areas that I believe are important for us to have on our radars.

The risk is not that we engage imperfectly, but that familiarity arrives before reflection. Seen this way, moving from unconscious incompetence to conscious competence is less about professional deficit and more about professional positioning. It shows up in small, often unremarkable practices: noticing where technologies are already shaping decisions, being clearer about boundaries in training and supervision, and staying alert to how administrative systems influence therapeutic work.

The phrase “familiarity arrives before reflection” feels like it concisely captures something I’ve been circling around for years without being able to quite express.

#AI #digitalScholarship #GenerativeAIForAcademics #psychotherapy #socialMediaForAcademics #sociotechnicalChange #technologicalReflexivity #TerryHanley #unconsciousCompetence

A New Year’s Resolution for Therapy: From Unconscious Incompetence to Conscious Competence with AI

Counselling and Psychotherapy Stuff

“Generative AI for Academics is a brisk, sensible map for using LLMs in scholarly life”

Another good review of Generative AI for Academics, this time from the data scientist Bruno Gonçalves. Very interesting to see how this has been received from a more technical perspective:

Mark Carrigan’s “Generative AI for Academics” is a brisk, sensible map for using LLMs in scholarly life. It avoids both hype and doom, treating generative AI as a set of tools that demand judgment, not blind adoption. The tone is practical and reflective—ideal for faculty, PIs, and grad students who need shared language and guardrails.

The book shines in how it organizes academic work (Thinking, Collaborating, Communicating, Engaging), then pairs each with concrete practices (rubber-ducking, draft refinement, critical oversight). It isn’t a prompt cookbook or a windy manifesto; it’s a clear framework for responsible use, culture-setting, and policy discussions in departments and labs.

#GenerativeAIForAcademics

Lovely review of Generative AI for Academics

This was such a kind review from Tom Redshaw. I feel a bit conflicted about this book a year on but Tom’s review reminds me of exactly what I was trying to do:

This is most evident in the final chapter, where Carrigan considers the consequences of widespread adoption in teaching and research. He paints a dystopian picture of where academia may be heading, speculating about a ‘coming crisis of scholarly publishing’ (p. 152) as generative AI accelerates output. He warns of a proliferation of low-value work, including ‘spam books’ and ‘spam articles’, as well as the growth of ‘salami-slicing’, where a single study is fragmented into multiple publications. He also highlights the risk of automating editorial processes, further diluting scholarly standards.

Yet Carrigan insists this future is not inevitable. If academics engage with generative AI critically and reflexively, they can help shape more meaningful norms around its use. He concedes that scholars are not the sole drivers of AI integration, given the broader socio-economic forces pushing adoption. But ‘it is within this ambiguous terrain that the norms developed by academics themselves, independently of university rule and policymaking, become particularly important’ (p. 162).

For sociologists, the interlocutor framing resonates with a long tradition of showing how technologies are socially shaped (MacKenzie and Wajcman, 1999). But the book’s appeal extends well beyond sociology. By reframing generative AI as a dialogue partner and urging scholars to share their reflective practices, Carrigan offers academics across disciplines a way to navigate the uncertainty of higher education today.

This was the point of Social Media for Academics as well really: sociotechnical change provides occasions for scholarly reflexivity. Indeed it necessitates it almost by definition. This isn’t sufficient to solve the ensuing challenges (to put it mildly) but I continue to believe it’s a necessary condition for dealing with them within the sector.

#GenerativeAIForAcademics

The visibility of academics will be shaped through LLMs as much as social media in future

This observation by the tech journalist Casey Newton got me thinking about how LLMs are increasingly shaping the visibility of academics:

Thinking models have gotten surprisingly good at identifying potential sources — potentially academic ones. When writing about Grok last month, I wanted to talk to someone who had studied relationships between people and chatbots. ChatGPT led me to Harvard’s Center for Digital Thriving, and suggested someone to talk to, along with their email address. I wound up interviewing them for the piece. The fact that thinking models can quickly analyze the academic literature about any subject and identify prominent researchers on the subject, along with their email addresses and phone numbers, is beginning to save me a lot of Googling.

I realised early on that I was more visible in model responses (ChatGPT and Claude) than other academics of a comparable age, career stage and influence,* which I assumed was because 6000 blog posts hosted on wordpress.com were gobbled up in training. It could talk at greater length, and with more accuracy, about my work than it could about other academics, because my online visibility translated into model visibility.

I suspect this also means I’m more prone to being suggested by the model for a topical discussion, in the way that Casey points to when looking for experts to interview, though I’m unsure how to go about establishing this. Having a long-term blog also means that I figure prominently as a source for ChatGPT and software like Perplexity. Interestingly, I don’t recall ever seeing a single referral from Claude. In the last year I’ve had more referrals to this blog from ChatGPT than from Facebook or Bluesky, though LinkedIn still drives more traffic.

In other words there’s a complex relationship between online visibility and model visibility. Given that online visibility is the key driver which led social media to be institutionalised into higher education in the UK, this is very significant for academic careers even if it takes a long time for it to consolidate into a widely recognised incentive structure.

What other factors lead to increased model visibility? Ultimately this is a matter of visibility within the training data, but the patterns of visibility produced by this are challenging to conceptualise. What are the positive and negative outcomes of increased model visibility? Casey illustrates one in terms of visibility to journalists but there are many others.

*I did this in a very impressionistic way but it would be interesting to do this as a robust quantitative exercise.

This is an interesting overview of the rapidly developing field of SEO for LLMs: https://www.seerinteractive.com/insights/how-to-get-your-brand-in-chatgpts-training-data

#CaseyNewton #GenerativeAIForAcademics #higherEducation #SocialMedia #socialMediaForAcademics #trainingData #visibility #wordpress

What I learned about productivity this year

What I gave up, what I kept, and what's new. PLUS: How I'm using AI

Platformer

A review essay on Generative AI for Academics

Thanks so much to Milan Stürmer for this thought provoking and insightful reflection on generative AI for academics:

However, it might be that these capacities are acquired and maintained through just the kind of reading and writing practices that are in danger of disappearing with the widespread adoption of Large Language Models (LLMs). For those that have acquired advanced levels of literacy and trained their scholarly craft prior to their widespread adoption, the distinction between ‘thinking with’ and ‘substitute for’ might seem much more clear-cut than for those born into the age of LLMs. If and how the practice of ‘thinking with’ can sustain its own condition of possibility is still an open question.

Throughout the book, I find myself agreeing with Carrigan’s (2025) enthusiasm on an abstract level, while remaining consistently unable to engage with conversational agents in an equally meaningful and productive manner. The affordances of GenAI systems just seem much less suited to my own routines and habits. Which confronts me, as a reader, with a conundrum: How far am I willing to change my own practice to better accommodate conversational agents as collaborators? This, unfortunately, is unlikely to remain a question of personal preference. If/as these systems get adopted more widely and the academy accelerates even further, it might no longer be a choice, at least for those without permanent positions.


#GenerativeAIForAcademics #largeLanguageModels #MilanStürmer #technologicalReflexivity

Generative AI for Academics

SAGE Publications Ltd

Thinking With Machines: How Academics Can Use Generative AI Thoughtfully and Ethically

https://www.youtube.com/watch?v=8w56GaAjaP4&t=218s

#digitalScholarship #generativeAI #GenerativeAIForAcademics #largeLanguageModels


Webinar: Is it possible for academics to use LLMs in a responsible and ethical way?

The emergence of ChatGPT and other generative AI tools presents both opportunities and challenges for academia. While these technologies offer powerful capabilities to support scholarship, their thoughtless adoption could undermine the very foundations of academic work. This talk introduces a framework for incorporating generative AI into academic practice in ways that enhance rather than replace human thought. Drawing on extensive practical experience, it demonstrates how conversational agents can serve as intellectual interlocutors rather than mere productivity tools, while examining the broader implications of these developments for the future of universities. There is an urgent need to establish what constitutes responsible and ethical use of LLMs for academics, which means taking seriously the argument that this might not be possible.

Register here: https://digitalsociety.mmu.ac.uk/event/ai-literacies-public-launch/

#GenerativeAIForAcademics

AI Literacies public launch - Digital Society Research Cluster

Location: Brooks Building, Room BR 2.18 and BR 2.19, M156GX Understanding the impact of AI on social, cultural and political life has been at the heart of our new initiative […]


Generative AI and the emergency remote scholarship of the Covid-19 pandemic

This is an extract from Generative AI for Academics

During those moments when change is taking place, it becomes easier to reflect upon the technology our scholarship depends on. We notice it far more during these periods of change than we do once it has faded into the background of our working environment. In his commencement speech at Kenyon College, the novelist David Foster Wallace (2005) began with a parable that has been a repeated favourite of bloggers over the years:

“There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?””

The point Wallace was making is that “the most obvious, important realities are often the ones that are hardest to see and talk about”. For academics, our dependence upon technology is one such reality: it is so intimately relied upon that we easily ignore how integral it is to what we do. We get frustrated when it breaks, upgrade devices in pursuit of better experiences and sometimes talk to each other about practical issues we encounter. There’s a particular sort of infantile rage, which otherwise sedate academics can express when the office printer doesn’t work, that has always fascinated me. But the manner in which our scholarship is digital at this point tends to go unremarked upon, apart from during those times when a dramatic shift is enforced upon us.

The enforced digitalisation of the Covid-19 pandemic was one such event: we all became digital scholars by default because lockdown restrictions squeezed out those remaining arenas which were not entirely reliant on the digital (Carrigan, 2021). But rather than being the prelude to a newly reflective approach to digital technology, the emergency digital scholarship of the pandemic has faded. In using this term I’m drawing a connection to the emergency remote teaching which dominated pedagogy during the pandemic (Nordmann et al, 2020). It was a pragmatic response to circumstance that had little relationship to the rich repertoire of digital education which preceded the pandemic (Weller, 2020). Yet for many academics online learning is synonymous with the hastily improvised Zoom meetings and self-recorded videos of the pandemic, contributing to an understandable impulse to revert to the pre-pandemic norm. The same, I suggest, is true of digital scholarship, with the unwelcome technological reliance of the crisis now shaping the unexamined practice of academics in a hybrid work culture. When we are adjusted to the technical systems we work within, it “fades into the background, forgotten as it disappears into everydayness, just as, for a fish, what disappears from view, as its ‘element’ is water” (Stiegler, 2019: loc 887). But when that adjustment breaks down as the system changes, we are confronted with the fragile nature of the tools we use and our dependence on them. These are moments in which professional cultures can inadvertently establish practices which get locked in before the change dissipates. The challenge of GAI is an invitation for academics to grapple with the digitalisation of their practice more broadly. But the track record in many disciplines and fields does not give cause for optimism.

This matters for academics because technology is a disrupter of professional jurisdiction (Abbott, 1988). Each new development offers alternative ways to address the challenges traditionally within the purview of that profession. By advocating a reflexive approach to GAI, as an interlocutor rather than a tool, I am advocating a creative exploration of how our problem-solving activity might be changed and our professional jurisdiction redefined. This does not mean standardising our use of GAI, which I suspect would be impossible across diverse disciplines and fields, but rather recovering common questions of professional purpose which unite what we do as people who produce and communicate knowledge. While the purposes underlying our work might often recede in the mundane reality of university life, there are nonetheless purposes to research, teaching, service and engagement. These are values which can guide us in a complex and uncertain landscape.

#digitalScholarship #GenerativeAIForAcademics #pandemicUniversity #PostPandemicUniversity


🖥️ Are you running a reading group on Generative AI for Academics?

I’m joining an online reading group in Sweden tomorrow, which has been reading Generative AI for Academics together over recent weeks.

If you’re doing something similar, I’d be happy to come and discuss the book with you – just get in touch here.

#GenerativeAIForAcademics #markCarrigan #readingGroup


Another review of Generative AI for Academics

Really thoughtful and balanced review of Generative AI for Academics from The Sociological Review’s Emma Craddock 😊

This book offers a very thorough and thoughtful consideration of the use of generative AI, particularly ChatGPT and Claude, in academia. It successfully balances intellectually rigorous debate with practical tips and guidance. It will be especially valuable for those unfamiliar with using these tools, while even more experienced users are likely to pick up some new ideas and benefit from engaging with the broader ethical and practical discussions. I particularly appreciated the emphasis on treating these programmes as conversation partners rather than replacements for our own intellectual labour, and the encouragement to use them critically and alongside other forms of academic work. However, significant ethical questions remain, and as the author notes, once you start using AI, it can become hard to imagine working without it. Therefore, I offer a word of caution – think hard before diving in and use this book to help you to assess the benefits and costs, alongside further research.

#EmmaCraddock #GenerativeAIForAcademics #TheSociologicalReview