Unconscious incompetence with technology

I really like this concept, which I was introduced to by Terry Hanley, writing about AI and psychotherapy:

When it comes to artificial intelligence and therapy, I’m increasingly struck by how many of us may be operating in a place of unconscious incompetence. Not through negligence or lack of care, but through familiarity. Therapy has always absorbed new tools, new forms of language, new contexts for relating. Technology, in that sense, can feel like just more background noise – something that sits “over there” in admin systems, appointment booking, outcome measures, or risk protocols.

But, and this is quite a big but, AI is arguably not just another tool. It is quietly reshaping how information is produced, filtered, summarised, and interpreted – including information about people’s distress, identities, and lives. And when something becomes woven into the fabric of everyday systems, it becomes easy not to notice what we don’t yet understand.

Unconscious incompetence is a surprisingly comfortable place to be. If we don’t quite see where AI is operating, or we assume it is neutral, peripheral, or someone else’s responsibility, then there is little immediate pressure to engage. The risk, however, is that decisions about therapeutic work – ethical, relational, and practical – are being shaped in ways we haven’t fully thought through.

https://counselling.substack.com/p/a-new-years-resolution-for-therapy

This is exactly how I’ve always seen the challenge of digital scholarship. What I call technological reflexivity is an antidote to unconscious incompetence, in the sense of deliberately practicing a reflective orientation to the use of technology in your work. Competence can often emerge as an outcome of that process, but it’s not a necessity for it – what matters is the reflection itself. This maps onto what Terry says here about therapists and AI:

None of this requires perfect knowledge. What it requires is attention, humility, and a willingness to say, “I need to know more about this and understand this better.” This list is of course not comprehensive but some areas that I believe are important for us to have on our radars.

The risk is not that we engage imperfectly, but that familiarity arrives before reflection. Seen this way, moving from unconscious incompetence to conscious competence is less about professional deficit and more about professional positioning. It shows up in small, often unremarkable practices: noticing where technologies are already shaping decisions, being clearer about boundaries in training and supervision, and staying alert to how administrative systems influence therapeutic work.

The phrase “familiarity arrives before reflection” feels like it concisely captures something I’ve been circling around for years without being able to quite express.

#AI #digitalScholarship #GenerativeAIForAcademics #psychotherapy #socialMediaForAcademics #sociotechnicalChange #technologicalReflexivity #TerryHanley #unconsciousCompetence

A New Year’s Resolution for Therapy: From Unconscious Incompetence to Conscious Competence with AI

The start of a new year often invites quiet stock-taking.

Counselling and Psychotherapy Stuff

A speculative genealogy of accelerationist perspectives

Increasingly I think it makes sense to distinguish between different accelerationist positions. I rarely use the term to describe my own politics any more, both because I don’t want to risk association with far-right positions and because the potential vehicle for a left-accelerationist politics has been smashed into pieces. But my instincts remain left-accelerationist, in the sense of being inclined to ask how emerging technologies could be steered towards solidaristic and socially beneficial goals rather than being driven by the market. It means insisting we consider the technology analytically, in ways which distinguish between emergent capacities and how those capacities are being organised at present by commercial imperatives. It means insisting we dive into the problems created by emerging technologies, going through them rather than around them, instead of imagining we could hold them back by force of our critique.

In the mid 2010s this felt like quite an optimistic way to see the world, but now it feels like a weirdly gloomy one, because the sense of collective agency underwriting such a future-orientation now seems largely, if not entirely, absent. It’s interesting therefore to see someone like Reid Hoffman, a rare liberal member of the billionaire PayPal mafia, offer a perspective which has some commonalities with this but could rather be described as a liberal humanist accelerationism. From pg 1-3 of Superagency, the book he’s written with Greg Beato:

We form groups of all kinds, at all levels, to amplify our efforts, often deploying our collective power against other teams, other companies, other countries. Even within our own groups of like-minded allies, competition emerges, because of variations in values and goals. And each group and subgroup is generally adept at rationalizing self-interest in the name of the greater good. Coordinating at a group level to ban, constrain, or even just contain a new technology is hard. Doing so at a state or national level is even harder. Coordinating globally is like herding cats—if cats were armed, tribal, and had different languages, different gods, and dreams for the future that went beyond their next meal. Meanwhile, the more powerful the technology, the harder the coordination problem, and that means you’ll never get the future you want simply by prohibiting the future you don’t want. Refusing to actively shape the future never works, and that’s especially true now that the other side of the world is only just a few clicks away. Other actors have other futures in mind. What should we do? Fundamentally, the surest way to prevent a bad future is to steer toward a better one that, by its existence, makes significantly worse outcomes harder to achieve.

The difference here is that he’s envisioning society as made up of more or less self-realised individuals, in a world in which power and vested interests are (primarily, at least) a matter of how those individuals interact rather than an enduring structural context to their interaction. But with this huge caveat, a lot of the assumptions and instincts here are similar to my own. This could in turn be contrasted to Tony Blair’s post-liberal accelerationism, concerned with the role of the state under these conditions:

There’s a similar line of thought in this review by Nathan Pinkoski of Blair’s book on leadership. He describes Blair’s program as a “kind of post-liberal progressive rightism that promises to co-opt the progressive left while crushing the populist right”. Underlying this project is “a commitment to unlimited, unrestrained technological progress, and a belief that this will bring about a better world”.

And we might in turn distinguish this from the libertarian accelerationism of Marc Andreessen, who seems to see little to no legitimate role for the state.

There’s a risk, in distinguishing between these positions, that we take them as doctrines, whereas I think they can better be understood as articulations of underlying instincts and orientations: how technology feels to people and how they feel about technology, their inclination when presented with sociotechnical change, and so on.

#accelerationism #capitalism #ideology #instinct #MarcAndreessen #ReidHoffman #socialChange #sociotechnicalChange #technology #tonyBlair

Was Tony Blair the first effective accelerationist?

I don’t think it’s quite right as a description but I find it hard not to explore the thought after watching this interview: There’s a similar line of thought in this review by Na…

Mark Carrigan

📣 Call for abstracts: "Beyond short-termism: Strategies and perspectives for the long-term governance of socio-technical change"

📅 Deadline: 12 August 2024
https://www.tatup.de/index.php/tatup/announcement/view/59

#LongTermGovernance #SocioTechnicalChange #TechnologyAssessment
@ITAS_KIT

Call for Abstracts: "Beyond short-termism: Strategies and perspectives for the long-term governance of socio-technical change" | TATuP - Journal for Technology Assessment in Theory and Practice

Exploring #SocioTechnicalChange through #visual engagement

This dataset for "Innovation Doesn’t Work" by
@becerra_UNQ
and Hernán Thomas includes a video and presentations 🔄🌐

Access here: http://bit.ly/41USpHx

#STSInfrastructures #STSPedagogies #STSinnovation