The psychic structure of disciplinary imperialism
From Sherry Turkle's classic The Second Self, pp. 229–230:
The first justification for AI's invasions and colonization of other disciplines' intellectual turf was a logic of necessity. The excursions into psychology and linguistics began as raids to acquire ideas that might be useful for building thinking machines. But the politics of "colonization" soon takes on a life of its own. The invaders come not only to carry off natural resources but to replace native "superstitions" with their "superior" world view. AI first declared the need for psychological theories that would work on machines. The next step was to see these alternatives as better—better because they can be "implemented," better because they are more "scientific." Being in a colonizing discipline first demands and then encourages an attitude that might be called intellectual hubris. You need intellectual principles that are universal enough to give you the feeling that you have something to say about everything. The AI community had this in their idea of program. Furthermore, since you cannot master all the disciplines that you have designs on, you need confidence that your knowledge makes the "traditional wisdom" of these fields unworthy of serious consideration. Here too, the AI scientist feels that seeing things through a computational prism so fundamentally changes the rules of every game in the social and behavioral sciences that everything that came before is relegated to a period of intellectual immaturity. And finally you have to feel that nothing is beyond your intellectual reach if you are smart enough.
See also the hostility of digital elites towards expertise.
#dataScience #digitalElites #disciplines #domainExpertise #epistemicHierarchy #intellectualLabour #intellectualLife #work
Three modes of working with LLMs in higher education
I'm enjoying this series by Anthropic, even if it's largely a new language for things I've already argued in Generative AI for Academics. I like their description of three modes of working with LLMs:
In these terms my stance has been that augmentation offers tremendous intellectual possibilities for academic work, but that the political economy of academic labour pushes people towards automation and (eventually) agency. At best these can be helpful to individuals in the short term, but the proportion of automation and (AI) agency in organisations likely correlates with deprofessionalisation, the dehumanisation of working life, and all sorts of incredibly specific pathologies generated as a byproduct of using LLMs.
https://www.youtube.com/watch?v=4szRHy_CT7s&list=PLf2m23nhTg1NjL3-jL3s0qZCYzO07ZQPv&index=3
I thought this was helpful for thinking about different steps in using LLMs:
The problem with systems like Copilot is that they are geared towards simplifying and constraining augmentation while pushing people towards automation and agency. They take responsibility for delegation away from the individual and instead scaffold it through the affordances embedded in familiar software. It's a recipe for outsourcing labour and we shouldn't be encouraging it.
The political economy of these modes is different: description and discernment, as well as augmentation more broadly, presuppose domain expertise and existing practical knowledge. Delegation and automation/agency, by contrast, tend to render that domain knowledge redundant, pushing it aside and ultimately obliterating it as an organisational value.
https://www.youtube.com/watch?v=W4Ua6XFfX9w&list=PLf2m23nhTg1NjL3-jL3s0qZCYzO07ZQPv&index=4
This is exactly what I've meant when I talk about reflexivity in relation to prompting. Perhaps I should drop the (essentially theoretical) language of "reflexivity" and instead talk about "problem awareness" in future training.
#agency #AIFluency #augmentation #automation #domainExpertise #knowledge