Is big tech a driver of inflation?

This blew my mind from Cory Doctorow’s enshittification book (loc 336-348):

Amazon also crushes its merchants under a mountain of junk fees that are pitched as optional but are actually effectively mandatory. Take Prime: a merchant has to give up a huge share of each sale to be included in Prime, and merchants that don’t use Prime are pushed so far down in the search results that they might as well cease to exist. Same with Fulfillment by Amazon, a “service” in which a merchant sends its items to an Amazon warehouse to be packed and delivered with Amazon’s own inventory. This is far more expensive than comparable (or superior) shipping services from rival logistics companies, and a merchant that ships through one of those rivals is, again, relegated even farther down the search rankings. All told, Amazon makes so much money charging merchants to deliver the wares they sell through the platform that Amazon’s own shipping is fully subsidized. In other words, Amazon gouges its merchants so much that it pays nothing to ship its own goods, which compete directly with those merchants’ goods.*

But Amazon’s fee isn’t 10 percent. Add all the junk fees together, and an Amazon seller is being screwed out of 45 to 51 cents on every dollar it earns on the platform. Even if a merchant wanted to absorb the “Amazon tax” on your behalf, it couldn’t. Merchants just don’t make 51 percent margins. So merchants must jack up prices, which they do. A lot. Now, you may have noticed that Amazon’s prices aren’t any higher than the prices that you pay elsewhere. There’s a good reason for that: when merchants raise their prices on Amazon, they are required to raise their prices everywhere else, even on their own direct-sales stores. This arrangement is called most-favored-nation status, and it’s key to the US Federal Trade Commission’s antitrust lawsuit against Amazon.

It’s such an obvious point but it hadn’t occurred to me before. The combination of monopsony and platform infrastructure maximises the capacity of giant buyers to exercise power over suppliers, creating costs which are passed on throughout the system. Big tech is inflationary, and the opacity of platformised pricing makes it harder to measure the impact it is having on the system.

#bigTech #coryDoctorow #inflation #platformisation #platforms #politicalEconomy

Turning ChatGPT into the control room of a user’s digital life

I think this is spot on from Casey Newton about the vision guiding OpenAI’s recent development. It would be easy to read their developments as throwing a million things at the world to see what sticks (social video, online shopping, Pulse, ad tech, etc.) but they are explicitly saying these are all part of a more or less unified vision:

OpenAI seems more likely to monetize its platform through revenue-sharing deals or auctioning off placement. Maybe you ask for help with algebra, OpenAI loops in the Coursera app, and takes a finder’s fee if you become a paid user of the latter.

To OpenAI executives, the move helps them pursue what they describe as the goal they had before they got sidetracked by ChatGPT’s success: building a highly competent assistant.

“What you’re gonna see over the next six months is an evolution of ChatGPT from an app that is really useful into something that feels a little bit more like an operating system,” Nick Turley, the head of ChatGPT, told reporters in a Q&A session on Monday. “Where you can access different services, you can access software — both the existing software that you’re used to using, but … most exciting to me, new software that has been built natively on top of ChatGPT.”

https://www.platformer.news/openai-dev-day-2025-platform-chatgpt/?ref=platformer-newsletter

What will optimisation look like for them on this model? It’s not quite user engagement in the same way as social media platforms but equally there will be an incentive structure facing the firm and a range of data-intensive methods through which to act on these incentives.

And I think he’s right there’s a huge risk of a massive data privacy scandal:

At launch, OpenAI is promising a more rigorous approach to data privacy. OpenAI will share only what it needs to with developers, executives said. (They essentially hand-waved through the details, though, so the actual mechanics will bear scrutiny.) Unlike Facebook, though, OpenAI has no friend graph to worry about — whatever might go wrong between you, ChatGPT, and a developer, it will likely not involve giving away the contact information of all of your friends. 

At the same time, the AI graph may prove even riskier. ChatGPT stores many users’ most private conversations. Leaky data permissions, either intentional or accidental, could prove disastrous for users and the company. It only took one real privacy disaster to end Facebook’s platform ambitions; I can’t imagine it would take much more to end OpenAI’s.

#CaseyNewton #ChatGPT #generativeAI #openAI #platform #platformisation

OpenAI’s platform play

Facebook's social graph went down in the flames of Cambridge Analytica. Will the AI graph fare any better? PLUS: Our new approach to links

Platformer

📣 Platform and Agency: Becoming Who We Are now available

The first chapter is available on Google Books here. Unfortunately the book is going to be expensive in print (though an eBook is available) so let me know if you have trouble accessing it and I’ll do my best to help.

Here’s the introduction to the book:

We live in a digital age. That statement can feel platitudinous, yet it expresses a defining feature of our contemporary world: an era shaped by digital technology, from smart phones and tablets to the consumer-facing internet. While the term ‘digital age’ can obscure the variety of lived experience across different contexts, it also insists upon a horizon of change that exceeds immediate empirical observation. It implies a meta-process that will be difficult to characterise without oversimplifying the empirical complexity which ultimately defines it (Archer 2013). We can point to the rapid expansion of internet access across the global population, the diffusion of smart phones as primary devices, or the rise of social platforms that now dominate what ‘the internet’ means in everyday life. The danger in talking about a ‘digital age’ is that it can obscure the fact that global internet access remains deeply uneven, with many still lacking reliable connectivity. The range of what ‘the internet’ means can too easily be subsumed into epochal generalisations about digital change. However, if we avoid terms like ‘digital age’ we risk failing to grasp an emerging reality which surpasses any single trend. Once you insist on a certain degree of empirical robustness, it becomes difficult to keep hold of the meta-process. 

The starting point for this project is that such a meta-process is unfolding, which we urgently need to grasp but that doing so is an epistemically complex undertaking. These are not isolated or easily quantified phenomena, but rather a qualitative shift in the parameters of social life (Couldry 2020). There is a change in the texture of the social which is widely felt, yet difficult to pin down in a robust or comprehensive way. Nearly three decades ago, Castells (1996: 508) noted the “unseen logic of the meta-network where value is produced, cultural codes are created, and power is decided”, suggesting that this “increasingly appears to people as meta-social disorder”. It is this ‘meta’ level that we evoke by talking about a ‘digital age’, imprecise as that term may be. Only at this higher level can we address how the “parameters of social life – of social interaction and even of socialisation” have begun to shift, rather than confining ourselves to discrete new forms of interaction (Couldry 2024: loc 1174). Otherwise we are left with “the detection of empirical patterns” in which social transformation is inferred when a pattern is “big and bold enough”. These are by their nature perspectival claims, even when methodologically robust in their statistics, relying on ‘striking’ observations which produce an intuitive sense of transformation in the analyst (Archer 2013: loc 1232).

And this is the conclusion:

The problems with the detraditionalisation thesis arose from the grandiose poetics which left it captivated by its own pronouncements about epochal change. For this reason I believe we ought to be as cautious as we can be about declaring an outcome to sociotechnical change, without dispensing with the recognition that there will be an outcome. If anything, the vast investment in LLMs and the data infrastructure which supports them, intersecting with a post-pandemic political economy which appears to be leaving neoliberalism behind, heralds an intensification of change rather than a diminution (Tooze 2021; Varoufakis 2023). It’s possible this might be leading towards a perpetual polycrisis, a social order unable to stabilise itself amidst an accelerating climate catastrophe. But even this doom loop, suggested by Seymour’s notion of disaster nationalism, represents a social order of sorts, even if it’s an apocalyptic one.

It is difficult to incorporate this horizon of crisis into our frame of reference without subordinating our analysis of the interaction phase through which it is being generated. However by approaching platformisation through the concepts of psychobiography and personal morphogenesis, I have argued that we can avoid both grandiose (and premature) pronouncements about a ‘digital age’ and dismissive rejections of the reality of genuine change. The analysis I’ve offered of distracted people and fragile movements explores how platforms reconfigure rather than replace human agency. By examining how reflexivity operates within platformised contexts, tracing its biographical unfolding rather than proclaiming wholesale transformation, we gain a more textured understanding of contemporary social life. This has meant breaking with an account of agency premised, as Savage (2021: 191) puts it, “on this ontological temporal difference between past, enduring structures, and a contemporary contingent agency that breaks from them”. Unless we can surrender this baggage, we are left with a meta-process defined through the falling away of the past, operationalising ‘tradition’ as that which is experiencing a decline and thus squeezing out continuities through definitional fiat. The problem is not an epochal horizon, as much as ontological assumptions which lead to the epistemic mistakes of pronouncing epochal change in a grandiose and premature manner. A realist conception of the platform can acknowledge its emerging status as a condition of our social existence, while remaining clear that it is we who must decide what to make of it.

#criticalRealism #digitalisation #PlatformAndAgency #platformisation

Platform and Agency

This book examines how digital platforms are reconfiguring the parameters of agency and reflexivity in contemporary social life. Drawing on Margaret Archer's social realist framework, it moves beyond treating platforms merely as tools or environments to conceptualize them as distinct sociotechnical structures with emergent properties and powers that shape human action without determining it. The book develops the concept of platform and agency to explore the temporal dimensions of sociotechnical change, tracing how platforms condition personal and collective reflexivity through mechanisms of distraction, cultural abundance, and multiplying communication channels. While affirming the analytical distinction between structure, culture and agency, it demonstrates how platforms constitute a fourth dimension necessary for understanding contemporary social morphogenesis. Through the conceptual pairing of psychobiography and personal morphogenesis, the book offers a nuanced account of how individuals become who they are within platformized lifeworlds. Rather than announcing an epochal break with previous social forms, the analysis illuminates the accumulating consequences of platform mediation across biographical timescales. This book will interest researchers and graduate students in social theory, philosophy of technology, digital sociology, platform studies, media and communication studies, critical data studies, internet studies, surveillance studies, sociology of knowledge, digital anthropology, and social informatics.

Google Books

Hostility to techno-determinism as an intellectual alibi for political and social harms

This aggressive interview with an incredibly defensive Nick Clegg was fascinating as an instance of contemporary tech politics. I was particularly struck by how he explicitly invokes ‘techno-determinism’ to dismiss claims about social platforms generating political and social harms:

https://youtu.be/KAYM1arUzXY?si=RlrfpreRLVNwSFPh&t=1563

He argues it’s “patronising” to claim that platforms have a significant influence because “people have agency”. There’s an obvious straw man here: claims about platform power are construed as people being cognitively hijacked by whatever they find in their feeds*, and that caricature is then used to dismiss any claim about platform power. Likewise with the claim that the technology you use has an “automatic effect” on what you think and believe.

*I do think some people basically believe this. It’s a stupid and dangerous position which we need to oppose.

#facebook #NickClegg #platformPower #platformisation #socialPlatforms

Nick Clegg: What really happened at Facebook?

YouTube

OnlineFirst - "From platform capitalism to strategic place-based platformisation?" by Mike Hodson, Andrew McMeekin and Andy Lockhart:

#Placebased #platformisation #strategic

https://journals.sagepub.com/doi/full/10.1177/0308518X251342914

I want to recommend reading this paper on “Digital infrastructures for education: On sociotechnical entrenchment, pedagogy and the public interest”.

#edtech #datafication #privacybydesign
#platformisation #mediaeducation

https://journals.sagepub.com/doi/pdf/10.1177/14749041251332664

📣 CALL FOR PARTICIPATION:

- Will future digitalised mobility be sustainable?
- Will it be inclusive for everyone in our societies?
- Will it be affordable and really serving the public?

These are some of the guiding questions of the workshop 𝗠𝗮𝗽𝗽𝗶𝗻𝗴 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗼𝗳 𝗙𝘂𝘁𝘂𝗿𝗲 𝗠𝗼𝗯𝗶𝗹𝗶𝘁𝘆 on 26 March 2025 in Prague, organised by Karol Kurnicki and Iwona Janicka.

Interested to join? Register now! ➡️ https://leibniz-ifl.de/en/institute-1/news/dates-and-events/details/mapping-ethical-questions-of-future-mobility

#mobilities #futuremobilities #platformisation #digitalisation

Mapping Ethical Questions of Future Mobility

Workshop on current developments and possible future implications of digitising mobility.

On being realistic with students about platformisation

I’ve increasingly come to believe that studying educational technology without experiencing its constraints, workarounds and breakdowns is like trying to learn to swim by reading books about water. The lived reality of platforms is filled with minor breakdowns, awkward compromises and institutional constraints which shape how they’re used in practice. Pretending otherwise just perpetuates the fantasy of seamless platformisation which tech firms cynically sell to educational leaders who ought to know better.

Rather than hide these challenges, we should embrace them as learning opportunities which illuminate how technology actually gets used in educational institutions. This means being open with students about our rationales and constraints, explaining why things don’t work as planned and helping them understand the complex reality of implementing digital education. Otherwise we risk producing graduates who understand the theory but lack the practical wisdom needed to navigate the messy reality of educational technology. The alternative is to perpetuate unrealistic expectations which set people up to fail when they encounter the reality of working with technology in educational institutions.

#digitalisation #educationalTechnology #pedagogy #platformUniversity #platformisation #platforms #teaching

This is a useful concept from Andrew Dryhurst in a recent paper in JCR. I’ve been prone to arguing for the same framing by talking about the need to historicise AI, in terms of a broader history of digitalisation and then platformisation. I think Dryhurst’s framing helps me account for how a particular framing of AI emerges from a failure to historicise it, while also contributing to making it more difficult to do so in the future:

Traditionally, a large amount of philosophical functionalism has pervaded the AI space (Bryson 2019; Searle 1984), which has served to underpin an instrumentalist understanding of AI technology in much of the social science literature on the topic. Instrumentalism here refers to AI being understood as a tool and solely in terms of what it does. This is of course necessary at a certain level, given the wide-reaching scope of AI-use cases, the diversity of models and training sets, and the opacity that frequently surrounds AI’s societal deployment (O’Neil 2017). Nevertheless, instrumental notions of AI are inescapably presentist in their analytical scope, and it is important to consider that different AI are themselves embedded in an enormous variety of material relations and processes. AI are constructed and deployed by agents who are imbued with their own structural and institutional contexts, interests, ideals, and situational logics. A particular company’s AI systems are necessarily intertwined with the dynamics of (inter)national regulations, supply chains, and (national) accumulation regimes, as well as corporate agents’ reflexive and culturally conditioned actions in and through time. That is, AI are open complex systems embedded in other open complex systems.

https://www.tandfonline.com/doi/full/10.1080/14767430.2023.2279950#abstract

I think you can make this point without the CR vocabulary but it is a very important point which is very powerfully made here:

there is a research gap to be filled through tracing AI’s conceptual and material development in relation to the morphogenetically derived systemic imperatives traversing the political economy of the Internet and its history. For example, the ubiquitous deployment of AI models across all aspects of society presupposes questions about attribution concerning the datasets that are fed into different models; the transparency of data collection and processing; and the complex regulatory challenges that widescale AI deployment creates

And this is exactly what I’m interested in addressing, particularly the notion of models as cultural technologies, even if I arrived there through a slightly different route:

Similarly, the recursive and emergent consequences of people’s interactions with powerful AI models across industry and society make the models akin to cultural substrates from which particular worldviews may be inscribed and cultivated. To paraphrase Marshall McLuhan, the model may well be the message (Bratton and Agüera y Arcas 2022). All of these connote significant economic and social outcomes, and also exemplify a situation where the rise of powerful AI companies, possessive of their own intellectual property, datasets, and modelling practices, ought clearly to be situated within the accumulation imperatives and systemically persistent dynamics shaping the Internet’s development in capitalism because they are intertwined with and shaped by AI’s regulation and deployment as well.

https://markcarrigan.net/2024/08/01/against-an-instrumentalist-understanding-of-ai-critical-realism-and-conceptualising-artificial-intelligence/

#artificialIntelligence #digital #morphogenesis #MorphogeneticApproach #ontology #philosophyOfTechnology #platformisation #platfromCapitalism

I’m in total agreement with Carlo Perrotta here that custom GPTs and AI agents constitute a familiar platform economy being cultivated by OpenAI:

In all scenarios, from the lowest API access tier to the highest enterprise one, proprietary assets and infrastructure must be hired from OpenAI’s closed development environment according to a Software as a Service (SaaS) model. Consistent with this model, monetisation may occur in two ways: on a revenue share basis and/or through the payment of licensing fees. In the case of custom GPTs, OpenAI operates as a traditional intermediary platform retaining total control over a single point of access: a paywall. Users pay directly OpenAI to use a Custom GPT and a portion of that revenue goes to the developer. In the case of fully custom AI assistants developed through an enterprise license, organisations pay OpenAI for API access and data control but are then free to either charge directly their customers for usage, or in the case of the universities mentioned previously, to offer custom affordances for administrative staff, research and teaching staff, and students.

https://automatedonline.org/2024/07/12/the-platform-economy-of-genai-in-education/?trk=feed_main-feed-card_feed-article-content

But I think Carlo’s observation about the lack of uptake of education GPTs is more broadly true. As far as I can see OpenAI aren’t publishing usage data. Furthermore, the developer forums seem to be full of conversations in which people are asking for more, and less opaque, metrics. My experience of trying GPTs has been that unless they serve an extremely specialised function (e.g. producing diagrams), usually involving calling on an external service, it’s quicker and easier to just use the core model, at least if you’re familiar and comfortable with prompting. But if you’re not familiar and comfortable with prompting, you’re unlikely to be delving into an aspect of ChatGPT which likely seems quite arcane to many end users. Furthermore, the rapid development cycles mean that specialised functions are being incorporated into the main models quickly, e.g. GPT-4o can produce a flow chart just as well as a specialised GPT I used to rely on. It’s an accelerated version of the familiar tendency for platform operators to use their epistemic privilege to see what works and steal it for the core product, even if that might not be an intentional strategy in this case.

For this reason I think we should be careful about saying this is a platform economy. It has features which suggest one is emerging, but it also has aspects which don’t fit this picture. I’m not sure we really know what the model is yet, nor do the firms themselves. They’re throwing things at the wall in the hope something will stick, while being so overflowing with capital that there’s no real pressure yet to define a longer-term commercial strategy. Which means I think this is astute analysis from Carlo, but one which perhaps overstates how defensive OpenAI is being in its current moves:

Despite being the place where the memo originated, Google is arguably a case apart because its interest in AI, while enormous, is somewhat ancillary to its core businesses: search and cloud infrastructure. However, as far as Open AI is concerned, a moat is definitely being built following a textbook implementation of platformed and infrastructural monopolism: the tiered licensing structures, the timid attempts to launch an “app store” of custom GPTs based on revenue sharing, and the creation of an enterprise-level ecosystem where large and medium-sized organisations become invested in – and dependent on – a proprietary environment.

Open AI’s retrenchment into the comfort of familiar platform economics can therefore be read as a defensive and conservative move that hides a growing anxiety about the real-world viability of generative AI, with companies and users beginning to realise the limitations of a technology that promised to deliver “magic” through universal applicability and knowledge but is proving tricky and laborious to tame

https://automatedonline.org/2024/07/12/the-platform-economy-of-genai-in-education/?trk=feed_main-feed-card_feed-article-content

It will be interesting to see how differentiation happens across the competing firms, because Claude, Copilot and Gemini appear to be developing in slightly different directions, reflecting the operators’ varied positions in relation to end users. I think we should be sensitive to the emerging platform economy cultivated by OpenAI, but there’s a risk that applying the conceptual framework of platformisation (at this stage) could close down as much as it opens up analytically. For example, I’m not sure I see how this constitutes a moat for a particular platform, or at least not an effective one, as much as a rapid institutionalisation of a cluster of technologies:

The universities inviting research and teaching staff to identify and test application scenarios for generative AI; the scores of custom GPTs dedicated to various aspects of education, from language learning to research literature summarisation and essay writing; the tech-savvy educators and consultants developing curricula and models of professional practice. All of it represents “epistemological” free labor that creates the much-needed network effects underpinning crowdsourced value creation – value which will be captured and monetised when the time is right.

It matters analytically, among other reasons, because of the space for agency left in these competing perspectives. There’s little room for professional steering of moats, whereas there’s a lot of room for professional steering of institutionalisation processes. There’s a broader issue here, which I’ve intended to write about for ages, in which the structuralist tendencies of the platform studies literature are being exacerbated by how it’s taken up within education, often in ways which intersect with the epistemological apparatus of ‘critique’ in a manner which renders agency opaque. This is a major theme in the monograph on the Platform University I’m working on with Susan Robertson, Michele Martini and Hannah Moscotvitz, but I’m increasingly keen to put together a paper on this in the meantime, identifying how there’s a much broader tradition of platform studies which would be very fruitful for digital education researchers.

https://markcarrigan.net/2024/07/22/some-thoughts-on-the-emerging-platform-economies-of-generative-ai/

#ChatGPT #openAI #platform #PlatformUniversity #platformisation

The platform economy of GenAI in education

I have been out of the social media limelight for the past 6 months.  In the face of the incessant and at times overwhelming discourse around GenAI I reacted – like I suspect many colleagues …

automatED