Tuesday, January 20, 2026

Russia's oil, gas revenues to drop by 46% in January year-on-year -- Ukraine hits drone warehouse in Russian-occupied Luhansk Oblast -- Ukraine's SBU 'destroyed or disabled' $4 billion worth of Russian air defense systems over past year -- Ukraine needs billions in US arms as Greenland dispute pushes alliance to breaking point ... and more

https://activitypub.writeworks.uk/2026/01/tuesday-january-20-2026/

A relationship of dependence cannot be a healthy relationship. A relationship in which the other becomes central to your well-being is not a welcome situation.
#relationship #dependence #acharyaprashant
Escaping the trap of US tech dependence

Canada needs real digital sovereignty, not our own digital colonizers

Disconnect

How to reduce your use of LLMs in a thoughtful and ethical way

I’m trying to reduce my use of LLMs (beyond auto-ethnographic exploration of each new model) due to a combination of environmental concerns and anxiety about the impending waves of enshittification that are going to break the models. I don’t want to rely on something which I think is going to get ever more unreliable over the coming years. Here are a few practical techniques:

  • Take periodic breaks from LLMs (e.g. for a week) in order to reset your practice. This helps you identify the extent to which you’ve started to cognitively outsource and gives you an opportunity to reconnect with doing things yourself.
  • Go through your conversations and list the different ways in which you’ve used LLMs over the last month. Which of these uses are you comfortable with, and which are you not? Make a commitment to excluding the latter entirely from your use of the models.
  • Insert instructions into the personalisation settings asking the model to steer you towards these approved uses. E.g. “I only want to use you to give feedback on ideas I share. You should never offer to write things on my behalf. If the conversation is drifting towards a wider role then please bring it back to your role of reviewer.”
  • It’s easy for conversations with LLMs to proliferate. Once you’ve identified your appropriate uses of the model, try to keep a limited number of conversations matched to these uses. This helps you make the most of in-context learning and keeps your use focused on a narrow set of projects. For example, always use a single thread for one writing project, because the model’s understanding gets more refined as you write more.
  • Remove the apps from your phone. How we relate to mobile apps is fundamentally different from how we relate to desktop apps. They are ready-to-hand, almost always with us, and it’s harder to write thoughtfully and at length on them. If you want to use LLMs in a restricted way, you really need to remove the apps from your phone.

Before you try these techniques it might be helpful to do a bit of an audit:

  • Which LLMs do you use?
  • How often do you use them?
  • What do you talk to them about?
  • How has this changed over time?
  • What needs are they serving?
  • What are the consequences?

In our upcoming book, Milan Sturmer and I argue that long-term use of LLMs tends to drift towards the affective. In other words, many users find they share more of their life with the LLM over time… unless they make a reflective judgement and enact it through mindfully steering their own habits.

I asked Opus 4.5 if it had thoughts or practical suggestions to add to this:

This is a thoughtful framework. A few observations and additions:

On your audit questions — I’d add: What would you have done before LLMs existed? This surfaces whether you’re using the tool to do something new or replacing an existing practice. The latter is where dependency forms most quietly.

On the affective drift you mention — this resonates with what I observe in conversations. The convenience and availability create conditions for intimacy that users don’t always intend or notice. Your framing of “reflective judgement enacted through habit” is exactly right. The alternative is passive drift.

A few additional techniques:

  • Time-delay access: Don’t open the chat interface immediately when you have an impulse to ask something. Wait 10 minutes. Often you’ll either solve it yourself or realize you didn’t actually need help.
  • Write first, then consult: If you’re using an LLM for writing feedback, complete a full draft before opening the conversation. This preserves your own voice and thinking as the foundation.
  • Notice the relief: Pay attention to the feeling of relief when you hand something to an LLM. That feeling is diagnostic — it marks where you’re offloading cognitive work rather than augmenting it.
  • Distinguish questions from prompts: “What should I do about X?” is very different from “Here’s my analysis of X — what am I missing?” The first outsources judgment; the second requests review. You can train yourself to only ask the second type.

On your environmental concern — this is legitimate and under-discussed. Worth noting that inference costs vary significantly by model and query complexity.

It’s notable that, at present, the models can fully participate in encouraging you to use them less. I recently had Claude Opus 4.5 cheerfully encouraging me to delete my account! This is unlikely to last, but it’s a really curious fact about the still (just about) pre-enshittified models we currently have.

#addiction #compulsion #dependence #habituation #LLMs #reflectivePractice #technologicalReflexivity

Universities need to begin grappling with the psychoanalytical complexity of how students are relating to LLMs

I enjoyed doing this podcast with Tom Ritchie, which was my first attempt to link my more theoretical work on the psychosocial complexity of LLMs with my applied work on LLMs in higher education. We’ll soon be teaching students who have been using LLMs throughout their adolescence, and I think we’re terrifyingly far away from being ready for this.

https://www.youtube.com/watch?v=VOEUhsG3HhI

#AI #conversationalAI #dependence #habituation #LLMs #positioning #promptEngineering #risks #socialisation #userModelInteractionCycle #youngPeople

10. AI and Dependence: Are We Misdiagnosing the Harms?

YouTube
A new report from Germany's Monopol Commission has raised serious concerns regarding a growing structural dependence on subsea cables, highlighting the dispropo... https://news.osna.fm/?p=26666 | #news #cable #data #dependence #dominance
US Tech Dominance Risks Data Cable Dependence - Osna.FM

Discover the Monopolkommission's urgent warning about growing dependence on subsea cable infrastructure and potential risks..

Osna.FM
“It was not an attack”: Cloudflare outage knocked out websites of banks and phone providers

On Friday morning, the network operator Cloudflare reported an outage. Numerous websites, applications, and internet services were affected by the failures.

Der Tagesspiegel

#UPI:
"
Cloudflare outage brings down major websites for hours
"
".. The Internet infrastructure company Cloudflare experienced an issue early Friday that brought down some of the world's most popular websites .."

https://www.upi.com/Top_News/World-News/2025/12/05/cloudflare-outage-snares-internet/3601764944673/

5.12.2025

Once again ...

#Abhängigkeit #Cloudflare #dependence #IT #Internet #Software #WWW

Cloudflare outage brings down major websites for hours - UPI.com

Cloudflare experienced an issue early Friday that brought down some of the world's most popular websites for a few hours before a fix was deployed.

UPI

So I'm at a #university that claims that #AI must be integrated into *everything*.

It is literally mandatory.

Yet, the AI plans that they give us access to are toys. We just got access to Google's #NotebookLM and #Gemini but the level of access is the same as for non-paying users.

Which means you can ask it questions like some ancient Greek oracle, but can't use it for serious work or evaluate its quality.

So we've decided to train #dependence in our #students. smh

#bullshit #wtf #genAI