@simonmic Hi Simon, I just saw your hledger announcement and read a little, including your justification for AI coding assistance.

I must say that hl looks interesting and I've noted it for the future, and from your feed I see we share values. Yet again, though, I find someone who seems a lot like me, but whose adoption of LLMs I don't get.

So I wanted to pick up on an area in your list of reasons. You may not wish to debate this, or even read it, and that's fine. I'm no bully or ideologue.

1/

@simonmic I don't necessarily accept the other points, but one of the things that puzzles me is folk who demonstrate an understanding of the problems we face wrt democracy, online surveillance, profiling, and the concentration of power through the enclosure of everything online, then go on to fan the flames of the AI bandwagon *and even* hand money over to those who will become ever more powerful in the process.

"But local LLMs," some cry, and you mention this.

2/

@simonmic

You admit local LLMs won't cut it for a while. I contend that, in the same way you say generalised computing always wins out, local LLMs will never really cut it. They will have uses, but much more limited and niche ones, in the same way that many local things are. Which is not itself a problem, but it should be noted.

As a result there will be a large gap between those using local models and those subscribing to LLM services, which centralises power.

3/

@simonmic

Also, local doesn't mean safe, because the models themselves will mostly, if not exclusively, be built by those with massive resources: the same people who own the subscription services. They can use their models to influence all users, not just subscribers, and those users will not realise the trap they are in until it has sprung.

Which will be #enshittification cubed, and it will hurt everyone, because of the universality of application and the power this bestows on the model owners.

4/

@simonmic Personally I can't make a case for any use of this technology at the moment. I think utility is the most convincing argument, but a flawed one when you dig into it, for software development at least.

In so many respects it is in any case countered by downsides that affect the users as much as everyone else, when I examine how this plays out and all the unintended consequences of this path.

5/

@simonmic

I like the metaphor of #LLMs as an addictive kind of asbestos, but with greater reach in terms of harms, because the harms can be tailored and directed by the manufacturer, for every individual, in real time.

Did you know that the first health concerns about asbestos were being raised in 1907, yet we were still legally putting it into UK buildings until 1999?

Thank goodness asbestos wasn't anything like as harmful as so-called AI.

That's why I speak about this.

/End