With more and more web tooling being infected by genAI, I figured it's time to look for a low-complexity, minimal-dependency way to build my upcoming blog. So I spent the last couple of days cooking up a simple little template for building a blog with only a Makefile and some Lua.
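For illustration, the core of such a setup might look like the sketch below. This is a hypothetical example, not the template's actual Makefile; the `render.lua` script, the `posts/` directory, and the template path are all assumptions made for the sake of the sketch:

```makefile
# Hypothetical sketch: turn every Markdown post in posts/
# into an HTML page in out/ via a small Lua render script.
POSTS := $(wildcard posts/*.md)
PAGES := $(POSTS:posts/%.md=out/%.html)

all: $(PAGES)

# Rebuild a page whenever its source, the render script,
# or the page template changes.
out/%.html: posts/%.md render.lua templates/post.html
	@mkdir -p out
	lua render.lua $< templates/post.html > $@

clean:
	rm -rf out

.PHONY: all clean
```

The appeal of this approach is that `make` already handles incremental rebuilds and dependency tracking, so the Lua side only needs to do the actual rendering.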

It works pretty well, so here it is, available for use under the 0BSD license:

https://codeberg.org/lhinderberger/minimalistic-blog

#FOSS #blogging #noai #antiai


I also added a little AI refusal rationale:

Generative AI is a highly problematic technology for numerous reasons. Just to name a few: it uses lots of resources, its results are nondeterministic and deceptive, its costs are externalized, knowledge and economic power are monopolized, and labor rights are undermined. By facilitating intellectual laziness and reinforcing bias, amongst other things, generative AI actively contributes to the rise of fascist politics and the destruction of democracy. 1/2

In the context of Free and Open Source Software in particular, generative AI obfuscates code provenance, introduces dependencies on proprietary datasets and SaaS products and endangers software quality, security and freedom. The only ethical answer to this is to refuse the use of generative AI.

Feel free to use this for your own projects. 2/2

@lu_leipzig

Well put.

I’ll give you that—I see all the issues too.

Personally, though, I’ve come to a different, much more complex conclusion. Take Mistral (France) as an alternative to U.S. AI companies, for example.

But I don’t want to start a discussion right now.

I like your proposal; AI is a huge problem in the open-source community (and in other communities).

@sam4000 Where an AI company is located makes very little difference, mainly because AI companies work hard on quickly becoming "too big to fail", thus undermining regulation. Besides, all the problems mentioned remain, regardless of location.

@lu_leipzig

I doubt that generative AI will ever disappear from our lives, even if the AI bubble bursts.

The question is simply how we can make it as ethical and safe as possible, which includes European sovereignty. That brings me back to Mistral.

I doubt AI can replace humans, but it can certainly take a lot of work off their hands. A new problem: blind trust in AI.

"Too Big to Fail" is a U.S. problem, which is why Mistral publishes its models as open-source. We must Unplugged Big-Tech

@lu_leipzig

A few examples of AI in use today:
- Finding zero-day vulnerabilities, and in some cases exploiting them

- Rapid analysis of large amounts of data (Palantir, sometimes in combination with LLMs) for military purposes

In addition, U.S. companies are trying to evade regulation by moving into space. They would rather clutter space with data centers than be subject to regulation.

AI companies are deeply entrenched in the military and the economy, so there's no way they're going to leave.

@lu_leipzig

Europe should use existing tools to stand up to these corporations, including hefty fines.

Right now, we’re doing the exact opposite—just so we don’t upset Donald.

-----

I hope I’ve briefly outlined why I’ve come to the conclusion that AI is not going anywhere. I don’t want to spam you, after all.

@sam4000 AI is not here to stay, because it's not profitable and probably never will be; read some Ed Zitron. Too big to fail is not a US-only problem; see the bank bailouts in Germany in 2008. An open company can easily be turned into a closed one; see OpenAI. There's a difference between traditional machine learning, data mining and generative AI; genAI fans tend to attribute the successes of the former to the latter. And data centers in space are a libertarian pipe dream, just like private cities.
@sam4000 Apart from that, all my initial points still stand, and your answer is to claim there's somehow a way of making them all go away and using genAI ethically. There isn't; many of the problems mentioned are inherent to the technology, and no amount of R&D, "digital sovereignty" or good governance will make them go away. Now stop relativizing genAI in my mentions or be blocked.

@lu_leipzig

To be honest, I’d actually prefer it if genAI failed.

If it were to succeed in becoming financially viable in the long term, it’s extremely likely that it would be US companies that survive. This is particularly true given that companies like Microsoft, Amazon, Google and Meta have an interest in the technology and are backing it. Big Tech is betting on technology that could make them even more powerful.


@lu_leipzig

Personally, I just think that a future dominated by ever-more-powerful Big Tech is more likely.

Let’s just both hope you’re right.

@sam4000

Man, if you want genAI to fail, then stop using it. We all want it to disappear from our lives, and people saying "I'd really want that to happen, but it never will, so I'll just use it" aren't helping the cause.

@lu_leipzig

@fedithom @lu_leipzig

It’s not that simple.

It’s still a tool that can be really useful in certain situations.

Even so, I rarely use it outside of specific projects. My project was a bot that mirrors information from Twitter/X to Mastodon, including new alt text. Hardly anyone wants to use Twitter voluntarily anymore.

The project is working, so I don't need any AI for scripting at the moment.