Jakob Bak

@jakobbak
31 Followers
116 Following
101 Posts
Robotics and Synthesizers. Electro, Techno, Bass

❓Have you noticed that digital products and services are getting worse? So have we!

➡️We have published a report about enshittification: how and why digital products and services keep getting worse, and how we can turn the trend around (hint: open tech, enforcement, public policy++)

Obviously @pluralistic is a big inspiration and help in this work.

More than 80 groups in Europe and the US have joined in a call to action.

More here: https://www.forbrukerradet.no/breakingfree

Enjoy this short film!

Probably mostly not my audience here, but I'm currently looking for a business-side/CEO cofounder for an encrypted radio startup. There are solid funding options, but while I can build the team to make the product happen, I'm not the person to navigate the money side of things. I'm looking for a cofounder with previous startup CEO experience and executive-level experience (although not necessarily CEO) in a hardware startup specifically. EU citizen and resident, although it'll be a remote team, so where is less important.

fuck off ai music

"welcome to the fuck off ai music movement
in this movement we believe that ai music should fuck off.
you are welcome to join if you’d like"

https://fuckoffaimusic.com

via la_mettrie on pmc IRC chan

fuck off ai music

fuck off ai music

fuckoffaimusic

Hacktivists tried to find a workaround to Discord’s age-verification software, Persona. Instead, they found its frontend exposed to the open internet, and that was just the beginning.

https://www.therage.co/persona-age-verification/

Hackers Expose Age-Verification Software Powering Surveillance Web (The Rage)

Programming, writing, math and art are all ways I use to structure my thoughts and to test and cement my understanding.

The idea that these domains are limited by how fast I might type or sketch is fantastical.

In all cases my ability is fundamentally limited by my understanding of the subject before me.

As soon as I understand something, actually know it, then capturing its essence in code, in prose, through theorems or in any medium cannot be a chore, because it must already be done.

Well, today is the day. I'm finally "sorta happy enough to pull the trigger" on publishing the book I've been working on for a very long time. It's a technical history book: by a techie, for techies (although I think that between all the code samples, there is plenty of meat for "tech-adjacent" and "tech-interested" people). It tells the story of the Lisp programming language, invented by a genius called John McCarthy in 1958 and today still going strong (to the extent that many people see it as the most powerful programming language in existence).

And this is a time for shameless self-promotion: even if you don't plan on buying the book, please repost :-). Self-publishing is self-marketing, so there we go.

If you do buy and read it, please let me know how you liked it!

The book landing page, https://berksoft.ca/gol, has links to all outlets where you can buy the book.

To make some of these points more concrete:

A choice is being made when governments & municipalities provide tax rebates and subsidized deals on land and/or energy/water for AI data centers, often to the detriment of the local population. Data centers provide little in terms of employment, and the rising energy prices are carried by everyone else, also impacting other existing businesses. That is before counting other important aspects like re-zoning, noise pollution, construction of supply roads/lines, and (dirty) energy sources, all of which (in total) should be weighed against any promised gains from mass-building such infrastructure. Many of these data centers are built in areas already suffering water stress, which is only going to get more intense!

A choice is being made when education policymakers and universities decide to become AI salespeople and constrain AI education to turning students into effective/uncritical users of AI tech, rather than using higher education as a platform to critically/objectively research and examine this technology and its costs/impacts on many different aspects of society (energy, ethics, inequality, legal issues, security, sovereignty...)

A choice is being made when company bosses force staff to adopt AI or be pushed out, even if this adoption arguably carries at the very least a considerable risk of damaging the long-term health of the company and its employees (via various well-documented risk factors, incl. chronic fatigue or the introduction of company-wide core dependencies on external, unsustainable, VC-backed subscription-based infrastructure/services, whose fees are almost guaranteed to sky-rocket in the foreseeable future). Also worth mentioning here are two new buzzword terms surfacing currently: Cognitive Debt and Semantic Ablation.

A choice is being made by governments adopting AI for policing and preparing for social unrest, investing billions into surveillance instead of lowering inequality and improving social services, social mobility & cohesion (e.g. financed via higher taxes obtained from the super rich).

A choice is being made when investment & grant opportunities for anything but AI-related businesses are deemed too risky and not worthwhile, essentially forcing AI features into any new business idea which requires external financing, thereby funneling more and more people/infrastructure into this growing spiral of dependencies and into this pyramid system of subscription-based computing, with less than a handful of companies at the very top.

Choices...

#AI #HouseOfCards

Reflect Orbital wants to destroy the night sky to deliver "sunlight as a service". SpaceX wants to destroy Low Earth Orbit to launch one million "AI datacentres"

The only way to formally protest these two ideas is to file a comment with the US FCC, which is horribly complicated, but the American Astronomical Society has detailed instructions posted here: https://aas.org/posts/advocacy/2026/02/how-submit-comments-satellite-applications-fcc

Comments due March 6 for SpaceX and March 9 for Reflect Orbital. Write! Write! Write!

AI bros are just loving open source, loving it to death... maybe quite literally! (Godot being the latest popular example[1])

More and more projects are impacted by floods of bogus AI pull requests and the resulting discussions, stealing precious time and nerves from maintainers who could be doing actual productive work. More buggy and insecure software (incl. commercial offerings) due to slopcoding; more websites attacked daily by AI crawlers in desperate search of any new bits (literally) to add to their already astronomically large training data sets, never mind copyright or any other licensing terms, IP laws...

Yet our politicians, regulators, media and even many people in academia continually shove these issues, and any other more urgent criticism of this tech (and its real impacts), to the side, always deemed irrelevant and distracting from all the "amazing" daily breakthroughs being achieved, from the ridiculous amounts of money being made... Blind FOMO and salivation is all there is, without ever publicly questioning/acknowledging/debating from where & whom this data and its monetary and cultural value has been extracted/stolen... Where is the public framing and balanced discussion of this entire industry as the largest wealth & resource transfer and deskilling exercise in our history? Where are the responsible adults with a modicum of critical thinking and foresight in these rooms of power?

Honestly, I still don't fully understand how we got here, with so many morally weak, corrupt, gullible and quite frankly _unreasonable_ and unqualified people in charge in literally every field which matters for some form of just & healthy society to continue (or rather, to still aim to get there in the first place)...

What a house of cards we're all living in, and the winds are rising...

There are so many techniques in the field of machine learning which are truly outstanding innovations, able to provide genuine quality-of-life improvements for the sciences, for the arts, for accessibility/disability etc. However, these developments have not been part of the public AI discourse in the past few years (which is almost exclusively focused on LLMs and has now shifted to their "agentic" facades/wrappers), nor do these techniques require any planetary-scale infrastructure or intellectual/cultural/physical resource theft just to be barely operational... I think it's still important to stress these major conceptual differences, even if it often feels that train left a long time ago and the language/semantics have been co-opted by now...

[1] https://pouet.chapril.org/@dallo/116090489977797402

#AI #HouseOfCards

dallo (@[email protected])

Open-source game engine Godot is drowning in 'AI slop' code contributions: 'I don't know how long we can keep it up' | PC Gamer
https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/

> Projects like Godot are being swamped by contributors who may not even understand the code they're submitting.

#slop #ai #llm #Godot #gamedev #programming #technology #foss #openSource


For some years now, the UI design and interaction methods shown in sci-fi movies have become ever more abstract & complex: driven by minimalist aesthetics utilizing high-res graphics; a design philosophy based on total omniscient access; heavy information density and interactive realtime visualizations (and/or visual ways of browsing) to aid exploration & expose patterns/relationships; layered/spatial layouts capable of customization/personalization and programmability (of sorts); shared state with contextually morphing representations; and gestural/voice controls. All of it designed to empower people in changing (often urgent) situations and enable them to manage/filter a large information space.

Whilst a lot of these fantasy UIs are also just that: eye candy, clichéd kitsch and completely nonsensical graphic design wanks, in the real world mainstream UI/UX design has largely become a bastion of boring blandness. It's caused by a frequently encountered design attitude which outright patronizes potential users and rejects their agency by treating everyone as part of a homogeneous group, with designers only ever wanting to cater to the lowest, most "simple" (but actually simplistic!) approach. Design by template (and I don't just mean the graphic parts). This is amplified by a lack of user interaction research & testing, but also by designing for the lowest common denominators of mobile OS platforms, by the guidelines imposed by app store approval processes etc.

The same patterns are everywhere, from smart watches to ovens, cars, even a lot of games, but they're especially bad on mobile. Entire generations of _people_ (not "users"!) are being conditioned to interact with machines and information systems through endless self-similar sequences of deeply nested menus, popovers/modals, lists & grids and a handful of standard widgets (mostly buttons of some sort), via which every single task, large or small, has to be solved. Scrolling everywhere, and maximizing whitespace of course, because what else are all these modern high-res screens really for? We expect so little by now, no wonder people consider LLMs a magic breakthrough! Meanwhile, basic text editing on a phone is still unfathomably bad and a constant, impossibly fiddly source of frustration... And a general lack of undo/redo too (not just on mobile, also in most web apps).

We have the most capable machines ever, incl. sensors which could be used to augment mobile interactions for so many people. Yet for many of the current breed of "professionals" in UI/UX, the design space and the horizon of imagination of what's possible and/or acceptable has shrunk down to the size of a shoe box, and it's being defended and post-rationalized for the most inane of reasons... Liquid Glass™ on one hand, "LLM-ing all the things" on the other; relying on planet-scale infrastructure to pull off some form of glossy "intelligent assistant/slave" is NOT a scalable or even desirable solution! These are the wet dreams of aesthetes, graphic & product designers who never seem to use their own creations even once (else they would very quickly realize the error of their ways...)

It just doesn't have to be like that! It also wasn't always like that... As with politics, we've allowed the bad-faith and/or lazy players to take over and let them impose their overly simplistic (and frequently inconsiderate, if not abusive) world view on everyone and everything...

"I still have a dream..."

#UI #UX #Interaction #Design #Technology