Into Rust, ML and weird music.

We must assume that Europe is becoming the world's last democratic major hub. Imperfect, perhaps, but democratic.

As such, Europe is the prime target of the new American administration.

Everything this administration will do will be aimed at harming us.

Washington's primary objective is now to hamper Europe's ability to exert a counter-power, even if it means joining forces with the worst tyrants.

We still have allies in the United States, but not within the administration.

For a short moment, ML had a future at bluesky, but the allure of the fascist-owned community square is just too strong. I am disappointed at the weakness on display...

I don't know much about large language models, but this new Chinese one sounds interesting:

"DeepSeek-R1 scored as high as or higher than OpenAI’s o1 on a variety of third-party benchmarks (tests to measure AI performance at answering questions on various subjects), and was reportedly trained at a fraction of the cost (reportedly around $5 million), with far fewer graphics processing units (GPU) that are under a strict embargo imposed by the U.S., OpenAI’s home turf.

But unlike o1, which is available only to paying ChatGPT subscribers of the Plus tier ($20 per month) and more expensive tiers (such as Pro at $200 per month), DeepSeek-R1 was released as a fully open-source model, which also explains why it has quickly rocketed up the charts of AI code sharing community Hugging Face’s most downloaded and active models.

Also, thanks to the fact that it is fully open-source, people have already fine-tuned and trained many variations of the model for different task-specific purposes, such as making it small enough to run on a mobile device, or combining it with other open-source models. Even if you want to use it for development purposes, DeepSeek’s API costs are more than 90% lower than the equivalent o1 model from OpenAI.

Most impressively of all, you don’t even need to be a software engineer to use it: DeepSeek has a free website and mobile app even for U.S. users."

https://venturebeat.com/ai/why-everyone-in-ai-is-freaking-out-about-deepseek/

(1/n)

Why everyone in AI is freaking out about DeepSeek

DeepSeek has a free website and mobile app even for U.S. users with an R1-powered chatbot interface similar to OpenAI's ChatGPT.

VentureBeat

This is quite a neat demonstration of the counter-intuitive dangers of using current ML tech in critical situations.

You'd think that more reflective = more visible, but visibility is a problem only for humans. For ML, more reflective means more unusual, and an unusual input means the system doesn't know what to do.

This creates a whole host of problems. For example, when fashion changes rapidly, systems will get worse at recognizing people.

https://usa.streetsblog.org/2025/01/10/alarming-report-shows-that-two-auto-braking-systems-cant-see-people-in-reflective-garb

Alarming Report Shows that Two Auto-Braking Systems Can't See People in Reflective Garb — Streetsblog USA

The safety strips are useless in the eyes of automatic braking systems on two very popular car models.

If we require AI companies to get permission to scrape content, things are just gonna be the same. They're just gonna make deals with large websites to license the work of their users, like Reddit, TikTok, and such. It'll cost them money, sure, but they'll pay for it, and none of that money will go to artists.

Unless your artworks never ever get published on a large corporate website, mere permission will not save us. If someone reposts your work on X, they're just gonna say "ah, our terms and conditions say we can license this".

But worse, it's gonna make free-software AI very difficult to build, because free projects can't afford such costs. It would essentially consolidate (legally made) AI into a few companies, making AI an even worse monopoly!

OpenAI supports policies requiring works to be obtained with permission because they know it would decimate the competition.

We need to come up with smarter policies to deal with the AI debacle. I don't know what. Just spitballing, we could require models to be released as free software (which would remove their profit incentive), require platforms to obtain informed consent from users, require AI works to be labelled clearly, heck maybe we could ban or restrict the use of power-hungry AI. It should probably be a combination of several of those.

But no matter what, I couldn't care less as to whether AI companies can freely scrape or if they need to pay money to corporations first.

But the most convenient feature of C is line numbering:

https://godbolt.org/z/dfsKGqYGz

5/5

Compiler Explorer - C (x86-64 gcc (trunk))

#include <stdio.h>

typedef int BASIC[];

int main() {
    if ((BASIC) {
        [20] puts("cruel"),
        [30] puts("world"),
        [10] puts("hello"),
    });
}

Is fundamental physics really experiencing a Great Stagnation? In this thread, let's look at the history of fundamental physics from the dawn of the 20th century to now.

Experimental discoveries that were later accounted for by theories will be shown in yellow. (Mustard?)

Theoretical predictions that were later confirmed by experiment are in green.

Experiments that confirmed theoretical predictions are in black.

Experiments that are still not accounted for by theories are shown in red.

(Yes, I'm a theorist. So to me, green means "success!" while red means "hey, we gotta do something here!")

One could argue endlessly about what to put on this list, and also what counts as "fundamental" physics. To me, the "fundamental" laws of physics are those that *in principle* we could use to compute all the physical quantities that we can compute at all.

The words "in principle" are carrying a lot of weight here. There are many laws, like formulas for turbulent fluid flow or masses of short-lived particles made of quarks, that we can't yet derive from the so-called "fundamental" laws. Yet most physicists think these are just signs of limitations in our ability to work with the fundamental laws, not new fundamental laws.

There is a long conversation to be had here about computability, chaos, etc. But that's not what these posts are about! Let's go back to the turn of the 20th century, and see how fundamental physics has grown since then.

Actually we should start in 1897.

(1/n)

“Inefficient, yet straightforward” is a perfectly acceptable engineering philosophy.
Also works for a personal brand.

Will the "no" win this poll?

Boost if you like formal logic.

yes
53.5%
no
46.5%

Wikipedia is a problem for Musk/Trump. Not, as Musk says, because it's "woke." Because it's one of our last reliable tethers to a consensus reality. Therefore, an antidote to disinformation.

It's not making anyone money. It's not enshittified. Of course it's not perfect—it's an endeavor of imperfect cooperating humans. But it needs protection and support.

https://www.newsweek.com/elon-musk-takes-aim-wikipedia-fund-raising-editing-political-woke-2005742

#wikipedia

Elon Musk takes aim at Wikipedia

Musk has denounced Wikipedia as "Wokepedia" on X and urged people not to donate to the platform.

Newsweek