Blume

@BiNotBoth@queer.group
249 Followers
161 Following
3.3K Posts
likes to make bad puns and good things happen 💪 | white and cis | viruses aren't only a threat to the already disabled 😷 | avatar by @arocalyptic
Pronomen/Pronouns: sie, ihr / she, her
🗺️Berlin (Posts in German and English)

The president of the farmers' association sees scope for exemptions from the minimum wage for seasonal workers.

They slave away for up to 12 hours a day with no protection against dismissal, the bunk in the barracks and the daily meals are deducted from their wages up front, and yet the minimum wage is supposedly too much 🤮.

Incidentally, the whole idea originated in the ranks of the #AfD - and now the #Bauernverband is proposing it.

The seeds are sprouting.

#Agrarminister
#Deutschland
#Politik
#Mindestlohn

https://www.tagesschau.de/inland/innenpolitik/agrarmininisterin-saisonarbeiter-mindestlohn-102.html

Agriculture Minister Rainer open to minimum-wage exemptions for seasonal workers

Only 80 percent of the minimum wage for seasonal workers - that is what the Bauernverband is proposing. Agriculture Minister Rainer is open to the demand. The SPD and the trade unions responded with sharp criticism.

tagesschau.de
Wanted: a platform where I can enter when I finished school and then be shown - sensibly sorted by subject area - which of the things I learned back then as the state of scientific knowledge have long since been superseded. And which then gives me the chance to learn what today's scientific consensus on them is.
How did I get onto this? My eldest takes her Abitur next year, and there are subjects like biology where what I learned is now completely outdated.
Repost if you can hear this image blaring

Hello everyone!
We are #neuhier (new here)!

People affected by MCS (multiple chemical sensitivity) from Schleswig-Holstein

You can find out more about us here: https://www.mcs-atemluftinitiative-sh.de - and from now on here on Mastodon too!

#Behinderungen
#Barrierefreiheit
#Duftfrei
#Rauchfrei

When the performance opens with a dedicated "please switch your phones to flight mode" announcement and then TWO phones ring loudly during it anyway, come on, people 😤

Much as in Trump's USA, #trans people in Germany are to be permanently forcibly outed to public authorities, the pension insurance fund, and the tax office. That is what a draft on civil registration from the Federal Ministry of the Interior and Community (Bundesministerium des Innern und für Heimat) provides for. Until now this information was known only to the registry offices and, for medical procedures, to the health insurers.

#transrightsarehumanrights #LSBTIQ

https://ogy.de/q7z7

Statement by the LSVD⁺ – Verband Queere Vielfalt

If you go by a different name than the one on file with your bank, this could be relevant for you. From October, banks must match the payee name against the IBAN for transfers ... https://www.mgp-steuerberater.de/iban-abgleich-mit-zahlungsempfaenger-ab-oktober-2025-was-sie-wissen-muessen/
Verification of Payee

From October 2025 the EU "Verification of Payee" (VoP) regulation takes effect, requiring an IBAN check against the payee name for SEPA transfers. The change is intended to make payments more secure and to prevent fraud. Learn what you need to do to prepare - from checking your master data to communicating properly with your customers. Understand the practical implications and the liability questions in the case of misdirected transfers.

Steuerberater Berlin – mgp Merla Ganschow & Partner mbB
@PanOfBroth
Why #Ao3 was down yesterday:

@vashti and this is why we shouldn't store numbers in fixed-width fields, or compute using 32-, 64-, 128-bit, or any other fixed-size integers.

Bignum arithmetic has been a solved problem in computing since Maclisp in the 1960s.

#Lisp

@simon_brooke @vashti 2⁶⁴ microseconds is approximately 580,000 years (a little more, in fact; I'm rounding down), so 2⁶⁴ is more than plenty
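A quick sanity check of that arithmetic (plain Python; the 365.25-day year is an assumption):

```python
# How many years does an unsigned 64-bit microsecond counter cover?
MICROSECONDS_PER_YEAR = 1_000_000 * 60 * 60 * 24 * 365.25

print(f"2**64 microseconds ≈ {2**64 / MICROSECONDS_PER_YEAR:,.0f} years")
# => 2**64 microseconds ≈ 584,542 years
```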

@catgirlQueer @vashti As I myself wrote, years ago,

"At nanosecond resolution (if I've done my arithmetic right), 128 bits will represent a span of 1 x 10²² years, or much longer than from the big bang to the estimated date of fuel exhaustion of all stars. So I think I'll arbitrarily set an epoch 14Bn years before the UNIX epoch and go with that. The time will be unsigned - there is no time before the big bang."

So, yes, if you're content with nanosecond resolution...

https://github.com/simon-brooke/post-scarcity/wiki/cons-space#time

cons space

Prototype work towards building a post-scarcity software system - simon-brooke/post-scarcity

GitHub
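The same sanity check for the quoted claim, at nanosecond resolution with 128 bits (plain Python, 365.25-day years assumed):

```python
# Span of an unsigned 128-bit nanosecond counter, in years.
NANOSECONDS_PER_YEAR = 1_000_000_000 * 60 * 60 * 24 * 365.25

print(f"2**128 nanoseconds ≈ {2**128 / NANOSECONDS_PER_YEAR:.1e} years")
# => 2**128 nanoseconds ≈ 1.1e+22 years, matching the figure quoted above
```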

@simon_brooke @catgirlQueer @vashti Just to be safe you should follow RFC2550.

https://www.rfc-editor.org/rfc/rfc2550.txt

This discusses future granularity and extended time range requirements. It errs on the side of caution, eliminating any risk of obsolescence. As the RFC says:

In any case, the prevailing belief is that the life of the universe (and thus the range of possible dates) is finite.
However, we might get lucky. So, Y10K dates are able to represent any possible time without any limits to their range either in the past or future.

@simon_brooke @catgirlQueer @vashti it would be incredibly funny if "there is no time before the big bang" ever ended up in that famous "Falsehoods programmers believe about time" blog post 😅

@kmohrf @catgirlQueer @vashti perhaps. It's a convenient assumption, however, for beings whose observations are bounded by the Big Bang. When we learn to see beyond it will be time enough to revisit that assumption.

Ideally, perhaps, we might have a logarithmic measure of time. We're more interested in very short intervals in the recent past, less interested in them in the distant past.

@simon_brooke @catgirlQueer @vashti

https://en.wikipedia.org/wiki/Network_Time_Protocol

NTPv4 introduces a 128-bit date format: [...] According to Mills, "The 64-bit value for the fraction is enough to resolve the amount of time it takes a photon to pass an electron at the speed of light. The 64-bit second value is enough to provide unambiguous time representation until the universe goes dim."

Network Time Protocol - Wikipedia
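Unpacking what those two 64-bit halves buy (a back-of-the-envelope sketch in Python; the 365.25-day year is an assumption):

```python
# NTPv4's 128-bit date format: 64 bits of whole seconds plus a 64-bit
# binary fraction of a second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

print(f"fraction resolution: {2.0**-64:.2e} s")                # ≈ 5.42e-20 s
print(f"seconds range: {2**64 / SECONDS_PER_YEAR:.2e} years")  # ≈ 5.85e+11 years
```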

@simon_brooke @catgirlQueer @vashti
The only larger thing I know of (aside from BIGNUM) is Babbage's design for the Analytical Engine, for which he required 50 decimal digits. This is 166 bits. It is an absolutely insane amount of precision at any time, let alone in 1837, when the grand total of computers in the world was MINUS 100 YEARS. Daft bugger.
@simon_brooke @catgirlQueer @vashti Anyway the compelling thing about using NTP (64 or 128 bit versions) is that the top half is just "seconds" so it's relatively readable by humans.
@TomF @simon_brooke @catgirlQueer @vashti it took a bit before my brain was able to grasp the concept and proper meaning of "time it takes a photon to pass an electron at light speed"
@jay_peper @TomF @simon_brooke @catgirlQueer @vashti I'm still not sure I grasp it. Is it the same as expressing the "width" of an electron in light-femtoseconds or something?

@simon_brooke @catgirlQueer @vashti

Fun fact:

Until 1972-01-01Z we used rubber-time, because astrometry is not nearly as constant as most people seem to think.

But it is worse than that.

We literally have no idea how long nanoseconds took before 1958-01-01Z

If you go back before observations of solar eclipses, we barely even know how long days took.

Any epoch before 1972-01-01Z by definition causes wrong timekeeping.

@simon_brooke @catgirlQueer @vashti

Also:

Until we ditch leap-seconds, we cannot predict how many seconds there will be until some timestamp in the future, since that depends on what the director of the Paris Observatory decides twice a year.

And on top of that, time zones are political decisions, so even without leap-seconds it is anyone's guess how long it is until 2026-01-01 09:00 in Mississippi.

@simon_brooke @catgirlQueer @vashti

So there is no "fix it once and for all" solution to timekeeping, and the best and most robust strategy will always be to store timestamps the way the user provided them, and interpret them as best you can, given the knowledge available to you, when you do.
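One minimal way to follow that advice (a sketch, not any particular system's design; all names here are illustrative): keep the user's original text alongside your current best interpretation, so the raw input survives any future reinterpretation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Timestamp:
    as_provided: str              # exactly what the user gave us, never altered
    best_guess: datetime | None   # our current interpretation, revisable later

def parse(text: str) -> Timestamp:
    try:
        guess = datetime.fromisoformat(text)
    except ValueError:
        guess = None              # keep the raw text even when unparseable
    return Timestamp(as_provided=text, best_guess=guess)

print(parse("2026-01-01T09:00:00-06:00"))
```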

@simon_brooke @catgirlQueer @vashti

And as for nanosecond resolution: Now you also need to think about which relativistic reference frame to use.

@bsdphk @simon_brooke @catgirlQueer @vashti time is a fundamentally local phenomenon. Outside of our local gravity well, how long nanoseconds are is mostly extrapolation and guesswork
@bsdphk @simon_brooke @vashti I thought leap seconds were excluded from Unix time. So we know how many Unix seconds it is until a certain date, but we don't know how long a Unix second is (currently it's around 1 real second or a little bit more, but it will fluctuate over time)

@jornane @simon_brooke @vashti

That is a slightly less precise way to say the same thing.

@jornane @bsdphk @simon_brooke @vashti One of UNIX's inventors was an amateur astronomer, so time() follows leap seconds. POSIX makes this clear and warns of the consequences.

The Linux manual pages warn similarly: `man 2 time`.

Jam Karet (Rubber Time) - Know Thyself, Heal Thyself - Medium

In American culture (and many other modern ones), time is a fixed concept. It is limited and linear. We could have plenty of time or don’t have it. Last year is further away from the past than the…

Know Thyself, Heal Thyself

@simon_brooke @catgirlQueer @vashti

No, rubber time in the sense of "I know the two timestamps, but I do not know how much time there was/is/will be between them."

Timestamps and time-scales are merely labels we stick on time to the best of our ability, they are not actually time.

@bsdphk @catgirlQueer @vashti so a canonical ordering of events, without precise periods between them?

@simon_brooke @catgirlQueer @vashti

Ordering is an entirely different issue.

@bsdphk @catgirlQueer @vashti ok, now I'm not understanding. When you say "you know of two different timestamps", do you know which (within a locality) preceded the other? Or do they simply float in an unordered soup?

@simon_brooke @catgirlQueer @vashti

What I'm trying to say is that timestamps are conventions, what they mean depends entirely on the convention, and conventions change over time and space.

From 1958 to 1972 the UTC timescale used SI-seconds, but we adjusted the lengths of those SI-seconds to match the erratic rotation of the planet.

So if you define a timescale based on SI-seconds and extend it back in time, what does that mean?

@simon_brooke @catgirlQueer @vashti

It literally means that the years 1973 and 1967 did not have the same duration.

So what is your timescale measuring?

The physical duration, which we barely have the measurements to know precisely?

Or the conventional difference between the timestamps, disregarding physical reality?

Talking about nanosecond timescales prior to 1958 simply makes no sense.

@bsdphk @catgirlQueer @vashti like any other human measurement, it's arbitrary. Is the kilogram of platinum carefully stored in Paris getting heavier or lighter? Is the new definition, based on Planck's constant, more durable? We're used to measuring with arbitrary scales.

I think.

Don't you?

@simon_brooke @catgirlQueer @vashti

The all-important difference is that we can pull the metre or the kilogram out of the vault and measure them as often as we want, with increasingly sophisticated methods.

When measuring time we get only one chance, in real time, to measure the duration of any specific time-interval.

There can be no "do-over", no "try again with a better clock" and no "best out of N".

@simon_brooke @catgirlQueer @vashti

But time is still different, because it is a real-time running accumulation of non-repeatable measurements of duration.

@simon_brooke @catgirlQueer @vashti Good point.

Once Upon A Time two-second resolution was fine, because nobody could create two files within two seconds; floppy disks just weren't that fast.

Then millisecond resolution was good enough, until software started writing multiple log lines in the same millisecond, and their order isn't preserved if you're using Elasticsearch for your logging database.

We seem to have learned from this and skipped microseconds, but yes, the day might come when nanoseconds aren't good enough.
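That millisecond failure mode is easy to reproduce (a minimal sketch in Python):

```python
# Stamp events at millisecond resolution in a tight loop and count collisions.
import time

stamps = [int(time.time() * 1000) for _ in range(10_000)]
collisions = len(stamps) - len(set(stamps))
print(f"{collisions} of {len(stamps)} millisecond timestamps collided")
# On any modern machine almost all of them collide: the loop runs far faster
# than once per millisecond, so ordering by timestamp alone is lost.
```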

@simon_brooke @vashti I assume it's using SQL for the database. Does SQL even have a variable-width datatype for numbers?

@Rusty @vashti strictly speaking, no, although some databases can store extremely large numbers.

https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL

8.1. Numeric Types

8.1. Numeric Types # 8.1.1. Integer Types 8.1.2. Arbitrary Precision Numbers 8.1.3. Floating-Point Types 8.1.4. Serial Types Numeric types consist of …

PostgreSQL Documentation
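For a concrete feel of what a fixed-width integer column does when it runs out, a sketch with Python's bundled sqlite3 (SQLite caps INTEGER at 64 bits; the Postgres NUMERIC type linked above is not similarly capped):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")
con.execute("INSERT INTO t VALUES (?)", (2**63 - 1,))  # fine: fits in 64 bits
try:
    con.execute("INSERT INTO t VALUES (?)", (2**64,))  # one bit too wide
except OverflowError as e:
    print("rejected:", e)  # Python int too large to convert to SQLite INTEGER
```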
@simon_brooke @vashti Yeah, AO3 is pretty old though so I'd imagine it would make sense to just use the standard int for the table ID since why waste the resources to make it a bigint? Surely we'll never cross 2 billion. 

@simon_brooke @vashti What a shockingly bad take. Most programs are packed full of algorithms that are already optimal and would completely fall over if you used bignums. Memory pressure at all scales is a very real issue, and pervasive bignum use would make it an order of magnitude worse for any program doing any meaningful amount of computation.

Would it be absolutely amazing to have better tooling that helps you analyse what word size a particular problem needs, and error checking in databases that warns you when a field width is approaching exhaustion? Absolutely. But "bignums everywhere" is an absolute farce of a non-solution.

@kevingranade @vashti no, absolutely it wouldn't. Any sensible language runtime can trap arithmetic overflow exceptions and silently convert smaller numerical representations to larger.

There's no cost at all in either performance or space until that exception is thrown, and, when it is, instead of having to rebuild the whole system (see top of this thread), the program just smoothly continues.
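Python's int is an existence proof of the programmer-visible behaviour being claimed here: arithmetic never overflows, and crossing the 64-bit machine-word boundary is invisible to the caller. (CPython happens to use arbitrary-precision digits throughout rather than an overflow trap, but the effect at the language level is the same.)

```python
n = 2**62
for _ in range(4):
    n *= 2                             # silently widens past 2**63
    print(n.bit_length(), "bits:", n)  # 64, 65, 66, 67 bits, no exception
```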

@simon_brooke I'd love to live in your fantasy land where you can do that without pervasively tanking memory locality.
Unfortunately I live in a world where I have to take hardware constraints into account and be careful about memory access patterns, which pretty solidly rules out having the runtime change memory layout behind my back.
@kevingranade why are you working with such constrained hardware? The phone in my pocket has 6,000 times the processing power, and 16,000 times the core storage, of the machines on which I was doing bignum arithmetic back in the 1980s.

@simon_brooke All hardware is constrained if your problem domain is ambitious enough. Memory speed and capacity have never kept up with Moore's law, even when that was still a thing for CPU cycles, and where your "bignums everywhere, damn the consequences" approach leads is unconstrained memory thrashing: maybe-int-maybe-bignums scattered all over your heap, because that's what it takes to satisfy your goal that everything must be dynamic enough for the runtime to silently upgrade it in the background.

As for the "terribly constrained hardware" where this has been an issue:
The largest available AWS compute instances.
A slew of modern and less-than-modern desktop and laptop systems, since my target audience is "anyone with a computer", not "people who refresh their computers every other year".
The phone you're talking about, since I'm not talking about processing power or "core storage": I'm talking about cache levels within the CPU, which are ALWAYS insufficient for any even moderately demanding system.

@kevingranade @simon_brooke There was an article about how increasing index sizes took down the servers.

https://archiveofourown.org/works/11689068

Bigint would probably do the same, plus only a limited number of SQL engines handle bigint natively.

The Technical Architecture of the Archive of Our Own - zz9pzza - The Archive - Fandom [Archive of Our Own]

An Archive of Our Own, a project of the Organization for Transformative Works

@kevingranade @simon_brooke I can only imagine how well our code would perform after naïvely replacing standard uniform-width numeric types with corresponding Big types. The column of fire would probably be visible from orbit :)

Honestly, this isn't a numerics problem, it's an information-carrying-capacity problem. A bookmark is an identifier, so why use a numeric type for it other than out of habit and convenience? I'd need to know more specifics of what was expected of a bookmark, but it seems that either the system had well exceeded its design capacity or the use of a numeric type for non-numeric data was not sufficiently thought through. The former is more forgivable than the latter.

@simon_brooke

The machines you did bignum arithmetic on in the 1980s weren't serving millions of users, nor did they have database tables with over 2 billion rows. (The IDs alone require 8GB of storage.) Even small costs add up very quickly at the scale of a popular central-server-based application.

@kevingranade

@simon_brooke @kevingranade @vashti I would imagine this depends on the runtime? This only works if your language is very object-y, since bigints are handled through indirection, and you'd need to tag the difference between the pointers and the integers.

And I don't think the overhead is a question of catching overflows, it's dealing with all the conditions: now you would have an if-statement at least at every receiver from the DB. (Unless integers are already indirected in that language, in which case there is dynamic-dispatch overhead for every arithmetic operation.)

And BigInt support is software land, not hardware, which means you can't re-use registers and such, but have to cycle values in and out of memory, which I would presume has a pretty decent overhead in compute time and memory bus (the actual bottleneck in modern computing).

Or at least that's what I'd figure; I'm not a professional or anything, so I'd love to hear your thoughts.

@LW44 @kevingranade @vashti obviously it depends on the runtime. But the idea that this is slow is false.

I published a gist recently comparing times for computing the factorial of 1000 - a number whose decimal representation has over 2,500 digits - using recursive vs iterative algorithms, on a number of runtimes; and the worst-case time was 0.002 milliseconds, on a perfectly normal modern laptop: an AMD Ryzen 5 with 6 cores clocked at 1.3GHz.

https://gist.github.com/simon-brooke/fcb59705950c5ad515e18fba065510ae

Comparison of recursive vs iterative performance in various Lisp family languages

Comparison of recursive vs iterative performance in various Lisp family languages - recursive-iterative-performance.md

Gist
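For readers who don't want to chase the gist, here is a hypothetical Python re-creation of that comparison (not the gist's code, which covers Lisp-family runtimes; timings will differ):

```python
import sys
import timeit

sys.setrecursionlimit(5000)  # fact_rec(1000) needs ~1000 stack frames

def fact_rec(n: int) -> int:
    return 1 if n <= 1 else n * fact_rec(n - 1)

def fact_iter(n: int) -> int:
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

assert fact_rec(1000) == fact_iter(1000)  # both yield all 2,568 digits
print("recursive:", timeit.timeit(lambda: fact_rec(1000), number=1000) / 1000, "s/call")
print("iterative:", timeit.timeit(lambda: fact_iter(1000), number=1000) / 1000, "s/call")
```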

@simon_brooke

Okay but now try benchmarking the performance of bignum arithmetic versus 64-bit integer arithmetic. I'm fairly certain the latter will win by orders of magnitude.

@LW44 @kevingranade @vashti

@argv_minus_one @LW44 @kevingranade @vashti it won't win for (positive) values below the 63-bit boundary, because below that boundary the overflow exception is never thrown and the bignum is never created, so performance is identical. It won't win for numbers above the 63-bit boundary either, because then the exception is thrown, and your solution grinds to a halt.

@simon_brooke

I thought you meant 32-bit + bignum. 64+bignum seems unnecessary.

Ao3 is 16 years old. The database table in question has accumulated an average of 134,217,728 rows per year in that time. To accumulate 2⁶³ rows at that rate would take about 68.7 billion years.

Even if 8 billion people each added 1000 rows per day, it would still take about 3,156 years to reach the limit. (Also, the database would fall over under the load.)

@LW44 @kevingranade @vashti
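Re-deriving those figures (plain Python; the row rates are the assumptions stated above, not AO3's actual numbers):

```python
rows_per_year = 2**31 // 16                  # ~2 billion rows over 16 years
print(f"{rows_per_year:,} rows/year")        # 134,217,728

print(f"{2**63 / rows_per_year:.3e} years")  # ≈ 6.872e+10, i.e. ~68.7 billion

rows_per_day = 8_000_000_000 * 1000          # 8 billion people x 1000 rows/day
print(f"{int(2**63 / rows_per_day / 365.25):,} years")  # ≈ 3,156 years
```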

@simon_brooke @kevingranade @vashti and now my pointer is pointing to …what, exactly?
@simon_brooke @kevingranade @vashti except the number isn't there anymore is it?

@purple @kevingranade @vashti Well, that depends. In a system designed for multiple threads, no; but it wouldn't be anyway, because to make multiple threads work well you need data immutability.

The storage representation is not the problem of people writing at the application layer.

I should write another essay about the principle of "don't know, don't care"; without that principle, no meaningful software can be written.

@simon_brooke @kevingranade @vashti threads have nothing to do with this.

But I see you've decided to reframe this with ‘application layer’ so I suspect you know full-well what the problem is here and are being obtuse :)

Here in the real world we very much don't live in a frictionless plane of lisp symbols, however nice that might be. Memory layout is very important for all sorts of ‘application’ things, especially when operating at the scale of something like AO3.

If you decide to swap out an i64 for some form of big-int at runtime, I either don't know that happened and everything is bad, or we never really had an i64 — it was some form of i64+flag, taking potentially double the storage, or more, and now we are pointer chasing and branching all over the place. For all operations. Hardly "no cost", that.
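A sketch of the "i64+flag" shape being described (hypothetical; not any particular runtime's actual scheme): the low bit of each word tags it as either an immediate small integer or an index into out-of-line bignum storage, and every read branches on that tag.

```python
bignum_heap = []  # out-of-line storage for values that no longer fit inline

def box(value: int) -> int:
    if -(2**62) <= value < 2**62:             # fits in 63 bits: store inline
        return value << 1                     # tag bit 0 => immediate
    bignum_heap.append(value)                 # otherwise spill to the heap
    return ((len(bignum_heap) - 1) << 1) | 1  # tag bit 1 => heap index

def unbox(word: int) -> int:
    if word & 1:                              # the branch on every access...
        return bignum_heap[word >> 1]         # ...and the pointer chase
    return word >> 1

x = box(2**100)
print(unbox(x) == 2**100)                     # True
```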

@purple @kevingranade @vashti I've written a wee essay on why you already don't know, and don't care, about what's really going on underneath your software, and why you will be a better software engineer if you embrace that reality.

https://www.journeyman.cc/blog/posts-output/2025-07-05-don-t-know-don-t-care/

Don't know, don't care

The Fool on the Hill
@simon_brooke @purple @vashti good for you, keep being wrong and leave the fun stuff to those of us who do care.