yt comment:
> Remember: The dumbest person you know is being told 'you are absolutely right' by a LLM right now.
Don't use LLM-generated code in your projects yet! If for no other reason than that the case law is NOT ESTABLISHED YET.
I know there was the "copyright laundering" thing that went around a lot, but we actually don't know.
You'll see commenters everywhere on the internet say that "the US Supreme Court ruled that AI generated output is in the public domain". That's misinfo: they *declined to take on* a case from a lower court coming to that conclusion. The US Supreme Court hasn't yet ruled.
And this hasn't shaken out in an international setting yet either.
You may be surprised to hear: I actually think it's more dangerous and empowers centralized AI companies even more if it *isn't* the case that AI output is in the public domain (I'll follow up about that), but regardless, right now we just don't know.
But despite that, I'm STILL saying that you're putting yourself in legally dubious territory if you include LLM-generated code right now. We just don't know yet.
Evides N.V. https://www.tenderned.nl/aankondigingen/overzicht/395626 @bert_hubert this is exactly the kind of example. The vision for the future: "Azure Native"; and then, under the heading "Digital sovereignty and risks", total gibberish: "To manage these risks and strengthen digital autonomy, Evides is working on measures such as: • Continuing cloud transformation activities"
Yes, really: in order to strengthen #digitaleAutonomie, we are continuing the migration to Azure. (text from "Bijlage 4 - Beschrijving van de opdracht" [Annex 4 - Description of the assignment])
So, I recently saw some quiet discussion about a paper where researchers reverse-engineered and disclosed some attacks against PhotoDNA, the very-super-duper-secret algorithm used by tech megacorps to scan for illegal images.
They didn't make any code public, and so... I did: https://github.com/ArcaneNibble/open-alleged-photodna
A _complete_ reverse-engineering and commented Python reimplementation of the algorithm from publicly-leaked binaries.
This means that studying the algorithm and any potential flaws is now much more accessible.
This took only about two days (once I knew there even _was_ a leaked binary to compare against), which goes to show, once again, that security through obscurity never works.
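For context on what "studying the algorithm" means in practice: perceptual hashes like PhotoDNA are compared by a distance threshold rather than exact equality, which is exactly why small edits to an image don't evade matching (and why attacks target the distance metric). Here's a minimal sketch of that kind of comparison; the 144-element hash length matches public descriptions of PhotoDNA, but the hash values and threshold below are made up for illustration, not the real deployment parameters:

```python
# Hypothetical sketch of distance-based perceptual-hash matching.
# The 144-byte hash length follows public descriptions of PhotoDNA;
# the threshold is illustrative, NOT the real matching parameter.

def hash_distance(a: bytes, b: bytes) -> int:
    """Sum of absolute per-element differences (L1 distance)."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b))

def is_match(a: bytes, b: bytes, threshold: int = 1800) -> bool:
    # Two images "match" when their hashes land closer than some
    # threshold -- small edits only move the hash a small distance.
    return hash_distance(a, b) < threshold

h1 = bytes(range(144))                  # placeholder "hash" values
h2 = bytes(v + 1 for v in h1)           # a slightly perturbed copy

print(is_match(h1, h2))  # near-identical hashes fall under the threshold
```

The flip side is the attack surface: anything an adversary can do to push two perceptually-identical images far apart in hash space (or two different images close together) breaks the scheme, which is what the disclosed attacks explore.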
🔁 encouraged
We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in:
mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached.
Every single time.
And every single time it happens, the politicians who mandated these systems and the companies that built them act shocked (shocked!) that collecting enormous databases of government IDs, facial scans, and biometric data from millions of people turns out to be a security nightmare.
Building, making, crafting - these are Benedictine acts. They're labor undertaken for its own sake and for the immediate good of a knowable community, and they produce a satisfaction that no amount of MRR can replicate.