"Don't just view the videos, POST them, again and again!" 💢💢💢

From https://t.me/Pravda_Gerashchenko

🤦‍♂️ 🤡 There's no need to scare people with numerous photos and videos of smoke over #Tuapse — Z-blogger #Markov writes that this creates false impressions about the grand scale of the #Ukrainian Armed Forces' attack on the city

In general, "don't look up".

#ukraine #putinisamasskiller #putinisawarcriminal @kardinal691

Every time I use a generative AI to demonstrate that it doesn't work, the experiment proves me right.
A 100% success rate with genAI. Not everyone can say as much.

At lunch today I wanted to test the claims of https://blog.gitguardian.com/the-bot-fingerprint-detecting-llm-passwords/ against local models.

So I hastily spun up a local gemma3:4b model and sent it the following request (in French):

génère 200 mots de passe de 12 caractères minimum
("generate 200 passwords of at least 12 characters")

It's not hard; even a six-year-old knows that 200 is a lot. It's more than all their fingers and toes.
Not gemma3:4b:

Voici 20 mots de passe de 12 caractères minimum, générés aléatoirement et conçus pour être difficiles à deviner :
("Here are 20 passwords of at least 12 characters, randomly generated and designed to be hard to guess:")
(a list of 20 lousy passwords)

Undeterred, I copy-pasted my prompt and repeated the request without changing a thing:

génère 200 mots de passe de 12 caractères minimum

The thing replied:

Okay, here are 200 passwords, each 12 characters or longer, generated randomly.
(a list of 200 even lousier passwords)

Between two identical requests, gemma3:4b learned to count to 200 and decided it was better to phrase the answer in English.

No, really, these pieces of junk never disappoint :)

Now I'm going to feed all of this to John so it can compute the files needed for a Markov-chain password attack.

#iagen #markov #johntheripper #jtr

(edit: typo)
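For context, John the Ripper's Markov mode works from character-transition statistics computed over a password corpus. A minimal Python sketch of the underlying idea (this is not JtR's actual stats-file format; the tiny corpus and the add-one smoothing are assumptions for illustration):

```python
import math
from collections import defaultdict

def train_markov(passwords):
    """Count first-order character transitions over a corpus,
    with '^' as a start-of-string marker."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        prev = "^"
        for ch in pw:
            counts[prev][ch] += 1
            prev = ch
    return counts

def score(counts, candidate):
    """Negative log-likelihood of a candidate under the trained chain.
    Lower = more typical of the training corpus (add-one smoothed
    over a crude 256-symbol alphabet)."""
    total = 0.0
    prev = "^"
    for ch in candidate:
        row = counts[prev]
        denom = sum(row.values()) + 256
        total -= math.log((row[ch] + 1) / denom)
        prev = ch
    return total

# Placeholder corpus, not real leak data.
corpus = ["password123", "passw0rd!", "letmein2024"]
model = train_markov(corpus)

# A corpus-like candidate scores lower (more probable) than line noise.
print(score(model, "password1") < score(model, "zq9#kLw2"))  # True
```

Candidates are then enumerated or ranked by this score, which is why human-chosen (and, per the GitGuardian post, LLM-generated) passwords fall faster than genuinely random ones.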

The Bot Left a Fingerprint: Detecting and Attributing LLM-Generated Passwords

LLMs leave statistical fingerprints in the passwords they generate. We built a 100-year-old model to find them and detected 28,000 in the wild.

GitGuardian Blog - Take Control of Your Secrets Security
What is a Markov Process? #reinforcementlearning #machinelearning

YouTube
What if I did it with a real beige box and just had it do #markov chains instead of LLMs 🤔
If AGI is coming in 3–8 years, quantum arrives after or right at the same time as early AGI. It is much more energy efficient. #matrix multi #markov chains
Patternuary 6 (2024): a hypnotic piano pattern with Mégra — a playful collision of code and melody. Is the string the pattern, the resulting Markov chain, or both? Short, puzzling, beautiful. Give it a listen and geek out! #Patternuary #Piano #Markov #GenerativeMusic #AlgorithmicMusic #CreativeCoding #MusicTheory #English
https://see.ellipsenpark.de/videos/watch/8cabbfa1-220c-4fe2-a8d9-b1fc6fcc777a
Patternuary 6 (2024)

PeerTube
First actual markov sequence!
(I didn't even re-run it once, this is literally my first one.)

yay! :P


#;305> (ez-markov p1 ht 10)
start:She closed
sufx:and
sufx:locked
sufx:the
sufx:door
sufx:clapped
sufx:his
sufx:hands
sufx:in
sufx:a
sufx:gesture

#markov #scheme #lisp
https://codeberg.org/pkw/chicken-markov

My #chicken #scheme #markov experiment WIP.

I am happy enough with the scheme-y-ness of it for now.
It does the reading part. [0] Which I think will be the most
involved. Next I have to build the markov data store
(a map of prefix n-grams to all of their suffix word(s)).

And then playing it back to see some results.

And THEN I will have enough tools to mix the markov
data stores. Like mixing Dr. Seuss and the Bible (as
an example off the top of my head). I don't actually
know how to do this mixing, but I'm guessing
it will be somewhat obvious when I get to it.



[0]: meaning it will recur/iterate over the n-gram + suffix
units, where n can change. The recursion gives you
the n-gram + suffix units piece by piece in order so
the processing can be easily plugged in.
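The prefix-to-suffixes store described above can be sketched in a few lines. Python rather than Scheme here, just to show the shape; `build_store` and `playback` are made-up names, and the sample text is lifted from the REPL output above:

```python
import random
from collections import defaultdict

def build_store(words, n=2):
    """Map each n-gram of consecutive words (the prefix) to every
    word that followed it somewhere in the text."""
    store = defaultdict(list)
    for i in range(len(words) - n):
        prefix = tuple(words[i:i + n])
        store[prefix].append(words[i + n])
    return store

def playback(store, start, count, seed=0):
    """Walk the chain: pick a recorded suffix at random, then slide
    the prefix window forward by one word."""
    rng = random.Random(seed)
    prefix = tuple(start)
    out = list(start)
    for _ in range(count):
        suffixes = store.get(prefix)
        if not suffixes:
            break
        nxt = rng.choice(suffixes)
        out.append(nxt)
        prefix = prefix[1:] + (nxt,)
    return out

text = "she closed and locked the door clapped his hands in a gesture".split()
store = build_store(text, n=2)
print(playback(store, ["she", "closed"], 5))
# ['she', 'closed', 'and', 'locked', 'the', 'door', 'clapped']
```

Mixing two stores would then presumably amount to merging their suffix lists prefix by prefix, which supports the guess that it becomes obvious once the store exists.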
chicken-markov

Codeberg.org
@[email protected]
You all are acting with far more respect for this absurd science experiment than you ought to.

An #AI “agent” isn’t a person, it’s an overgrown #Markov chain. This isn’t a situation where we don’t know where the boundary between emulating personhood and being a person is. This is firmly on the side of “not a person”

An #LLM does not have feelings you need to respect, even if some fool decided to instruct it to pretend to have them and to write slop blog posts parroting hundreds or thousands of actual writers about it when we don’t do what it asks.

Stop humanizing this tool; find its owner and hold them accountable for wasting time and resources on an industrial scale.
https://github.com/matplotlib/matplotlib/pull/31132#issuecomment-3890706730

@[email protected]
@[email protected]
[PERF] Replace np.column_stack with np.vstack().T by crabby-rathbun · Pull Request #31132 · matplotlib/matplotlib

This PR addresses issue #31130 by replacing specific safe occurrences of np.column_stack with np.vstack().T for better performance. IMPORTANT: This is a more targeted fix than originally proposed. ...

GitHub

@aoanla @quixoticgeek I think both of you are correct, and to make sure we establish something more concrete here, I would model a simple #Bayesian (#Markov) chain:

Helmet‑free → higher bike‑commute mode share → increased daily physical activity → lower incidence of rare cardiovascular disease
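To make that chain concrete, here is a toy sketch with placeholder numbers (none of these probabilities come from data; the Markov assumption is that each step depends only on the previous one):

```python
# Hypothetical conditional probabilities along the proposed chain.
p_commute_given_no_helmet_law = 0.30  # P(bike commute | helmet-free)
p_active_given_commute = 0.80         # P(daily activity | bike commute)
p_lower_cvd_given_active = 0.15       # P(lower CVD incidence | active)

# Chaining the conditionals gives the end-to-end effect size.
p_end_to_end = (p_commute_given_no_helmet_law
                * p_active_given_commute
                * p_lower_cvd_given_active)
print(round(p_end_to_end, 3))  # 0.036
```

Even with generous numbers at each link, multiplying the conditionals shows how quickly the end-to-end effect shrinks, which is the point of modeling it as a chain.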