50 Followers
95 Following
93 Posts
'92 | Belgian | Computer Security Engineer (mostly Azure)
Website: https://adr.iaan.be

Security firm Cybereason has open-sourced owLSM, an EDR-like agent for Linux: an eBPF LSM agent that runs Sigma rules.

https://github.com/Cybereason-Public/owLSM

GitHub - Cybereason-Public/owLSM: Sigma Rules Engine inside the Linux Kernel using eBPF. Focusing on prevention capabilities

My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem.

In any conversation I have with a person, I'm modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, it goes a lot easier.

But LLMs don't have an understanding of the conversation. There is no intent. It's just a most-likely-next-word generator on steroids. You're trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.

Accidentally making a Perl onion logo in my latte art this morning.

(Cat was not impressed)

All of these AI coding advocates talking about creating good docs and APIs, yes, please. Programming in natural language? OK, let my ADHD take you somewhere unexpected.

Larry Wall studied linguistics at Berkeley with the intent of discovering an unwritten language on a Christian mission to Africa and developing a written language for it. For health reasons, he couldn't make the trip and stayed in the US where he joined the JPL and created Perl. I worked with Larry at craigslist and attended many Perl conferences where he spoke. One of the guiding principles of the design of the language was natural language. I'm probably misquoting, but the phrase I remember was, he wanted "a language that mimicked the sloppiness and unpredictability of natural language so it could grow with you." I happen to love Perl because of this. Some of my earliest contributions to perlmonks.org were Perl Poetry [1](https://perlmonks.org/index.pl?node_id=40275), [2](https://perlmonks.org/index.pl?node_id=37997).

What's it got to do with AI? Whenever I hear someone explain to me they want to use natural language to write code, I think of Larry and Perl. I posted this story and asked "Can someone explain to me how using AI generated code is better than Perl?" And now none of the AI people want to talk to me!

#fuck_ai #ai #fuck_with_ai #perl

longing(4, $you);

@SwiftOnSecurity Reminds me of that scene in Weeds

I'll write a more detailed blog post this weekend on creating a Dancer2 application in a small container. It would be a bit misplaced to put all that inside the example Perl code, since it's specific to fatpacking Dancer2 and not to the module.

Released a first version of my #perl module Container::Builder on CPAN to build containers from scratch (like Google's distroless containers).

See https://metacpan.org/pod/Container::Builder

There's a dancer2 example in the examples/ folder of the module, which produces a ~30 MB compressed container image containing the dancer2 webapp.

It's also made to scrub timestamps, so there's a lot of layer reuse within podman/docker: the hashes are stable if your inputs are stable. (Both TAR and GZIP headers contain timestamps that would break this if not scrubbed.)
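The gzip half of that scrubbing can be sketched like this: zero the 4-byte MTIME field in the gzip header (bytes 4..7 per RFC 1952) so the compressed bytes don't change between builds. This is a minimal illustration, not the module's actual code; `scrub_gzip_mtime` is a hypothetical helper name.

```perl
#!/usr/bin/env perl
# Sketch: make a gzip stream reproducible by zeroing the MTIME field.
# Header layout per RFC 1952: bytes 0-1 magic, 2 CM, 3 FLG, 4-7 MTIME (LE).
use strict;
use warnings;
use IO::Compress::Gzip qw(gzip $GzipError);

sub scrub_gzip_mtime {
    my ($bytes) = @_;
    die "not a gzip stream" unless substr($bytes, 0, 2) eq "\x1f\x8b";
    # MTIME = seconds since epoch, little-endian; 0 means "no timestamp"
    substr($bytes, 4, 4) = "\x00" x 4;
    return $bytes;
}

# demo: compress something, scrub, inspect the header
my $data = "hello container\n";
gzip \$data => \my $gz or die $GzipError;
my $scrubbed = scrub_gzip_mtime($gz);
printf "mtime after scrub: %d\n", unpack("V", substr($scrubbed, 4, 4));
```

The tar side needs the same idea (the mtime field at offset 136 of each 512-byte header), but there you also have to recompute the header checksum afterwards.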


Finally got my Dancer2 app working in my distroless perl container (25MB oci-archive). There were a lot of quirks in the fatpacking process which I didn't anticipate: XS dependencies (can't have those), incorrect paths (an x86_64-linux prefix breaks @INC), Dancer2 behaving weirdly (could also be my container), and it making the bin/ folder the cwd, which broke the other paths (views/, public/, ..).

But now that I have a working container, I can start trimming more excess so I'm not shipping anything the container won't actually use.

Quirks of making container images from scratch and then trying to use them.

It turns out the config JSON needs to contain a history array or podman refuses to make a new layer, even though the OCI spec says "history" is optional. (It's enough to add entries with a created string of epoch 0, which gives podman zero extra information but does unblock it.)
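A minimal sketch of that workaround: emit one epoch-0 history entry per layer in the image config. The field names follow the OCI image spec; the layer digests here are placeholders, and this is an illustration rather than Container::Builder's actual code.

```perl
#!/usr/bin/env perl
# Sketch: add a minimal "history" array to an OCI image config so podman
# accepts the layers. One entry per layer, created at epoch 0, so the
# config (and therefore its digest) stays stable across rebuilds.
use strict;
use warnings;
use JSON::PP;

my @diff_ids = ('sha256:aaaa', 'sha256:bbbb');    # placeholder digests

my $config = {
    architecture => 'amd64',
    os           => 'linux',
    rootfs       => { type => 'layers', diff_ids => \@diff_ids },
    # epoch-0 timestamps carry no information but satisfy podman
    history      => [ map { { created => '1970-01-01T00:00:00Z' } } @diff_ids ],
};

# canonical() gives deterministic key order -- same reproducibility
# concern as the timestamp scrubbing
print JSON::PP->new->canonical->pretty->encode($config);
```

Using `canonical` key ordering matters for the same reason as the timestamp scrubbing: any nondeterminism in the serialized config changes its digest.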

I've been implementing my own tar file library to do some quirky stuff, but somehow every implementation treats the checksum as 7 bytes and leaves the last byte as \x20? The spec file I'm using (https://www.gnu.org/software/tar/manual/html_node/Standard.html) doesn't seem to say this: it says you first fill the field with \x20 (all 8 bytes) and then overwrite it with the checksum value you calculate... I don't see an apparent reason for leaving the last byte as \x20, but if I follow the spec, my tar file doesn't work with `tar`. >.>
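For what it's worth, the convention most tar implementations seem to follow (historical practice rather than anything I can point to in that spec page) is to format the field as six octal digits, a NUL, and a space, i.e. `"%06o\0 "` — which is exactly the trailing \x20 described above. A sketch, assuming a 512-byte ustar header with the chksum field at offset 148:

```perl
#!/usr/bin/env perl
# Sketch: compute a ustar header checksum the way tar implementations
# conventionally store it: sum all 512 header bytes with the 8-byte
# chksum field counted as spaces, then write "%06o\0 " -- six octal
# digits, a NUL, and a trailing space (the \x20 in question).
use strict;
use warnings;

sub tar_checksum {
    my ($header) = @_;    # one 512-byte header block
    die "header must be 512 bytes" unless length($header) == 512;
    substr($header, 148, 8) = ' ' x 8;    # chksum field counts as spaces
    my $sum = 0;
    $sum += $_ for unpack('C512', $header);
    return sprintf("%06o\0 ", $sum);      # 6 octal digits + NUL + space
}

# demo on an all-zero header: the 8 spaces alone sum to 256 = 0400 octal
my $field = tar_checksum("\0" x 512);
print unpack('H*', $field), "\n";    # prints 3030303430300020
```

With that `"%06o\0 "` encoding the checksum round-trips through GNU and BSD tar, whereas 8 pure octal digits (what a literal reading of the spec suggests) is what tends to break.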

Maybe it has to do with "the precision of which shall be no less than seventeen bits", though I have no clue what that means.