cynicalsecurity 

1.5K Followers
275 Following
9.2K Posts
IT Security, cynically aged. Maths. Some nukes. Four languages. Longing for Symbolics and Connection Machines. Keeper of Ancient Computing Lore. Ⓐ
Butterflyplace: https://bsky.app/profile/cynicalsecurity.bsky.social
Homepage: http://arrigotriulzi.ch/
First 0day: 1986

With Windows 9x Subsystem for Linux you can run all your favourite Windows and Linux apps side-by-side with a modern Linux kernel running cooperatively with the Windows kernel in ring 0. And unlike modern WSL, no hardware virtualisation is used so even your 486 can run it!

Please enjoy, I think this might be one of my greatest hacks of all time

https://codeberg.org/hails/wsl9x

vibe-coding report:

Early afternoon:

Started looking at a set of scripts doing data handling in ksh on OpenBSD with Ollama + opencode

Mid-afternoon:

Ollama + opencode messed up the scripts enough that nothing worked, so I reverted to my version, which might not have been as sexy and elegant re: error processing but at least didn't clobber the data.

Late afternoon:

OK, let's try Claude Code. The first iterations removed good code (mine), created random regexes that were clearly wrong, and tried to use gawk features in OpenBSD awk despite being told it was OpenBSD.

Evening:

Finally coaxed Claude Code to "do the right thing". Scripts run and do what they were supposed to do.

Still had to edit the scripts by hand to add a missing comparison, to avoid printing log files every couple of minutes when nothing had changed.

I sense that #bhyve/#ARM64 will become very interesting in the next 24 hours, thanks to John Baldwin doing John Baldwin-y things.
I made some variations on Montgomery multiplication with redundant representations. As an illustration, I made some codegolfed ECDSA signature verification (curve P-256) on 64-bit architectures (x86, Arm and RISC-V); I got that down to 848 bytes of code on x86 with a still usable runtime cost. Moreover, there is a comprehensive range analysis (automated) that proves that the computation cannot overflow.
AI was not used, but it was defeated.
Paper is here: https://github.com/pornin/small-ecdsa/blob/main/tex/mmul.pdf
More generally, the repository contains the paper, the code, and the proof (in Python): https://github.com/pornin/small-ecdsa
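For readers unfamiliar with the technique, plain Montgomery multiplication (the REDC reduction the post builds on) can be sketched in a few lines of Python, the same language the repository uses for its proofs. The constants and helper names below are illustrative, not taken from the paper, and this textbook version omits the redundant representations and codegolf tricks the post describes.

```python
# Textbook Montgomery multiplication over the P-256 field prime.
# Names (to_mont, redc, ...) are illustrative, not the paper's own.
p = 2**256 - 2**224 + 2**192 + 2**96 - 1   # P-256 field prime
R = 2**256                                  # Montgomery radix (R > p, gcd(R, p) = 1)
R_inv = pow(R, -1, p)                       # R^-1 mod p
n_prime = (-pow(p, -1, R)) % R              # -p^-1 mod R

def to_mont(x):
    """Map x to Montgomery form x*R mod p."""
    return (x * R) % p

def from_mont(x):
    """Map a Montgomery-form value back to the ordinary representation."""
    return (x * R_inv) % p

def redc(t):
    """Montgomery reduction: return t * R^-1 mod p, for 0 <= t < p*R.

    m is chosen so that t + m*p is divisible by R; the division
    is then an exact shift, and u < 2p needs one conditional subtract.
    """
    m = (t * n_prime) % R
    u = (t + m * p) // R
    return u - p if u >= p else u

def mont_mul(a, b):
    """Product of two Montgomery-form values, result still in Montgomery form."""
    return redc(a * b)
```

In Montgomery form the expensive `mod p` in every multiplication is replaced by shifts and masks by the power-of-two radix `R`; the paper's contribution is squeezing this (plus the redundant-representation variants and an automated range analysis proving no overflow) into very small code.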

I really enjoyed working on this audit for @freedomofpress with @robbje. Quite an important piece of software!

https://x41-dsec.de/security/research/job/news/2026/04/21/securedrop-review-2026/

Today’s poem is called ‘An Invention of Collective Nouns’.

As a crude first approximation, the problem-solving component of mathematical research (which, one should stress, is not the *only* aspect of such research) can be decomposed into three subcomponents:

1. Proof generation (finding a solution to a given problem);
2. Proof verification (checking that a proposed solution actually works); and
3. Proof digestion (understanding the essence of a solution, placing it in context with previous literature, summarizing and explaining it effectively, and gaining insights on other related problems and topics).

Until recently, all three of these subtasks were rather difficult and time-intensive to perform; but a human mathematician (or a collaboration between several mathematicians) who had invested the effort to both generate a proof and verify it usually gained enough understanding of the structure of that proof that they could also digest it effectively. Because of this, our community has been generally content to emphasize the proof generation and verification aspects of problem solving, as the proof digestion tended to arise naturally as an organic byproduct of the first two. This was also convenient for assessing proof efforts, as the generation and verification tasks had well-defined objectives, whilst proof digestion was a more subjective and open-ended process. (Though "the ability to present the result at a research conference and take questions" is a rough first approximation of a metric for whether a proof has been digested.) (1/3)

However, recent advances in both AI and proof formalization have begun to vastly accelerate and automate the first two components of this process. This is leading to a new type of "impedance mismatch": problems for which solutions can be rapidly generated and verified in a mostly automated process, but for which no human author has understood the arguments well enough to initiate the (much slower) digestion process.

In fact, with the current cultural incentives that reward the first authors to "solve" the problem, rather than the later authors who "digest" the solution, one may end up with the perverse situation in which an AI-generated (and formally verified) solution to a problem, presented to the community without any significant digestion, may actually *inhibit* the progress of the field that the problem lies in, by discouraging any further attempts to work on the problem, simplify and explain the proof, and extract broader insights. (2/3)

We may be in the market to hire a part-time FreeBSD and Bastille sysadmin (~20 hrs/week), specifically in the EMEA or APAC timezones (eventually both).

The roles require experience with FreeBSD, Bastille, nginx, and at least one useful coding language.

Timeline is mid-to-late 2026 to start.

Any of our EU / APAC friends want to come work part-time with the Bastille creator on a cybersecurity startup?

#FreeBSD #BastilleBSD #Cybersecurity

I am looking for a sysadmin job (#adminsys, preferably #Linux, but other OSes don't scare me), whether or not focused on security and/or #DNS #DNSSEC and/or #IPv6, ideally in Hauts-de-France (#HDF). Thanks for any reshares.