Nikita Abdullin

Security connoisseur, full-stack security specialist, hereditary tech-priest.
BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026 by bagder · Pull Request #20312 · curl/curl

Remove mentions of the bounty and hackerone. There will be more mentions, blog posts, timings etc in the coming weeks.

GitHub
For all its faults, I think using AI to code has made me a better engineer. Whenever I have it work on something I don’t want to learn or implement properly it calls me on it immediately by doing such a bad job that I am forced to admit I actually *do* care.

RE: https://mastodon.bsd.cafe/@stefano/115858721747358539

We have been living in peak cyberpunk for quite a while

WTF of the week (yes, it's only Tuesday):

Microsoft warns that Windows 11's agentic AI could install malware on your PC: "Only enable this feature if you understand the security implications"
https://www.windowscentral.com/microsoft/windows-11/microsoft-warns-security-risks-agentic-os-windows-11-xpia-malware

Microsoft is pushing ahead with its plan to add agentic capabilities to Windows 11 but has issued an important security warning for anyone who is interested in trying it out.

Windows Central

and beyond just the triumph of capital over any alternative, it really breaks my heart that computers are just objectively worse today than they were in the time of Chuck Moore. I try and not be an old man yelling at the cloud about this but we've given up on stability, soundness, maintainability. these are non-goals of modern computing, sacrificed at the altar of shareholder value.

it is wild that an official update of the operating system could break otherwise working code in a way that makes it impossible to determine even what is happening, let alone how to fix it. but this is what we've come to expect. computers break all the time, software breaks all the time, stuff crashes, you restart, whatever. and this isn't even factoring in the incoming wave of vibe-coded systems which make no attempt at correctness.

this isn't what computing was, there were attempts -- serious attempts! -- at developing theory and practice to build systems that were stable and correct in the face of usage and updates. we put half a century into that. and now we live in a kind of collective surrender. it's really depressing. as someone who has dedicated a life to computing, it's really fucking depressing.

While cleaning a storage room, our staff found this tape containing #UNIX v4 from Bell Labs, circa 1973

Apparently no other complete copies are known to exist: https://gunkies.org/wiki/UNIX_Fourth_Edition

We have arranged to deliver it to the Computer History Museum

#retrocomputing

The positioning of LLM-based AI as a universal knowledge machine implies some pretty dubious epistemic premises, e.g. that the components of new knowledge are already encoded in language, and that the essential method for uncovering that knowledge is statistical.

Maybe no one in the field would explicitly claim those premises, but they're built into how the technology is being pitched to consumers.

The technical consensus is clear: you can't create a backdoor that only lets the "good guys" in. However they're dressed up, these proposals create cybersecurity loopholes that hackers and hostile nations are eagerly waiting to exploit. 6/
@wren6991 @lofty one trick i've seen for dealing with fpga toolchains that i really like is to LD_PRELOAD this bad boy. makes them much less flakey.
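(The specific library the post preloads isn't named, but the general mechanism is worth sketching: `LD_PRELOAD` tells the dynamic linker to load a shared object first, so any function it defines shadows the libc version the tool would otherwise call. A minimal, hypothetical shim that pins `time()` to a fixed value — one common way to make a nondeterministic tool behave reproducibly:)

```c
// shim.c — illustrative LD_PRELOAD shim, not the actual library from the post.
#define _GNU_SOURCE
#include <time.h>

// Our time() shadows libc's: every call in the preloaded process
// sees the same fixed timestamp instead of the real clock.
time_t time(time_t *tloc) {
    time_t fixed = 1700000000;  // arbitrary pinned instant
    if (tloc) *tloc = fixed;
    return fixed;
}
```

Built and applied roughly like: `gcc -shared -fPIC -o shim.so shim.c`, then `LD_PRELOAD=./shim.so vendor_tool …` (tool name hypothetical). Note `LD_PRELOAD` is ignored for setuid binaries and only affects dynamically linked calls.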

Nix is just Gentoo for gen Z

*runs away*