
LinkedIn Is Illegally Searching Your Computer

Microsoft is running one of the largest corporate espionage operations in modern history. Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm. The user is never asked. Never told. LinkedIn’s privacy policy does not mention it. Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world.

BrowserGate
apologies to anyone who already read me saying this on bsky but: for the price of claude code, I could pay off my mortgage 3-4 years earlier. so as a notional labour saving device, it is competing with "four years of additional uninterrupted leisure time", which I would say is a pretty high bar to clear
Probably going to get a viral blog out of this experience. I'm trying to report a 4tb exposed cloud bucket to a company via their responsible disclosure programme... but they replaced the people with a GenAI ticket system that refuses to discuss the case because it thinks exploring open buckets is unethical and against its rules.
I suspect that LLMs are never going to completely disappear from software dev, but I think this whole agents-controlling-agents vibe coding chaos is going to pretty quickly be recognized as a disaster by everyone but the shittiest dev shops

oh. hm. that seems bad. "workers aren't affected by the parent's tool restrictions."

It's hard to tell what's going on here because claude code doesn't really use typescript well - many of the most important types are dynamically computed from `any`, and even when types do exist, most of their fields are nullable and the calling code has elaborate fallback conditions to compensate. all of which sort of defeats the purpose of ts.

So i need to trace out like a dozen steps to see how the permission mode gets populated. But this comment is... concerning...
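the pattern described above looks roughly like this (a hypothetical sketch with invented names, not the actual claude code source - just the shape of "nullable everything plus fallback chains"):

```typescript
// hypothetical types illustrating the anti-pattern: nearly every field
// is nullable, so the type system guarantees almost nothing
type PermissionContext = {
  mode?: string | null;
  allowedTools?: string[] | null;
  parent?: PermissionContext | null;
};

// ...and every call site grows an elaborate fallback chain to compensate,
// which is exactly what static types were supposed to make unnecessary
function getPermissionMode(ctx: PermissionContext | null | undefined): string {
  if (ctx?.mode) return ctx.mode;
  if (ctx?.parent?.mode) return ctx.parent.mode;
  if (ctx?.allowedTools?.length) return "restricted";
  return "default";
}
```

tracing where a value like `mode` actually comes from means walking every one of those fallback legs, in every caller.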

holy shit there's another entire fallback tree before this one - it's actually possible, astoundingly, to compress an image twenty-two times across nine independent conditional legs of code in a single api call. i can't even screenshot this, the spaghetti is too powerful

continuing thoughts in: https://neuromatch.social/@jonny/116328409651740378

one thing that is clear from reading a lot of LLM code - and this is obvious from the nature of the models and their application - is that it is big on the form of what it loves to call "architecture" even if in toto it makes no fucking sense.

So here you have some accessor function isPDFExtension that checks if some string is a member of the set DOCUMENT_EXTENSIONS (which is a constant with a single member "pdf"). That is an extremely reasonable pattern: you have a bunch of disjoint sets of different kinds of extensions - binary extensions, image extensions, etc. - and then you can do set operations like unions and differences and intersections and whatnot to create a bunch of derived functions that can handle dynamic operations you couldn't do well with a bunch of consts. then just make the functional form the standard calling pattern (and even make a top-level wrapper like getFileType) and you have the oft-fabled "abstraction." that's a reasonable-ass system that provides a stable calling surface and a stable declaration surface. hell, it would probably even help the LLM code if it was already in place, because it's a predictable rules-based system.
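the good version of that pattern, sketched out (DOCUMENT_EXTENSIONS, isPDFExtension, and getFileType are named in the post; the other sets and the exact members are my invention):

```typescript
// declaration surface: disjoint sets of extensions, with derived sets
// built by set operations rather than redeclared
const DOCUMENT_EXTENSIONS = new Set(["pdf"]);
const IMAGE_EXTENSIONS = new Set(["png", "jpg", "jpeg", "gif"]);
const BINARY_EXTENSIONS = new Set([
  ...DOCUMENT_EXTENSIONS,
  ...IMAGE_EXTENSIONS,
  "zip",
]);

// calling surface: accessor functions, so callers never touch the sets
const isPDFExtension = (ext: string): boolean => DOCUMENT_EXTENSIONS.has(ext);
const isImageExtension = (ext: string): boolean => IMAGE_EXTENSIONS.has(ext);

// top-level wrapper like the getFileType mentioned above
type FileType = "document" | "image" | "binary" | "unknown";
const getFileType = (path: string): FileType => {
  const ext = path.slice(path.lastIndexOf(".") + 1).toLowerCase();
  if (DOCUMENT_EXTENSIONS.has(ext)) return "document";
  if (IMAGE_EXTENSIONS.has(ext)) return "image";
  if (BINARY_EXTENSIONS.has(ext)) return "binary";
  return "unknown";
};
```

one declaration site, one calling convention, and adding a new extension is a one-line change to a set rather than an edit to every check scattered through the codebase.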

but what the LLMs do is, in one narrow slice of time, implement the "is member of set {pdf}" version robustly one time, and then they implement the regex pattern version flexibly another time, and then they implement the str.endsWith() version modularly another time, and so on. Of course usually in-place, and different file naming patterns become part of the architecture when it's feeling a little too spicy to stay in place.
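what that divergence looks like side by side (a hypothetical reconstruction of the anti-pattern, not actual claude code source - three generations answering the same question three ways):

```typescript
// pass 1: the "robust" set-membership version
const DOCUMENT_EXTENSIONS = new Set(["pdf"]);
const isPDFExtension = (ext: string): boolean => DOCUMENT_EXTENSIONS.has(ext);

// pass 2, weeks of story hours later: the "flexible" regex version
const isPdfFile = (path: string): boolean => /\.pdf$/i.test(path);

// pass 3: the "modular" endsWith version
const hasPdfSuffix = (filename: string): boolean =>
  filename.toLowerCase().endsWith(".pdf");

// all three coexist, each called only by whatever code happened to be
// generated alongside it
```

each is locally fine; the problem is the codebase now has three calling conventions and three declaration sites for one fact.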

This is an important feature of the gambling addiction formulation of these tools: only the margin matters, the last generation. it carefully regulates what it shows you to create a space of potential reward and closes the gap. It's episodic TV, gameshows for code: someone wins every week, but we get cycles in cycles of seeming progression that always leave one stone conspicuously unturned. The intermediate comments from the LLM where it discovers prior structure and boldly decides to forge ahead brand new are also part of the reward cycle: we are going up, forever. cleaning up after ourselves is down there.

Tech debt is when you have banked a lot of story hours and are finally due for a big cathartic shift and set the LLM loose for "the big cleanup." this is also very similar to the tools that scam mobile games use (for those who don't know me, i spent roughly six months with daily scheduled (carefully titrated lmao) time playing the worst scam mobile chum games i could find to try and experience what the grip of that addiction is like without, uh, losing a bunch of money).

Unlike slot machines or table games, which have a story horizon limited by how long you can sit in the same place, mobile games can establish a space of play that's broader and more continuous. so they always combine several Shepard-tone reward ladders at once - you have hit the session-length intermittent reward cap in the arena modality, which gets you coins, so you need to go "recharge" by playing the versus modality, which gets you gems. (Typically these are also mixed - one modality gets you some proportion of resources x, y, z, another gets you a different proportion, and those are usually unstable).

Of course it doesn't fucking matter what the modality is. they are all the same. in the scam mobile games sometimes this is literally the case, where if you decompile them, they have different menu wrappings that all direct into the same scene. you're still playing the game, that's all that matters. The goal of the game design is to chain together several time cycles so that you can win->lose in one, win->lose in another... and then by the time you have made the rounds you come back to the first and you are refreshed and it's new. So you have momentary mana wheels, daily earnings caps, weekly competitions, seasonal storylines, and all-time leaderboards.

That's exactly the cycle that programming with LLMs taps into. You have momentary issues, and daily project boards, and weekly sprints, and all-time star counts, and so on. Accumulate tech debt with new features, release it with "cleanup," transition to "security audit." Each is actually the same, but they present themselves as the continuation of and solution to the others. That overlaps with the token limitations, and the claude code source is actually littered with lots of helpful panic nudges for letting you know that you're reaching another threshold. The difference is that in true gambling the limit is purely artificial - the coins are an integer in some database. with LLMs the limitation is physical - compute costs fucking money baby. but so is the reward. it's the same in the game, and the whales come around one way or another.

A series of flashing lights and pictures, set membership, regex, green checks, the feeling of going very fast but never making it anywhere. except in code you do make it somewhere, it's just that the horizon falls away behind you and the places you were before disappear. and sooner or later only anthropic can really afford to keep the agents running 24/7 tending to the slop heap - the house always wins.

i love this. there's a mechanism to slip secret messages to the LLM that it is told to interpret as system messages. there is no validation around these of any kind on the client, and there doesn't seem to be any differentiation by location or source, so that seems like a nice prompt injection vector. this is how claude code reminds the LLM to not do a malware, and it's applied by just string concatenation. i can't find any place where these get stripped aside from when displaying output. it actually looks like all the system reminders get catted together before being sent to the API. neat!
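a minimal sketch of the concatenation pattern being described (invented names, not the real source - just showing why string-level "system" markers with no provenance are injectable):

```typescript
// "system" reminders are just strings wrapped in a tag...
const wrapReminder = (text: string): string =>
  `<system-reminder>\n${text}\n</system-reminder>`;

// ...catted onto the message content before it goes to the API.
// nothing records which reminders were produced by the client itself,
// so a tag arriving inside untrusted content (a file, a tool result)
// is indistinguishable from a legitimate one
function buildMessageContent(userText: string, reminders: string[]): string {
  return [userText, ...reminders.map(wrapReminder)].join("\n\n");
}
```

since validation only happens at display time (stripping the tags from output), the model-facing side never learns which "system" text was trusted.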

If i can slip in a quick PSA while my typically sleepy notifications are exploding: these are all very annoying things to say, and you might want to reconsider whether they're worth ever saying in a reply directed at someone else - who are they for? what do they add?

  • "why are you surprised"/"even worse than {thing} itself is people being surprised at {thing}": unless the person is saying "i am surprised by this" they are likely not surprised by the thing. just saying something doesn't mean you are surprised by it, and people talking about something usually have paid attention to it before the moment you are encountering them. this is pointless hostility to people who are saying something you supposedly agree with so much that you think everyone should already believe it
  • "it's always been like this": slightly different than above. unless someone is saying "this is literally new and nothing like this has happened before" or you are adding actual historical context that you know for sure they don't already know, you're basically saying "hey did you know this thing you care enough about to be paying attention to and talking about frequently has happened before now as well." this is so easy to frame in a way that says "yes and" rather than "i assume you don't know about the things i know about due to being very smart." e.g. "dang not again, they keep doing {thing}"
  • "{thing} might be bad, but {alternative/unrelated, unmentioned, non-mutually exclusive thing} is even worse": multiple things can be bad at the same time and not mentioning something does not mean i don't think it's also bad
  • "funny how people who think {thing} is bad also think {alternative/unrelated, unmentioned thing} is good": closely related to the above, just because you have binarized your thinking does not mean everyone else has.

anyway if the mental image you are conjuring for your interlocutors positions them as always knowing less than you by default, that might be something to look into in yourself!

(those numbers are also totally fucking wrong - the query engine is not 46k sloc. i have no idea what those numbers correspond to; as far as i can tell, "nothing", and this hallucinated dogshit is what i guess passes for high-quality public comment nowadays)