RE: https://mastodon.scot/@simon_brooke/116232834837623434

“The researchers suspect that Glassworm—the name they assigned to the attack group—is using LLMs to generate these convincingly legitimate-appearing packages. “At the scale we’re now seeing, manual crafting of 151+ bespoke code changes across different codebases simply isn’t feasible,” they explained. Fellow security firm Koi, which has also been tracking the same group, said it, too, suspects the group is using AI.”

What I don’t get is how this snippet passed code review regardless.

I mean, it’s clearly dodgy and the last line basically meaningless without the code being evaluated.

The real story here isn’t the invisible Unicode characters, it’s the lack of proper code review on code submissions.
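For anyone wondering how code can hide in plain sight: characters like zero-width spaces and variation selectors render as nothing in most editors but are still present in the file. Here’s a minimal sketch (in Python, for illustration only; the character list and the `find_invisible` helper are my own, not from the researchers’ report) of the kind of scan a review pipeline could run:

```python
# Sketch: scan source text for invisible Unicode characters, the class
# of character GlassWorm reportedly used to hide code from reviewers.
# The character set below is illustrative, not exhaustive.
import unicodedata

INVISIBLE = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
}

def find_invisible(source: str):
    """Return (index, character name) for each suspicious character."""
    hits = []
    for i, ch in enumerate(source):
        if (ch in INVISIBLE
                or unicodedata.category(ch) == "Cf"  # "format" chars
                or "\ufe00" <= ch <= "\ufe0f"):      # variation selectors
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

# A string that looks clean in an editor but carries a hidden character:
print(find_invisible("const s = 'hello\u200bworld';"))
```

A check like this in CI would have made the hidden payload visible long before any human reviewer had to spot it by eye.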

@aral I agree, I had the same thought. First of all, an eval function is super suspicious ("eval is evil"); and then, what on earth is the const s doing there? And in the name of love, what is the reviewer thinking when someone opens a PR with this kind of "functionality"? I'll check, but I think even a simple SAST tool would complain about this PR.
On second thought, maybe the snippet provided by the security researchers is just an unrealistic example to illustrate the concept.
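On the SAST point: even a naive pattern scan catches the obvious tells. This is a toy sketch in Python (real tools like ESLint's no-eval rule or Semgrep are far more robust, and the patterns here are my own illustrative picks):

```python
# Naive SAST-style check, sketched in Python: flag eval() calls and
# decoding patterns commonly paired with hidden payloads in JS source.
import re

SUSPICIOUS = [
    (re.compile(r"\beval\s*\("), "use of eval()"),
    (re.compile(r"\bFunction\s*\("), "Function constructor (eval-like)"),
    (re.compile(r"fromCharCode"), "char-code decoding of a hidden string"),
]

def review_flags(js_source: str):
    """Return a human-readable warning for each suspicious pattern found."""
    return [msg for pattern, msg in SUSPICIOUS if pattern.search(js_source)]

print(review_flags("const s = decode(hidden); eval(s);"))
```

If even this regex-level check would flag the PR, a reviewer waving it through is the real failure mode.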