29 Followers
77 Following
604 Posts

Interested in privacy and security, tech and research stuff, and making our world a better place. SWE #Google, ex research #Aalto, #JKU, #fhhgb. Opinions are mine.

In the past migrated from the now-hilarious birdsite-->@rainhard-->here.

On a bit of a social media/online break atm.

Tags: tfr

GitHub backs down, kills Copilot pull-request ads after backlash

Updated: Letting Copilot alter others' PRs was the wrong judgment call, says product manager

The Register

Anthropic lost a class-action suit over scraping books. Writers can register with Anthropic to be compensated for its pillaging of our copyrights.

The compensation system was AI-coded.

Anthropic can't keep track of our submissions. They don't know who wrote what.

Their customer support is AI-driven. Send a mail! Log in to a nonexistent page! Resubmit and it'll be fine!

This will be fine.   

I teach cybersecurity. And I genuinely don't know what to tell my students after this one.

Federal reviewers spent years trying to get basic encryption documentation from Microsoft for its GCC High government cloud. They couldn't get it. One reviewer called the system a "pile of shit," with data traveling from point A to point B the way you'd get from Chicago to New York: a bus to St. Louis, a ferry to Pittsburgh, and a flight to Newark. Each leg is a potential hijacking. They knew this. They said it out loud, in writing. Then they approved it anyway in December 2024, because too many agencies were already using it.

🔐 That's not a security review. That's a hostage negotiation.

Two things in this story should make every CISO and CIO uncomfortable:

🧩 Microsoft built its federal cloud on top of decades of legacy code that it apparently can't fully document itself
👮 "Digital escorts" often ex-military with minimal software engineering backgrounds are the firewall between Chinese engineers working on the system and classified U.S. networks 🤦🏻‍♂️

The scariest line in the whole ProPublica investigation isn't the "pile of shit" quote. It's this: FedRAMP determined that refusing authorization wasn't feasible because agencies were already using the product. Read that again. The security review process reached its conclusion based on sunk cost, not risk: a textbook ex post facto fallacy.

If that logic holds, the compliance framework is just documentation theater. And right now, CISA is being hollowed out, so there are fewer people left to even run the theater.

https://arstechnica.com/information-technology/2026/03/federal-cyber-experts-called-microsofts-cloud-a-pile-of-shit-approved-it-anyway/
#Cybersecurity #Microsoft #FedRAMP #Leadership #RiskManagement #security #privacy #cloud #infosec

Federal cyber experts called Microsoft's cloud a "pile of shit," approved it anyway

One Microsoft product was approved despite years of concerns about its security.

Ars Technica

Wow:

arxiv.org preprint: "The data heat island effect: quantifying the impact of AI data centers in a warming world".

Excerpt: "We estimate that the land
surface temperature increases by 2°C on average after the start of operations of an AI data centre, inducing local microclimate zones, which we call the data heat island effect."

https://arxiv.org/pdf/2603.20897

"AI data centres can warm surrounding areas by up to 9.1°C. Hundreds of millions of people live close enough to data centres used to power AI to feel warmer average temperatures in their local area."
https://www.newscientist.com/article/2521256-ai-data-centres-can-warm-surrounding-areas-by-up-to-9-1c/

Trying to convince my students that having every security policy change include a design doc describing the status quo, the desired outcome, why this change will achieve it, and why alternatives were rejected, and then implementing it via some automation schema so it can't accidentally be reverted for no obvious reason, is good, actually
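A minimal sketch of the "automation schema" half of that idea, assuming the desired policy lives in version control next to the design doc that justifies it, and a reconciler continuously reverts drift (all names and the policy format here are illustrative, not any specific tool):

```python
# Sketch: declarative security policy with drift detection and reconciliation.
# The desired state is checked into the repo; changing policy means changing
# the repo, so ad-hoc live edits can't silently persist.

DESIRED_POLICY = {
    "require_mfa": True,
    "min_tls_version": "1.2",
    "allowed_admin_groups": ["sec-admins"],
}

def detect_drift(live_policy: dict) -> dict:
    """Return every setting where the live system diverges from the desired state."""
    return {
        key: {"desired": desired, "live": live_policy.get(key)}
        for key, desired in DESIRED_POLICY.items()
        if live_policy.get(key) != desired
    }

def reconcile(live_policy: dict) -> dict:
    """Revert drift: the declared state always wins over manual changes."""
    return {**live_policy, **DESIRED_POLICY}

# Someone quietly disabled MFA on the live system:
live = {"require_mfa": False, "min_tls_version": "1.2",
        "allowed_admin_groups": ["sec-admins"]}
print(detect_drift(live))              # flags the require_mfa mismatch
print(reconcile(live)["require_mfa"])  # back to True
```

The point of the round-trip is that a revert is now a visible diff against a documented decision, not an unexplained config change.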
As the number of LLM-generated patches in my inbox increases, I am starting to experience the sort of maintainer stress that has long been predicted. But there's another aspect of this that has recently crossed my mind.

Just over a week ago, a new personality showed up with a whole pile of machine-generated patches claiming to fill in our memory-management documentation. A few reviewers had some sharp questions, the response to which has been ... silence. This person doesn't seem to have cared enough about that work to make an effort to get past the initial resistance.

Once upon a time, somebody who had produced many pages of MM documentation would be invested enough in that work to make at least a minimal attempt to defend it.

Kernel developers often worry that a patch submitter will not stick around to maintain the code they are trying to push upstream. Part of the gauntlet of getting kernel patches accepted can be seen as a sort of "are you serious?" test.

When somebody submits a big pile of machine-generated code, though, will they be *able* to maintain it? And will they be sufficiently invested in this code, which they didn't write and probably don't understand, to stick around and fix the inevitable problems that will arise? I rather fear not, and that does not bode well for the long-term maintainability of our software.

SCOOP: Someone has found new samples of the iPhone spyware DarkSword and published them on GitHub, putting millions of iOS users at risk.

A cybersecurity researcher told us that the leaked spyware is "way too easy to repurpose" and "we need to expect criminals and others to start deploying this."

"The exploits will work out of the box," iVerify's Matthias Frielingsdorf said. "There is no iOS expertise required."

http://techcrunch.com/2026/03/23/someone-has-publicly-leaked-an-exploit-kit-that-can-hack-millions-of-iphones/

Someone has publicly leaked an exploit kit that can hack millions of iPhones | TechCrunch

Leaked "DarkSword" exploits published to GitHub allow hackers and cybercriminals to target iPhone users running old versions of iOS with spyware, according to cybersecurity researchers.

TechCrunch

I just learned that a new release of the decentralized, open source Android (and iOS, but that requires a centralized Apple service) key attestation library warden-supreme has landed. It explicitly supports alternative/custom roots of trust for the attestation chain now and comes with a test for @GrapheneOS keys: https://github.com/a-sit-plus/warden-supreme/blob/development/serverside/roboto/src/test/kotlin/GrapheneOsTests.kt

Nice! That's a good match for our academic research direction on digital identity (https://digidow.eu): avoiding points of centralization for better resilience against many types of threats. We'll most probably use this for our prototype Android apps that require or benefit from key attestation guarantees and can't/shouldn't use Play Integrity (e.g., because they only communicate with each other over Tor hidden services, and embedding a Warden backend on one side is much easier than coming up with a form of mixnet proxy service for querying central instances while retaining an unlinkability guarantee).

What could possibly go wrong?

The US Federal Reserve (which regulates US banks) is about to reduce the capital adequacy requirements for banks that were raised in the wake of the 2008 financial crisis.

Of course, nearly 20 years after the global financial crisis, regulators & bankers may say they've learned the lessons of 2008, but what they really mean is they've (wilfully) forgotten them.

Just one more act bringing a crisis nearer (as if attacking Iran wasn't enough)!

#politics #banking
h/t FT