Stefan Gast

223 Followers
319 Following
1,005 Posts

PhD Candidate in the CoreSec group at #TUGraz, focusing on side-channel security. Apart from that, I also post #Linux and #privacy related stuff.

Opinions posted here are my own and do not necessarily reflect those of my employer.

Website: https://stefangast.eu
I personally consider "I asked ChatGPT to generate a response to you" not witty but a form of insult. Please don't do that. And if that is how you want to talk to people, at least don't tell me. It's offensive.

in case you need a taste of how fucked the tech industry is right now, I'm being required to use AI at work. if I talk about how it fucks up or overcomplicates basic asks, it's because I "don't know how to use it" which indicates a "lack of growth mindset", and thus poor performance. I've been told this directly to my face, starting immediately.

so not only must I use Claude, I have to cover for Claude's mistakes, and then go the extra mile to pass off my own work as Claude's.

Ever wondered how Linux manages process states, especially uninterruptible sleep ("D" in ps/top/etc)? What if I told you that -- despite what most #linux users think -- you can often actually interrupt or kill them?

https://chrisdown.name/2024/02/05/reliably-creating-d-state-processes-on-demand.html

Creating controllable D state (uninterruptible sleep) processes
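As a quick, hedged sketch to go with the post above (standard procps tooling only; nothing specific to the linked article is assumed), you can list any processes currently in uninterruptible sleep like this:

```shell
# List processes whose state field starts with "D" (uninterruptible sleep).
# The STAT column can carry extra flags (e.g. "Ds", "D+"), so match on the
# prefix only rather than the whole field.
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/'
```

Since kernel 2.6.25, many such waits are actually TASK_KILLABLE: the process ignores most signals but still responds to SIGKILL, which is why `kill -9` often works even on processes shown as "D".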

tl;dr:

You must ask for consent before you collect someone's information or do something to them.

Informing them after the fact, then offering to delete the information, isn't respecting consent. And in many cases, it isn't legal either.

#Consent #Privacy

i love that we went from "zero trust" as a fundamental buzzword to "trust autonomous nondeterministic agents everywhere in your stack"

Not only do generative AI models replicate human biases, but they also tend to amplify them. While significant efforts are being made to correct these discriminatory biases, they remain difficult to identify because they are not always explicit.

In this article, we’ve taken a closer look at the “black box” ahead of a public conference at EPFL on the issue of algorithmic discrimination: https://actu.epfl.ch/news/the-myth-of-neutrality-in-generative-ai/

🕠 Roundtable | Tuesday, March 24: https://memento.epfl.ch/event/racial-and-discriminatory-bias-in-generative-ai-ro/

The myth of neutrality in Generative AI

As part of the Week of Action Against Racism, EPFL is exploring the issue of algorithmic discrimination. Ahead of a public conference on campus dedicated to the topic, we’ve taken a closer look at the “black box.”

As the number of LLM-generated patches in my inbox increases, I am starting to experience the sort of maintainer stress that has long been predicted. But there's another aspect of this that has recently crossed my mind.

Just over a week ago, a new personality showed up with a whole pile of machine-generated patches claiming to fill in our memory-management documentation. A few reviewers had some sharp questions, the response to which has been ... silence. This person doesn't seem to have cared enough about that work to make an effort to get past the initial resistance.

Once upon a time, somebody who had produced many pages of MM documentation would be invested enough in that work to make at least a minimal attempt to defend it.

Kernel developers often worry that a patch submitter will not stick around to maintain the code they are trying to push upstream. Part of the gauntlet of getting kernel patches accepted can be seen as a sort of "are you serious?" test.

When somebody submits a big pile of machine-generated code, though, will they be *able* to maintain it? And will they be sufficiently invested in this code, which they didn't write and probably don't understand, to stick around and fix the inevitable problems that will arise? I rather fear not, and that does not bode well for the long-term maintainability of our software.
The `left-pad` incident was 10 years ago today.

https://en.wikipedia.org/wiki/Npm_left-pad_incident

Thankfully, we've completely solved software supply chains in the years since.

Does anyone know where to find more info on the surveillance economy online? I was looking for an update on the unfortunate Debora Silvestri who crashed so badly yesterday, and of course, was met with a "We value your privacy" banner where I could consent to giving away… something?

The Privacy Policy talks about two cookies - both Google Analytics - and two partners for gaining "audience insights". The actual cookie pop-up lists 1,709 (!) so-called "partners", many with "legitimate interest". Basically all of these are companies nobody has ever heard of.

I know I'm leaking info like my IP address, browser, and device details. What I can't understand is how all these 1,709 little leeches can possibly deliver enough value and generate revenue based on this information. Who pays them, and for what?

Thanks!

#advertisements #surveillance #cookies

Something I really really want to emphasize here:

The "age verification" bit doesn't really matter. Even mandating the existence of parental controls is debatable in many contexts.

The "developer liability to know personal information" bit is an existential threat to free software. And it's the bit that deserves a defense from every possible angle (freedom of speech, expression, and privacy law, to name a few).

https://mastodon.social/@sarahjamielewis/116178334950236964