Jonathan Birch

@Seibai@infosec.exchange
101 Followers
16 Following
106 Posts
Penetration tester and security researcher currently working for Microsoft. Opinions and views expressed are my own and not my employer's.

If you want to be part of The Resistance,
it will cost you your smart speaker, your Ring doorbell, your smart glasses, and your AI assistants.

Mass surveillance is incompatible with revolution. Get rid of it.

#BeTheResistance #Privacy #MassSurveillance

The charger for my wife's hearing aids has a weird accessibility fail.

The charger is a little black plastic box that plugs into the wall. It has a slot for each hearing aid and a lid that snaps shut. Getting the hearing aids into position so that they will charge is rather fiddly - if they're slightly out of place, they won't charge. They each have a light that turns on when they're charging, but closing the lid of the case will often dislodge one sufficiently to break the connection.

Someone involved with the design of the charger or the hearing aids clearly recognized this problem, so there's a backup mechanism. It seems that if one hearing aid is charging but the other isn't, after a few minutes it starts making a sound.

A very quiet sound.

It sounds, roughly, like a single lonely cricket somewhere outside.

A sound which no one who needed the hearing aids could possibly perceive while not wearing the hearing aids.

This creates a weird dynamic where I occasionally have to communicate to my wife "crickets" so that she knows to fix the issue.

And all of this could be much more easily solved by just making the lid of the box out of clear plastic. Or adding a light on the outside of the box to indicate that the connection is bad.

#hearingaids #accessibility

(And writing this, I'm realizing that this was probably a heroic attempt at a software fix to a hardware problem. A fix which even sort of works for us. It's still a definite fail though.)

I will definitely be doing more versions of my cyberfascism keynote, mostly because I want more opportunities to get on stage and yell, "Less complying in advance and more 'Fuck you, make me!'"

Prepare your Tails in advance; you might need it soon.

#Privacy #Safety

Using Tails When Your World Doesn't Feel Safe Anymore

When browsing the web at home becomes dangerous to your safety, there are tools that can help minimize your digital traces to stay safe. Tails is one of these tools. Here's why, when, and how you can install and use Tails.

Privacy Guides

In honor of the late Robert Redford, "Sneakers", in high def ANSI with full subtitles:

ssh sneakers@ansi.rya.nc

(needs a terminal with 24 bit color support)

When giving #security guidance to developers, be sure to impart an understanding of the underlying problem, not just which APIs to use or not. If you don't do this, enterprising developers will often reintroduce the problem by adding the problematic capabilities to an otherwise safe API.

To give a specific example, and to try to atone for some of my past sins:

The underlying problem in unsafe deserialization that leads to remote code execution is user-provided data telling your application what type it wants to be. When data can choose what type it is, it can choose types that have exploitable side effects in their constructors, setters, or destructors. Polymorphic deserializers are inherently unsafe.
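The "data chooses its type" failure mode is easiest to see in Python's pickle, which is polymorphic by design. This is an illustrative sketch of the general problem, not the .NET APIs from the talk; the `Exploit` class and its benign `eval` payload are hypothetical stand-ins for an attacker's serialized data.

```python
import pickle

# Hypothetical attacker-controlled class: __reduce__ tells pickle to
# call an arbitrary callable during deserialization -- the serialized
# data, not the application, decides what runs.
class Exploit:
    def __reduce__(self):
        # Benign stand-in for a payload; a real exploit would invoke
        # os.system or similar here.
        return (eval, ("2 + 2",))

payload = pickle.dumps(Exploit())

# The consuming code never references Exploit; merely loading the
# attacker's bytes executes the embedded call. This is the side effect
# a polymorphic deserializer cannot prevent.
result = pickle.loads(payload)
print(result)  # 4 -- the call ran during deserialization
```

The point is that no amount of caution in the calling code helps: the type, and therefore the side effects, travel inside the data.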

In the past I've told people "use this API, it's safe". But when that API is safe because it doesn't allow polymorphism, developers inevitably modify the API to add polymorphism when it makes the overall design of the application simpler.

The portion of the #OWASP #serialization cheat-sheet on .NET is based on a talk I gave in 2017 before I understood this problem. (https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html#net-csharp )

It's much more difficult, but training developers to write secure code requires teaching them what the real problems are.

#appsec #training

Deserialization - OWASP Cheat Sheet Series

Website with the collection of all the cheat sheets of the project.

After learning yesterday about the island of Stroma, I've become increasingly convinced that it needs to be the setting for some sort of survival horror game.

I mean, it's an entire abandoned island, with an abandoned 19th century village (Nethertown!), mostly still standing, that's frequently unreachable. And the map just feels perfect*.

The only real drawback I'm seeing is that it feels like the monster would have to be hungry sheep.

*map taken from Wikimedia's copy of the 2013 UK Ordnance Survey - https://commons.wikimedia.org/wiki/File:Stroma_OS_map.png

#stroma #caithness #scotland #sheep

We have to keep extra chairs near our computers for the cats. (Bandit is content with a folded up sheet on the coffee table).

This is necessary in large part because Minnie (the black cat in the foreground) has made a game of trying to jump into chairs just before I sit in them. If I wait until she's settled though, I can move that chair with her on it and swap a different one into its place.

#cat #caturday #catsofmastodon

My daughter, who has had a degree in computer science for 25 years, posted this observation about ChatGPT on Facebook. It's the best description I've seen:

@DrewKadel Love this. We want so badly for ChatGPT to produce answers, opinions, and art, but all it can do is make plausible simulations of those things. As a species, we've never had to deal with that before.

@ngaylinn @DrewKadel to be fair, most people on the Internet, given a question, write the right answer*, rather than everyone consistently writing the same wrong answer!

(* For most things, I'm sure there are several compelling counterexamples)

@anizocani @DrewKadel Oh, totally. Often the most statistically likely response is also the correct response. Just... not always.

@ngaylinn @anizocani @DrewKadel

I wish more people would understand this.

That's why I really dislike the take that it's "just fancy autocomplete". In order to have a *really* good autocomplete, you'd have to, in one way or another, internalize much of the world's knowledge. So, "just fancy autocomplete" is really not the witty criticism some think it is.

@maltimore
The reason we call it spicy autocomplete is because it is just a prediction model on the stuff it's ingested. What you seem to miss is that we already had all this, with Google. All LLMs have done is make it feel like something is answering, instead of being honest and returning results that match a search query.

@ngaylinn @anizocani @DrewKadel

@markotway @maltimore @ngaylinn @DrewKadel babe. it's two years later. you've been in a coma ever since the accident.

@ngaylinn @DrewKadel Well we had - conceptually this is *exactly* the same as with Eliza - just two orders of magnitude more sophisticated and two orders of magnitude more connected.

At its core it's really just people lacking technical understanding hallucinating an anthropomorphization of a conditional probability distribution.

With ChatGPT, the interface is the innovation, *not* the model.

@ftranschel @DrewKadel Oh? What makes the ChatGPT interface so innovative? :)

@ngaylinn @DrewKadel Using LLMs interactively is surely an innovative way of using them.

Case in point: When reduced to token completion (which ChatGPT hides away), the magic goes away... quite fast (:

@ftranschel @DrewKadel That's certainly true. They do a lot of "theater" to make it seem like you're talking to someone.

@ngaylinn @ftranschel @DrewKadel

Theatre is an excellent description. There's clearly NLP in front of the prediction engine itself, and some application of post-predictive review steps. But it's all presented as though this composite application is the LLM. It's like attending a seance.

@dhobern @ngaylinn @DrewKadel I agree with your perspective. For me it's more like a computer version of the Oracle of Delphi, but that's essentially just a different representation of the same idea.
@ngaylinn @ftranschel @DrewKadel I prefer to use the chatgpt model for token completion 😅

@ngaylinn @ftranschel @DrewKadel

Is the 'man behind the curtain'
Innovative?

IMHO; it's all more smoke and mirrors to discombobulate an already gaslit world.

@DrewKadel @ftranschel @ngaylinn And like how many orders of magnitude more energy consumed …

@drdrmc

Think you need to invent a new word, "orders of magnitude" doesn't quite cover it.

@ngaylinn @DrewKadel
Actually we as a species have had to deal with that before.

We call them grifters.

@bornach @DrewKadel I dunno. A grifter still wants something from you. They're hiding their intent, but it's something you can imagine, understand, and detect if you're paying attention.

LLMs are unreadable and unpredictable because they have no intent. They may switch between friend and grifter depending on what sounds right in the flow of the conversation, without any conception of what's good or bad for you or for them.

On the other hand, if a grifter asks an LLM to write them a script to achieve something specific, that's another thing entirely...

@ngaylinn @bornach @DrewKadel They have been specifically marketed as question-answering and search engine tools. The people misrepresenting them are the grifters.

@grvsmth @ngaylinn @bornach @DrewKadel

Agree - this is a tool that can be used by grifters. It seems like ChatGPT solves a relatively trivial problem: generating responses that fit the pattern of language found in authoritative sources. I believe that OpenAI is using it as a demonstration, generating lots of free media coverage to get paying customers to buy their product. The grift is the misrepresentation.

@ngaylinn
LLMs are never your friend

They are always in what can best be described as "the grifter" mode. The entire training regime of a generative AI chatbot is geared towards getting one thing: an upvote from a human rating the quality of the conversation.

Admittedly this is an over-simplification. Reinforcement Learning from Human Feedback involves training a reward policy - a second neural network that is ultimately responsible for rewarding the chatbot for giving "good" responses.

Can ChatGPT be a doctor? Bot passes medical exam, diagnoses conditions

ChatGPT's latest software upgrade, called GPT-4, is "better than many doctors I've observed" at clinical diagnosis, one physician said.

Insider

@GordanKnott @ngaylinn @DrewKadel
Yet another flawed benchmark in which the LLM very likely memorised the answers
https://wandering.shop/@janellecshane/110104164829618120

Without any knowledge of how much the training dataset was contaminated by the medical exam questions/answers (and OpenAI's own whitepaper admits there is contamination)
https://youtu.be/PEjl7-7lZLA?t=4m0s
we cannot really know how it would perform in the real world if say a novel virus were to start spreading

Janelle Shane (@janellecshane@wandering.shop)

Attached: 1 image Remember seeing something about GPT-4 doing well on standardized tests? It turns out it may have memorized the answers. https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks #gpt4 #AIHype #ThisIsWhyWeDontTestOnTheTrainingData

The Wandering Shop
@ngaylinn @DrewKadel This is why I’m so tired of people calling machine learning “AI”. True AI doesn’t exist, and even if it did, there’s no reason to believe we can or should use it as a tool. ChatGPT is machine learning being used outside of its useful domain and media hyping it beyond credibility.
@deriamis @ngaylinn The "AI" label is misleading, but I'm pretty confident that we will never get to true Artificial General Intelligence - this is only a mirage of being close. It might well be that it could be enhanced to parse questions & redirect queries to best tools & use its LLM to make it sound smooth & chatty, or something like that- so that you'd have a generally useful & accurate research assistant, but not artificial intelligence. We may be stuck with the label however.
@DrewKadel @ngaylinn I suppose I should clarify that I think there exist strong and weak definitions of AI. The strong definition is the one to which you refer, and even if we somehow achieve it, there are moral and ethical concerns surrounding employing it as a tool. The weak definition is just ever more sophisticated ML with no sapience. AI is a “close enough” label where the distinction makes no difference. My point is that it clearly does for how ChatGPT is often being used.

@ngaylinn @DrewKadel

Sounds like we've been dealing with that in human form as misinformation media.

Like any machine optimization algorithm, it all depends on the fitness function. Maybe that's why algorithms (machine and human) are kept secret, it would break the illusion that ChatGPT is smart, that DALL-E is an artist, that the birdsite isn't biased, that Facebook is helping you, that faux news is truthful...

@ngaylinn @DrewKadel I’ve been asking both it and Bard to write a small piece of code for an IoT project I’m working on. The responses look plausible but are wrong. I can’t code, so I can’t tell how or why they are wrong. The answers look impressive and 90% functional, but they fall at the last step.
@Maddogeco I've been using ChatGPT for work (as a programmer) for a while. Yeah, you can never just trust it. It's great if you need to change how something works, it's great for generating test data, or writing SQL statements, but for code you need to learn how to use it. People think you can use it for everything and that's just not how that works.
@ngaylinn @DrewKadel So, you have never had to ride herd on 8 year olds?
@ngaylinn @DrewKadel Have you met religion? Literally inventing stories to explain reality.

@scottgal For sure! Back in the olden days if you had questions about the lights in the sky, you had to find somebody with low moral fiber to make up a plausible answer. Now we've built a machine that will instantly give you comforting nonsense. If that isn't progress, I don't know what is.

@ngaylinn @DrewKadel

@DrewKadel Does that image have alt text? I don't see the usual tooltip that I expect to pop up over an image...

Anyway, I like the response. I also think it's worth keeping in mind that (many) people are hoping this line of machine learning research is going to lead to a system that *does* give real answers, or at least is good enough at completion that its responses are indistinguishable from real answers. So there's always going to be a drive to test how well the models are doing and to push them toward giving more realistic and reliable responses, regardless of the fact that they're not actually designed to do that.

@diazona @DrewKadel No alt-text, and as a screen-reader user, I now don't know what the poster's daughter actually said, unfortunately...

@FreakyFwoof @diazona @DrewKadel

Here you go,
"Something that seems fundamental to me about ChatGPT, which gets lost over and over again:
When you enter text into it, you're asking "What would a response to this sound like?"
If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and...

@FreakyFwoof @diazona @DrewKadel

...an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing! But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it *is* doing something else. It's good at generating things that sound...

@FreakyFwoof @diazona @DrewKadel like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation. "
@simon_lucy @diazona @DrewKadel Aah, thanks very much. I appreciate it.
@FreakyFwoof @diazona I just woke up and added the alt text
@DrewKadel @diazona Thank you for doing so. It does help.
@DrewKadel @diazona And as a result, I feel confident in sharing it. As a screen-reader user without any sight whatsoever, it's sometimes hard to know whether what I *think* is true, is true. Without proper context, I could be sharing text that, in a post, says one thing, but the image could well be anything, from hate-speech to well, you can imagine. Alt-text isn't just for Christmas, it helps so many, many people.
@diazona @DrewKadel But why? What would be the purpose of a machine learning system that does give good answers?

@newrambler @diazona @DrewKadel

The purpose is to gain attention. The Turing Test was shattered when ChatGPT accused a real lawyer of a fake crime.

@j7748283 @newrambler @diazona But real people defame real people all the time! Look at Sidney Powell, Fox News and Dominion election systems!
@newrambler @DrewKadel It'd be like having a personal research assistant, I guess - it could save you some time searching the internet for random facts, and depending on what integrations it has, it could handle things like checking your schedule (or someone else's schedule, when you need to meet with them), finding times when you can go see a movie, looking up directions, ordering food, etc. Not that any of this seems especially high-impact, but it *does* make a real difference when mundane tasks become simpler and quicker.
@diazona @newrambler There are possibilities for sophisticated text generation and research in companies etc. With excellent operators and supervision it could do white papers, reports etc. But rolling it out as a $10k/mo subscription for business wouldn't get much action or attention so they market to a naive public.

@DrewKadel @diazona I research and write for a living, so I'm likely biased here. I can see the scheduling thing being useful, but how do you get a bot to do good research when it is prone to making up citations to articles that don't exist?

There's so much terrible writing out there that I suppose having machines write things won't make much of a difference in the white papers full of stakeholders and proactive solutions and so on.

@DrewKadel @diazona And I suppose if I get replaced by ChatGPT, I can go work at the P&G plant down the street.
@newrambler @DrewKadel We're talking about a (hypothetical) bot that gives good answers, which means - among other things - it will not make up citations to articles that don't exist.
@diazona Alt text added this morning
@DrewKadel Thanks! I can confirm I see the tooltip now.

@DrewKadel

It's so important for everyone to understand this

@DrewKadel No introspection and only waiting to generate the next line of a conversation. Sounds like quite a few humans!
@PCOWandre I think that's why so many are fooled: they don't even notice the role of listening, understanding or introspection in thought or communication. Of course they have some, quite a bit really, but they don't value it and ignore it.
@DrewKadel Has she made it public? And if so, would you mind sharing the link? I know a lot of people who really need to see this.