Developers should make "abuser stories" a thing.

"As a stalker,

I want to track my ex's every move,

so that I can 'coincidentally' run into them at any time."

"As a thief,

I want to be able to reset passwords using SMS verification,

so that I can compromise any account by bribing a telco employee."

@ryanc isn't that basically how we do threat modeling? Except we use third person, it's weird otherwise

@aris It is, but this distills it down to something that is easy for most people to understand.

Who is the threat actor?

What do they want?

Why do they want it?
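The three questions above can be sketched as a tiny structured template, for teams that want to keep abuser stories alongside regular backlog items. This is only an illustrative sketch; the `AbuserStory` class and its field names are invented for this example, not taken from any post in the thread.

```python
from dataclasses import dataclass

@dataclass
class AbuserStory:
    """A distilled abuser story: who the threat actor is,
    what they want, and why they want it."""
    actor: str   # Who is the threat actor?
    want: str    # What do they want?
    why: str     # Why do they want it?

    def __str__(self) -> str:
        return f"As a {self.actor}, I want {self.want}, so that {self.why}."

# The stalker story from above, in template form:
stalker = AbuserStory(
    actor="stalker",
    want="to track my ex's every move",
    why="I can 'coincidentally' run into them at any time",
)
print(stalker)
# prints: As a stalker, I want to track my ex's every move,
#         so that I can 'coincidentally' run into them at any time.
```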

@ryanc decent way to distill threat models down to the bare minimum needed to convey the idea. I like it.
@gsuberland This is why I get paid enough for picking fights in court with entities which have thirteen figure annual budgets not to be completely insane.
@ryanc seems to me that this would be good for helping more people involved in software Get It
Design Under Pressure – Simply Secure

@simulo Oh, I like these. "Miscreant" is a term of art used in some circles to cover basically the same sorts that "persona non grata" is being used to describe.

@simulo @ryanc
Nice. I was looking for something similar.

I played for some time with the idea that there should be a "game" helping development teams do threat modelling and create abuse stories (incorporating stalkers and abusive partners).

Something with elements of #EoP and #BackdoorsAndBreaches but with the persona non grata in mind.

#SeriousGames

@ryanc this would certainly make more sense than my old manager's approach of contorting them into user stories that make no sense.

"As a spammer, I want to sign up with a bot, so I can post links to my website" makes way more sense than "as a user, I want to complete a captcha, so that I can sign up". No, they don't. No user has ever wanted that. "As a user, I want a spam-free website" just about works, but really, the users are not involved in this one; leave them be.

@ryanc If you're not developing with documented abuse cases, you're neglecting your threat model.
@ryanc Are there developers that don’t do this already? I suppose that explains the poor security of a lot of products.

@david_chisnall It's not just security - I included the stalker for a reason.

People have a "how would my partner's creepy ex use this" conversation for major features.

@ryanc I'd include that under the heading of security. It's closely related to the 'can an attacker leak information from my smart device that tells them when my house is unoccupied' threat.

I'm still waiting for an organised crime syndicate to provide a service that aggregates a load of data from Facebook and similar to tell petty criminals which houses near them are unoccupied.

@david_chisnall @ryanc I agree, in both cases the product is being used "as intended", in the sense that no bypass of any security control takes place. They are either both security or both not security, but safety/privacy, depending on your definition of the term security.

In any case, I agree with the original idea as well, and including stalkers in the threat model is one of the most important things for security professionals to do.

@sophieschmieg @david_chisnall @ryanc Dana Fried calls these “interface bound attackers” (@ not completing; I know she’s on fedi)
@ryanc At least some engineers *are* always looking for failure modes.
@ryanc @thedarktangent Developers are seldom in charge of such decisions.
@ryanc I created these “vulnerable user stories” a while back, in that vein : https://github.com/mhoye/minimum-viable-user-stories

@ryanc In the UX world, this is sometimes referred to as “abusability testing”.

(Although it’s more than just testing, of course)

@ryanc I just discussed adding that type of (un-)acceptance scenarios to our specs this morning. I'll report back on how it goes down.

@ryanc I've been in teams where we called these "negative user stories". Very useful for distilling threats.

Sometimes you also need to translate them into "headlines we never want to see our company's name in" for further up the leadership chain.
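One way to make a negative user story pay off continuously is to turn it into an automated check that asserts the abuse path fails. A minimal sketch, assuming a toy `signup` handler; both the function and the captcha-token scheme are invented for this example, not any real framework's API:

```python
from typing import Optional

def signup(username: str, captcha_token: Optional[str]) -> bool:
    """Toy registration handler: refuses any signup without a solved captcha."""
    return captcha_token == "solved"

def test_bot_signup_is_rejected() -> None:
    # Negative story: "As a spammer, I want to sign up with a bot,
    # so I can post links to my website." We assert the abuse path
    # fails, rather than only testing the happy path.
    assert signup("spambot_01", captcha_token=None) is False

def test_human_signup_still_works() -> None:
    assert signup("alice", captcha_token="solved") is True

test_bot_signup_is_rejected()
test_human_signup_still_works()
print("negative story holds")
```

In a real project these checks would live in the test suite next to the positive stories, so the abuser story keeps being verified on every build.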

@ryanc actually more a tool for defensive security. But definitely a great idea. Can I steal?
@7heo I probably stole it from somewhere, go wild.

@ryanc Hiya! A friend passed this toot along to me, and if you're interested I have a post about it: https://24ways.org/2018/be-the-villain/

In my experience the main barrier is organizational attitudes towards suggesting this can potentially happen 😰

Be the Villain

Eric Bailey asks us to take on the role of King Herod in our projects to consider how the products, services and processes we design could be abused to cause harm rather than to do good. No matter how good our intentions, we must remember that not every user has pure motivations and that every tool is a weapon if you hold it right.

@ryanc Makes perfect sense to me, the queer engineer who always had security as a special interest. What's always amazing to me is that nearly everyone else working on products needs to be presented with the idea of thinking about this as a valid use of their time. At least in good organizations, they often seem receptive to it, although they sometimes get fixated on one particular issue (e.g. memory safety) instead of the bigger picture
@recursive Sometimes, it can help to sell it to describe it as a "red-teaming exercise". @ryanc

@ryanc
"As a data breach buyer,

I want a few personal background questions to bypass all other account security,

So that I can access thousands of people's accounts with the marriage and vehicle registration records I already have."

@ryanc those are called "executive directives"
@ryanc some of us do - it doesn't always go over well.
@ryanc there is a very long history of "abuse cases" going back 22 years (at least) in the agile literature. That said, you are right!

@ryanc @matthewskelton this is specifically a thing we did at a company I worked at a few years back. Our security team consulted with the dev teams to show them, help them, and teach them how to do it.

Now we'd probably call it an enabling team.

One of the coolest initiatives I've ever been part of.

@cmw @matthewskelton @ryanc How many stories did you end up with? Do you have public samples/catalogs? I feel like I see this idea and then there’s about four stories, it’s high impact once, and hard to sustain. (Versus other forms of threat modeling like TM every story). Not that every project has to be a program to be helpful. Just looking to understand

@adamshostack @matthewskelton @ryanc iirc we did come up with these when breaking down epics into stories, so depending on the epic it could be just two or three, or a whole bunch of them.

I don't think calling it threat modeling is appropriate though for what we did. We didn't go much deeper than client side validation abuse and sql injections.

Unfortunately it's been so long, I doubt I have any material left anywhere from that time.

If I find samples, I'll send them your way.

@ryanc Uhhh, I really like that. Few years ago I coined the term Breakstorming, where ppl sit together and verbally discuss how to break a piece of software. Abuser Stories complement them very well!

@ryanc ooooh I love this.

Also, one could make it into a website. With specific examples from specific companies implementing these user stories in their systems.

@ryanc
I see people in the comments here talking about using this for threat modeling, but as a developer who thinks philosophy class was my most valuable subject, I see this as a useful tool for ethics in sysdev.

Less "can I do this" and more "should I do this".

@worldwidewerner Yeah, it's threat modeling, but it's a very limited form of threat modeling, and it's also other things.
@ryanc that is called threat modeling #threatmodeling
@ulf Of a sort, yes. I posted a more fleshed out version that said as much on linkedin a few minutes after I posted here.

@ryanc This is pretty much https://insights.sei.cmu.edu/blog/the-hybrid-threat-modeling-method/

We've developed this into a way of building security awareness in the dev teams by having them workshop the PnGs as a light-hearted team activity, building the bios and so on.

Then those are used for threat modelling as part of the agile design process, with the co-ordinator asking "What would 'Kevin' do with this feature?" and the team working through 'Kevin''s personality, goals, assets and so on, because they *know* 'Kevin'.

@ryanc @PwnieFan I now desperately want to use that for any user story I disagree with! 😄
@ryanc it's also another reason to have diversity in teams. If your lived experience is to look for ways something can hurt or kill you, you're more likely to see problems or attack vectors in something seemingly benign. Or, to have someone on the team to advocate why preferred surname needs to be an option (or not basing unchangeable identifiers on current name or email address) when the decision-makers are largely cishet men who won't ever experience changing their name.

@ryanc

Meanwhile one of my devs complained about certs and asked if I could get one with a 20 year expiration.

Also, they'd stored the previous cert in outlook for the last three years 😭

@ryanc
There's a YouTuber with a focus on EDC, who generally does good objective factual reviews of multitools, flashlights, and other pocket-sized gadgets.
In a recent video, he rounds up some recently-released items, including two new flashlights that have a built-in (loud) audio alarm.
He spends several minutes (out of a 24 minute, monetised video) wondering why anyone might ever need this feature. He definitely doesn't mention the lack of a "pull out the pin" mechanism for the alarms.
@ryanc but but, it doesn't add value!
@ryanc I learned that from an agile security guy to write better tests - I use this a lot in platform engineering too to take on a destructive mindset.