Today was the #PapersInSystems event on the paper:
How to Perform Hazard Analysis on a "System-of-Systems" by Nancy Leveson

Thanks to @adrianco I learned a lot about #STPA & #STAMP (though there's much more still to learn)

And I got some more ideas about how it fits, or can be applied, to #cybersecurity.

The following is my (probably flawed) understanding

Let's dive in:

In STAMP (System-Theoretic Accident Model and Processes) safety is treated as a dynamic control problem rather than a failure prevention problem.

This leads to the following generic abstraction (model) of a safety-relevant system as a socio-technical system:

Source: Engineering a Safer World: Systems Thinking Applied to Safety
by Nancy G. Leveson

This model maps directly onto IT systems

See Failing Over Without Falling Over by @adrianco
https://github.com/adrianco/slides/blob/master/FailingWithoutFalling-9.29.pdf


To analyse possible failures, or rather inadequate controls, you can go through a "standard" set of hazards specific to a certain point in the model.

E.g. you check the sensors of your system against the STPA Hazards regarding Sensor Metrics:

  • Missing updates
  • Zeroed
  • Overflowed
  • Corrupted
  • Out of order
  • Updates too rapid
  • Updates infrequent
  • Updates delayed
  • Coordination problems
  • Degradation over time
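As a minimal sketch of what checking a metric stream against a few of these hazards could look like in code (the names, thresholds, and `Reading` type are my own illustration, not from STPA):

```python
from dataclasses import dataclass

@dataclass
class Reading:
    ts: float     # timestamp in seconds
    value: float  # reported metric value

def sensor_hazards(readings, max_gap=5.0):
    """Flag a few STPA-style sensor-metric hazards in a reading stream."""
    hazards = set()
    if not readings:
        hazards.add("missing updates")
        return hazards
    prev = readings[0]
    for r in readings[1:]:
        if r.ts < prev.ts:
            hazards.add("out of order")
        elif r.ts - prev.ts > max_gap:
            hazards.add("updates delayed")
        prev = r
    if all(r.value == 0 for r in readings):
        hazards.add("zeroed")
    return hazards
```

The other hazards (overflowed, corrupted, too rapid, ...) would get similar checks; the point is that the checklist translates fairly directly into monitoring code.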

Or check your Human Control Actions against the Human Control Action Hazards:

  • Not provided
  • Unsafe action
  • Safe but too early
  • Safe but too late
  • Wrong sequence
  • Stopped too soon
  • Applied too long
  • Conflicts
  • Coordination problems
  • Degradation over time
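Since this is a fixed checklist, one way to apply it (my own sketch, not an official STPA tool) is to cross every control action in your system with every hazard and review the resulting questions:

```python
# The ten human-control-action hazards above, as a reusable checklist.
HUMAN_CONTROL_ACTION_HAZARDS = [
    "not provided",
    "unsafe action",
    "safe but too early",
    "safe but too late",
    "wrong sequence",
    "stopped too soon",
    "applied too long",
    "conflicts",
    "coordination problems",
    "degradation over time",
]

def analysis_prompts(control_actions):
    """Cross every control action with every hazard to get review questions."""
    return [
        f"Is '{action}' hazardous when: {hazard}?"
        for action in control_actions
        for hazard in HUMAN_CONTROL_ACTION_HAZARDS
    ]
```

For two control actions this yields twenty questions to work through, which is exactly the kind of systematic coverage the checklist is meant to give.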

I'm not completely clear on how to map this model to #Cybersecurity or how to integrate an attacker. But you could see #STRIDE as possible Data Plane Hazards:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

So you "just" need hazard lists for the other planes and interaction points ;-)
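To make that concrete, here is a sketch of how such a table could be organised, with STRIDE filling in the data plane and the other entries left as placeholders (the plane names and the whole structure are my assumption, not something from STPA):

```python
# Hazard checklists keyed by plane / interaction point. STRIDE covers
# the data plane; the empty entries mark the lists still to be written.
HAZARDS_BY_PLANE = {
    "data plane": [
        "spoofing", "tampering", "repudiation",
        "information disclosure", "denial of service",
        "elevation of privilege",
    ],
    "control plane": [],      # checklist still to be defined
    "human controller": [],   # could reuse the human control action list
}

def planes_missing_checklists(table):
    """List the planes for which no hazard checklist exists yet."""
    return [plane for plane, hazards in table.items() if not hazards]
```

The "just" in quotes is doing a lot of work here: filling in the empty lists is the actual research problem.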

Thanks again to @adrianco, @tianijones, @yvonnezlam, @RuthMalan and the others for your input.

And @adamshostack for the ideas regarding #cybersecurity

@realn2s @tianijones @yvonnezlam @RuthMalan @adamshostack There are some specific examples of applying this to security topics on the MIT site and in previous conference papers from the last few years. It’s fairly well developed from what I’ve seen, but not my personal focus.

@realn2s @tianijones @yvonnezlam @RuthMalan @adamshostack http://psas.scripts.mit.edu/home/mit-stamp-workshop-presentations/ and in particular search for security e.g.

2021 STPA Applied Before the SolarWinds Attack
Michael Bear (BAE)
John Thomas (MIT)
William Young (U.S. Air Force - USAF)

2021 Cybersecurity Incident Analysis by CAST using the Report of Unauthorized Access to the Information System
Tomoko Kaneko (National Institute of Informatics)

@adrianco @realn2s @tianijones @yvonnezlam @RuthMalan

There's an important difference between prospective discovery and reactive discovery. "We were able to find this problem with this tool when we knew where to look" is not a low bar in cyber, but "We were able to find this problem; it popped out of the noise when the tool was used by our normal staff" is a different, and higher, bar.

@adrianco @realn2s @tianijones @yvonnezlam @RuthMalan

Also, the question of 'how do we prospectively do this' is a complex one. My essential argument is that the 'STPA hazards' Adrian presents in the middle of his (excellent) deck are, for many engineers, not precise enough about the cyber hazards to elicit/discover what problems will happen.

@adamshostack @realn2s @tianijones @yvonnezlam @RuthMalan I haven't looked in detail at the security examples of how to use STPA, but I agree that it's a different set of generic hazards from the ones used to detect "out of control" hazards that I've been focusing on.