New blog post: The death of the line of death

The "line of death" is a security boundary in web browsers that separates trustworthy browser UI from untrusted web content; I think the concept is waning in utility over time.

https://emilymstark.com/2022/12/18/death-to-the-line-of-death.html


The line of death, as Eric Lawrence explained in a classic blog post, is the idea that an application should separate trustworthy UI from untrusted content. The typical example is in a web browser, where untrustworthy web content appears below the browser toolbar UI. Trustworthy content provided by the web browser must appear either in the browser toolbar, or anchored to it or overlapping it. If this separation is maintained, then untrusted content can’t spoof the trustworthy browser UI to trick or attack the user.
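The separation described above is essentially a geometric invariant. As a rough illustration (a minimal sketch with hypothetical names and an assumed toolbar height, not any browser's actual implementation), the line of death can be thought of as a simple check on where an element is drawn:

```python
# Illustrative sketch (hypothetical API): the "line of death" as a geometric
# invariant. The toolbar's bottom edge is the line; trusted browser UI must
# originate at or above it (or be anchored to it), while untrusted web
# content must stay strictly below it.

TOOLBAR_BOTTOM_Y = 80  # assumed toolbar height in pixels

def violates_line_of_death(element_top_y: int, is_trusted: bool) -> bool:
    """Return True if an element is drawn on the wrong side of the line."""
    if is_trusted:
        # Trusted browser UI detached from the toolbar blurs the line.
        return element_top_y > TOOLBAR_BOTTOM_Y
    # Untrusted web content must not reach into the browser chrome.
    return element_top_y < TOOLBAR_BOTTOM_Y

# A trusted prompt floating at y=200, far from the toolbar, violates the rule:
assert violates_line_of_death(200, is_trusted=True)
# Ordinary page content at y=200 is fine:
assert not violates_line_of_death(200, is_trusted=False)
```

The blog post's argument, in these terms, is that modern features (floating prompts, side panels, picture-in-picture) increasingly draw trusted UI below the line, so the invariant alone no longer captures what users can actually rely on.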

Emily M. Stark

@estark having had to teach this concept to a fair few folks, I can say both that I dislike its currently flimsy justification and that I'm not thrilled with the idea of removing it.

A better path forward seems like it would be grounded in something like the TLS interstitial research of a decade ago (which I know you were hoping to find for this piece).

Until then, "death to" seems premature.

@slightlyoff I’m not advocating for removing it in the post, just arguing that it’s becoming increasingly irrelevant (though not yet dispensable) and was never all that useful to begin with

(yes, the title is clickbaity but it was just too catchy not to use 🤪)

@estark we had debates about it relative to Edge's sidebar too, as you might anticipate, and on the back of that experience I can guarantee you that your post will be used to argue that we don't need to abide by it any more, even though that's not what you're arguing.

*sigh*

@slightlyoff well, I mean, apparently both Edge and Chrome’s security teams decided that we don’t need to abide by it, soooo…

@estark well, we didn't decide that, but it was a long conversation and required iterating back from The Bad Place.

My experience here suggests that team education is lacking, and that as we cycle out our old-timers, we need to get better about the principles we preach.

Seeing this in the perf area in a really dire way.

@slightlyoff if you do see people citing it in that way, I recommend pointing to the section that says “There are some attacks in which the line of death concept is really the best we know how to do. It’s fundamentally impossible to have a secure application environment without some trustworthy UI.”
@estark sure sure...I'm just saying...it's going to happen. It's a meeting I will have to be in and pre-emptively wince at. Can I invite you instead? ;-)

@slightlyoff absolutely! seriously — the whole reason I wrote this is because I want to have conversations with feature teams that are more nuanced than “you must obey this arcane security rule that obviously doesn’t translate to real users or modern browsers”

(I think I will retitle though to “the death of the line of death”. captures the facts of the situation more and sounds less like I’m on some kind of crusade)

@estark cheers! May take you up on this if/when there's a situation where I can.
@slightlyoff Btw a related piece of documentation you might find helpful is https://chromium.googlesource.com/chromium/src/+/HEAD/docs/security/security-considerations-for-browser-ui.md#prefer-existing-ui_ux-patterns_avoid-introducing-new-ones (particularly relevant to side panel) and other guidelines on that page
Chromium Docs - Security Considerations for Browser UI

@estark The embedded line of death concept around Payment Handler (and presumably FedCM, etc) is a great insight. I'd suggest that Chrome's experiments with embedding Wikipedia content in the page info bubble are another example of something similarly line-blurring.
@estark Great read, Emily. Thanks for putting this together and +1 on all of this.
@estark This is very good. Negative security indicators are very interesting as a concept. I wonder how the idea applies more broadly (to blue checks, for example).

@estark I hope you can feel my look of disapproval over that headline. ಠ_ಠ

But yeah, negative security indicators are generally more effective. That stated, I still think we don't have a good handle on when and how we should make use of positive indicators. Instead we clutter up the security UI space with a variety of things conveying numerous different messages, only a handful of which are even security relevant. I would prefer a very small, clear area delineated by the line of death.

@jschuh this was the toned-down headline, believe it or not! agree about very small clear area.
@estark This was the toned-down headline? 😲 Clearly you're suffering under the loss of discussing this topic ad nauseam with me in the office. Yes... that must be it. 😜
@estark Wonderful blog post! And this made me laugh out loud: "Theoretically, social security numbers could be replaced with unphishable public key credentials, but I’m not holding my breath."

@estark One bit that doesn't get much discussion is the role of experts vs. novices. There's no question that novices have no understanding of the difference between trusted UX and untrusted content, but in a design with no trustworthy pixels, even an expert can be completely fooled.

The advantage of allowing an expert to distinguish between trusted and untrusted is that they can "pull the alarm" and escalate to mitigations we know work for novices (URL Reputation interstitials), for example.

@estark I'd had this discussion with VPs in Windows 8 when we created the "Metro" browser with zero trusted pixels. They asked "How many people will even understand the trusted pixels" and I admitted that the number was low. The problem, I argued, was that some journalist was going to embarrass our security experts on stage at Blackhat with two screenshots: One real, and one fake, and even *they* would be unable to demonstrate that our product could be used safely vs. legacy IE.
@Ericlaw that’s true though I wonder if browsers have passed the point that even experts would notice the difference in an adversarial situation!
@estark @Ericlaw perhaps we should all be testing each other regularly? 😂 Even a full synthetic test with time-based scoring could be both fun and maybe useful.
@RickByers @estark @Ericlaw A problem here is that successful social engineering often uses a sense of urgency, and triggers emotions that can override all your logical best practices. So while a few pixels that delineate trusted UX is good for when you want to do a careful review of a page, it's not so good when you're being hurried. I'm confident I'd succeed at the former, and fail at the latter.

@parkern @RickByers @estark

Oh, for sure, I don't think we can hope to build something that would /never/ fool an expert, but I think we should continue to aspire to build experiences that /might/ be discernible by an expert.

I've been meaning to share some examples I've been collecting this year.

@Ericlaw @RickByers @estark I imagine you have a shelf with jars full of formaldehyde with spooky phishing sites preserved for our fascination. Do share.
@estark @nasko the big tipping point for Edge was the Vertical Tabs feature. It made it clear we had to rethink the line of death and some interesting proposals came out of it but were never released. It also doesn’t help that OS prompts are inconsistent. We certainly need a better way forward.
@estark Nice blog post!
You might be interested in my thesis: Involving the end user in access control: from confined processes to trusted human-computer interface https://www.ssi.gouv.fr/uploads/2018/04/salaun-m_these_manuscrit.pdf (Google Translate is your friend 😉)
The second part, and particularly chapter six, is focused on what security guarantees we may want for secure systems handling different trust sources exposed to users, and a formalization of what we need to implement to provide such guarantees to users.