"Thus, determining the correct rate
at which to refresh DRAM cells has become more difficult, as also
indicated by industry [45]. This is due to two major phenomena, both
of which get worse (i.e., become more prominent) with technology
scaling. First, Data Pattern Dependence (DPD): the retention time of a DRAM cell is heavily dependent on the data pattern stored in itself and in the neighboring cells [69]. " - worth reading: https://arxiv.org/pdf/1703.00626.pdf
@HalvarFlake this is beautiful: basically non-deterministic RAM. It means that all the work Sun Microsystems had done on self-healing suddenly becomes of great relevance because you have to assume RAM is not trustworthy unless verified.
@cynicalsecurity @HalvarFlake RAM is also not trustworthy if verified then presumed safe, e.g. TOCTOU bus hijacking. This is provable using lower-bandwidth RAM technologies like PSRAM, where there are fewer pins and lower speeds. I've done it in closed environments as a PoC. The only solution is RAM with an embedded TPM or the ability to encrypt all data before it traverses the bus. Either solution is expensive, although Atmel's crypto RAM is a good step.
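The "authenticate before it traverses the bus" idea can be sketched in a few lines. This is a toy model with a hypothetical per-boot key held on-die, not Atmel's product or any real memory controller; it shows how bus tampering is detected, though on its own it does not close the TOCTOU window unless the check is atomic with use:

```python
import hmac
import hashlib
import os

ON_DIE_KEY = os.urandom(32)  # hypothetical per-boot key that never leaves the SoC

def seal(block: bytes) -> bytes:
    """What a block looks like on the untrusted bus: data || MAC."""
    return block + hmac.new(ON_DIE_KEY, block, hashlib.sha256).digest()

def unseal(sealed: bytes) -> bytes:
    """Verify the MAC on the way back in; reject anything modified in transit."""
    block, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(ON_DIE_KEY, block, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("RAM block modified on the bus")
    return block
```

A full scheme would also encrypt the block and bind the MAC to its address and a version counter to stop replay; this sketch covers integrity only.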
@donb @HalvarFlake that is a very good point: you cannot trust the bus. How are we doing on external attacks against the bus à la Rowhammer?
@cynicalsecurity that's a better question for @HalvarFlake. My solution is to not let adversaries load code or finesse execution environments
@donb how on Earth do you prevent adversaries from running code, assuming your computers are used by humans? @HalvarFlake
@cynicalsecurity @HalvarFlake it's a lot easier to do in IoT architectures where you control all executable code :)
@cynicalsecurity @HalvarFlake @donb Side-question: How far can we trust the IOMMU? References would be very much appreciated!
@Kensan @HalvarFlake @cynicalsecurity counter: how can we trust silicon? :)
@donb @cynicalsecurity @HalvarFlake I would be fine with having a sane way to get trustworthy firmware...

@donb @HalvarFlake @Kensan ah, now /that/ I can answer because my dad worked on this precise issue in the 1970s. The short answer is "Mykotron" (those old enough will remember this as the NSA fab which made the Clipper Chip), i.e. build your own tightly controlled fab. The longer answer is a verifiable design built by two/three separate fabs which are then subjected to 100% coverage with test vectors.

There are issues with this too: how do you cover what has been "added and removed here"?

@cynicalsecurity @Kensan @HalvarFlake not just addition/removal, but implementation of the security model, side channel issues exploitable by software, and more. Some friends of mine at Tortuga Logic are doing great work on silicon verification, but it's not exhaustive. You can control what silicon is manufactured, but design verification is a different beast altogether.

@Kensan @HalvarFlake @donb (part 2) This has the obvious potential for "flies with littler flies on top" when you attempt to verify the unknown, so a solution was required. My dad, as the "QA wizard", had authority to grab anything from the production line and torture it, literally. I have stacks of war stories of GECOS (not the field, the OS) being taken to its edge cases and microcoding on the DPS-6 (http://www.ricomputermuseum.org/Home/equipment/honeywell-dps-6)…

(cont'd)

@donb @HalvarFlake @Kensan Now, ten years ago my dad told me there was no way Intel (or AMD) had 100% test vector coverage on their microprocessors and we set about asking ourselves "well, then what is missing and how can we use it?". The answer, which does not fit in a toot, is surprisingly obvious if you have the correct mindset.

So, verification of silicon is key at the fab, and playing fab A vs. fab B was how it was done in the 70s, when the issue was defects, not security.
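The fab-A-vs-fab-B game is differential testing: feed both dice identical vectors and flag any divergence. A minimal sketch (the adder and the "trojan" trigger value are made up) also shows why random vectors alone are poor at catching a deliberately hidden behaviour:

```python
import random

def differential_test(impl_a, impl_b, n_vectors=10_000, width=16, seed=0):
    """Drive both 'dice' with identical random vectors, collect divergences."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(n_vectors):
        a, b = rng.getrandbits(width), rng.getrandbits(width)
        if impl_a(a, b) != impl_b(a, b):
            mismatches.append((a, b))
    return mismatches

def golden(a, b):
    """Reference die: a plain 16-bit adder."""
    return (a + b) & 0xFFFF

def trojaned(a, b):
    """Hypothetical malicious die: identical except for one magic trigger."""
    if (a, b) == (0xDEAD, 0xBEEF):
        return 0
    return (a + b) & 0xFFFF
```

Ten thousand random 16-bit pairs have roughly a 10^4/2^32 chance of hitting the single trigger, which is why defect-oriented random testing says little about security: the divergence only shows under the directed vector `(0xDEAD, 0xBEEF)`.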

@Kensan @HalvarFlake @donb (missing background: dad is a Physicist but built detectors for CERN as his thesis in the 60s, then went off to Honeywell when they built hardware to design CPUs, then off to build chemical plant real-time control systems via a stint at a systems integrator, then America's Cup "electronics wizard". He now builds steam trains).
@donb @HalvarFlake @Kensan This "defects" issue brings me back to a core belief he implanted into me: defects are security vulnerabilities via opportunity; therefore, building a reliable (aka "dependable") system buys you security. He used to test my code in a way I have honestly never seen again (although Tavis definitely is in his league) and demand that my code pass tests which still scare me. Traumatised enough that I went to read Pure Mathematics at university to feel safe…
@Kensan @HalvarFlake @donb Personally I believe the answer lies in a mixture of returning to the old discipline in building systems & writing code, applying dependable computing methodologies to everyday code, and borrowing from what has been done before (e.g. Tandem running processors in lockstep: not trivial to hack microcode and keep the timing for lockstep, for example). All the pieces of the puzzle are there: if you look at my Chimaera design there's a method to my madness…
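The Tandem-style lockstep mentioned above reduces to: advance two redundant units one step at a time and halt the moment their states disagree. A schematic sketch, with `step` functions standing in for whole processors:

```python
def run_lockstep(step_a, step_b, state, n_steps):
    """Advance two redundant units in lockstep; halt on the first divergence."""
    sa = sb = state
    for i in range(n_steps):
        sa, sb = step_a(sa), step_b(sb)
        if sa != sb:
            raise RuntimeError(f"lockstep divergence at step {i}")
    return sa
```

The security observation in the toot is exactly this comparison: tampering with the microcode of one unit has to produce bit-identical state on every single step to stay invisible, which is far harder than merely producing the right final answer.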
@donb @HalvarFlake @Kensan Good Lord, I wrote an essay, I apologise :( This is really the subject for a set of lectures with a suitable audience, not late night rambling.
@cynicalsecurity @Kensan @HalvarFlake actually I've enjoyed reading your thoughts. I've also enjoyed that you could adequately write within the Mastodon character limit :)
@HalvarFlake @Kensan @cynicalsecurity also: the CFP for 44con is still open... cc @stevelord
@donb @stevelord @Kensan @HalvarFlake I am allergic to conferences, my place is at the top of the Jungfrau after mining the train line. :)
@cynicalsecurity @HalvarFlake @Kensan @stevelord didn't you just give a talk on nuclear security recently? :-P
@donb @stevelord @Kensan @HalvarFlake my knowledge of nukes is not as good as I wish it was and my library on the subject has seriously been neglected since 2010. Missed some excellent texts on the South African nuclear programme, for example. Ah, regrets.

@donb @HalvarFlake @Kensan I truly do wonder how expensive it would be to start performing 100% coverage of a Xeon core as opposed to the current probabilistic approach.
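For a sense of scale (back-of-the-envelope, with an assumed and generous tester throughput): exhausting even a single stateless two-operand 64-bit operation is already out of reach, before any sequential state or microarchitecture is considered.

```python
combinations = 2 ** 128               # two 64-bit operands, no internal state
vectors_per_second = 10 ** 9          # optimistic ATE throughput (assumed)
seconds_per_year = 365 * 24 * 3600
years = combinations / (vectors_per_second * seconds_per_year)
print(f"{years:.1e} years")           # on the order of 1e22 years
```

Which is why coverage on a full core is necessarily probabilistic or structural (directed vectors, scan chains, formal methods) rather than exhaustive.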

The next essay should be on firmware, I guess. Then we can discuss peripherals (or, as would be more appropriate, "service processors" recycling the name from the 60s).

On the issue of busses: I remember my dad working on a "secured RAMBUS". Serial bus made it "easy" to add security packets…

@donb @HalvarFlake @Kensan P.S. I was made to study microprocessor design on a huge yellow-covered book which had superconducting Josephson junctions in the last chapter after GaAs. This dates me. Badly.
@cynicalsecurity @HalvarFlake @donb Really enjoying your "essay"! Organizing a talk for you here at HSR is still on my list of things I should have done already...