use aes::Aes256; // block cipher type from the `aes` crate (RustCrypto)

const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4;
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

pub struct Aes256Ctr32 {
    aes256: Aes256,
    ctr: [u8; PAD_SIZE],  // counter blocks for the parallel keystream batch
    pad: [u8; PAD_SIZE],  // buffered keystream bytes
    pad_offset: usize,    // how much of `pad` has been consumed so far
}
i was gonna say "oh maybe he's C-brained" but no you can still do sizeof(arr) in C
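to make the struct above concrete: here's my own reconstruction (not libsignal's actual code) of what `pad`/`pad_offset` are for. keystream gets produced PAD_SIZE bytes at a time and consumed lazily, so short messages don't pay for a full refill on every call. `BufferedStream` and `refill` are hypothetical names; the refill closure stands in for encrypting the counter blocks.

```rust
// Sketch: buffered-keystream XOR, generic over how the pad is refilled.
const PAD_SIZE: usize = 128; // PAR_BLOCKS * AES_BLOCK_SIZE, as in the struct above

struct BufferedStream<F: FnMut(&mut [u8; PAD_SIZE])> {
    refill: F,            // stand-in for "encrypt the next counter blocks"
    pad: [u8; PAD_SIZE],  // buffered keystream
    pad_offset: usize,    // bytes of `pad` already used
}

impl<F: FnMut(&mut [u8; PAD_SIZE])> BufferedStream<F> {
    fn new(mut refill: F) -> Self {
        let mut pad = [0u8; PAD_SIZE];
        refill(&mut pad);
        Self { refill, pad, pad_offset: 0 }
    }

    fn xor_in_place(&mut self, data: &mut [u8]) {
        for b in data {
            if self.pad_offset == PAD_SIZE {
                (self.refill)(&mut self.pad); // next batch of keystream
                self.pad_offset = 0;
            }
            *b ^= self.pad[self.pad_offset];
            self.pad_offset += 1;
        }
    }
}

fn main() {
    // Toy keystream: constant bytes (NOT secure, just to show the plumbing).
    let mut s = BufferedStream::new(|pad: &mut [u8; PAD_SIZE]| pad.fill(0x5A));
    let mut msg = *b"attack at dawn";
    s.xor_in_place(&mut msg);
    // XORing again with an identical stream decrypts.
    let mut s2 = BufferedStream::new(|pad: &mut [u8; PAD_SIZE]| pad.fill(0x5A));
    s2.xor_in_place(&mut msg);
    assert_eq!(&msg, b"attack at dawn");
}
```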
they're describing this so generically as if it's theoretical and new
Practical Relevance of Randomness Manipulation
In addition to exposures of locally stored state secrets, randomness for generating (new) secrets is often considered vulnerable. This is motivated by numerous attacks in practice against randomness sources (e.g., [11]), randomness generators (e.g., [23,7]), or exposures of random coins (e.g., [22]). Most theoretic approaches try to model this threat by allowing an adversary to reveal attacked random coins of a protocol execution (as it was also conducted in related work on ratcheting). This, however, assumes that the attacked protocol honestly and uniformly samples its random coins (either from a high-entropy source or using a random oracle) and that these coins are only afterwards leaked to the attacker. In contrast, practically relevant attacks against bad randomness generators or low-entropy sources (e.g., [11,23,7]) change the distribution from which random coins are sampled. Consequently, this threat is only covered by a security model if considered adversaries are also allowed to influence the execution's (distribution of) random coins. Thus, it is important to consider randomness manipulation (instead of reveal), if attacks against randomness are regarded practically relevant.
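to see why manipulation is strictly nastier than reveal, here's a toy Rust sketch of my own (not from the paper). the `CoinSource` trait and both impls are hypothetical names; the point is that a manipulated sampler shrinks the key space so far that brute force works, which no "reveal the honest coins afterwards" model captures.

```rust
// Hypothetical randomness-source interface.
trait CoinSource {
    fn coins(&mut self) -> [u8; 32];
}

// Honest sampler: high-entropy (stubbed with a fixed value for determinism;
// stand-in for OS entropy).
struct Honest;
impl CoinSource for Honest {
    fn coins(&mut self) -> [u8; 32] {
        [0xA7; 32]
    }
}

// Manipulated sampler: the adversary forces all entropy into one byte.
struct Manipulated(u8);
impl CoinSource for Manipulated {
    fn coins(&mut self) -> [u8; 32] {
        let mut c = [0u8; 32];
        c[0] = self.0;
        c
    }
}

// Key generation just consumes the coins directly.
fn gen_key<S: CoinSource>(src: &mut S) -> [u8; 32] {
    src.coins()
}

fn main() {
    let victim_key = gen_key(&mut Manipulated(0x2B));
    // With 8 bits of entropy, the adversary enumerates 256 candidates.
    let recovered = (0u16..=255)
        .map(|b| gen_key(&mut Manipulated(b as u8)))
        .find(|k| *k == victim_key);
    assert_eq!(recovered, Some(victim_key));
}
```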
and i mean it is i think since this paper described it
Examples for countermeasures are replacing bad randomness generators via software updates
simply sudo pacman -S openssl
Please note our distinction between key agreement and ratcheted key exchange protocols.
oh i am so fucking ready for this i am RIVETED
While the security provided by Signal is sufficient in most real-world scenarios,
this is the START of the sentence
we focus in this work on the theoretic analysis of the (optimally secure) primitive ratcheting with respect to its instantiability by smaller building blocks.
Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.
can you say that? is that allowed? this claim was not under dispute in the 60-page paper that sucks
the paragraph actually clarifies
All of the above mentioned works define security optimally with respect to their syntax definition and the adversary's access to the primitive execution (modeled via oracles in the security game). This is reached by declaring secrets insecure iff the adversary conducted an unpreventable/trivial attack against them (i.e., a successful attack that no instantiation can prevent). Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.
Thus, they do not require recovery from state exposures – which are a part of impersonation attacks
the ratcheting paper using an em dash to indicate a significant addendum
state exposures => impersonation attacks is described regarding random state, which then can be used to infer the subsequent result of an "ephemeral" key, salt, etc
so: is it just a coincidence that these terms also directly bear upon anonymity
Furthermore, both works neglect bad randomness as an attack vector
this paper continues and i'm more and more convinced
ok so no it's not a coincidence but it's a separate mechanism entirely
https://circumstances.run/@hipsterelectron/116269867302883821
the analogy my brain was making was "impersonation" => deanonymization, which is not a thing
random state is both consumed and constructed in the process of constant-bandwidth noise generation
"state exposure" was another mistake. i don't really know that the constant-bandwidth noise is an obvious target for attack, given that we can construct the interface which describes a baud rate and then sends bytes we like while filling the rest with noise and avoids timing variations. this doesn't seem problematic but it wouldn't be something exposed directly to like the cpu or threading or whatever. this is solvable and someone else i'm sure has considered this
the "state exposure" i was thinking of was the state of message progress that links message source to sink i.e. alice to bob, s to t, me to you
i think that's not the attack that would be relevant but rather identifying the variants of message channel used between neighboring peers, and from there inferring types of messages
so that does make constant-bandwidth noise the target for attack. which is good because i didn't know where that term was going and it seems like an eminently solvable problem
i think source onion routing with progress notifications and route status seems pretty solid with the right choice of responses
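here's what i mean by "the interface which describes a baud rate and then sends bytes we like while filling the rest with noise" — a toy framer of my own construction (not a quoted design; `next_frame`, `FRAME`, `HDR` are all made-up names). every tick emits exactly FRAME bytes whether or not there's real payload queued, so frame size and timing leak nothing about message activity:

```rust
const FRAME: usize = 64; // hypothetical per-tick byte budget (the "baud rate" slice)
const HDR: usize = 2;    // u16 big-endian length prefix for the real payload

// Drain up to FRAME - HDR queued payload bytes into a fixed-size frame,
// filling the remainder with `filler` (a stand-in for CSPRNG noise).
fn next_frame(queue: &mut Vec<u8>, filler: u8) -> [u8; FRAME] {
    let take = queue.len().min(FRAME - HDR);
    let mut frame = [filler; FRAME];
    frame[..HDR].copy_from_slice(&(take as u16).to_be_bytes());
    frame[HDR..HDR + take].copy_from_slice(&queue[..take]);
    queue.drain(..take);
    frame
}

fn main() {
    let mut queue = b"hello".to_vec();
    let busy = next_frame(&mut queue, 0xEE);
    let idle = next_frame(&mut queue, 0xEE); // queue now empty: pure filler
    assert_eq!(busy.len(), idle.len()); // identical size on the wire either way
    assert_eq!(u16::from_be_bytes([busy[0], busy[1]]), 5);
    assert_eq!(u16::from_be_bytes([idle[0], idle[1]]), 0);
    assert_eq!(&busy[HDR..HDR + 5], b"hello");
}
```

obviously in a real channel the whole frame (length prefix included) gets encrypted before it hits the wire, so the observer sees only constant-size ciphertext at a constant rate.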
also hero-coded behavior from this behavior in a footnote
We explicitly cite the extended version [19] for results that are not captured in the CRYPTO 2018 proceedings [20].
there are of course two versions of the initial (not eurocrypt) paper with the cryptographer fuckboy, one of which references i'm pretty sure an earlier version of the same paper (the initial) at a conference, specifically to justify "40 bytes" (no context) as "from signal". earlier version of this paper? if you read it? specifically says "oh yeah signal doesn't do user research so i created this interaction graph with uniform distributions to model sender and recipient interactions"
spends like half a page on this:
(1) like "oh yeah signal doesn't do user research" doesn't sound like the most tech bro way to say "i didn't ask users because i don't care" and pretend anyone thinks signal is gonna start adding malware if they ask users or provide some opt-in?
(2) zero citations for the entire stretch, like LITERALLY NO ONE has EVER modelled sending a message back and forth. probabilistically.
(3) choice of uniform distributions for whether someone sends a message over the hour/day and absolutely zero attempt to acknowledge that a """mathematician""" could spend a whole paper on distributional assumptions, particularly CRYPTOGRAPHER
so then he's like ok now i'm going to forget that these are random variables and create this other set of models defined in terms of these distributional assumptions you know what i need to find the way he abuses italics too
yeah love this shit https://www.usenix.org/system/files/usenixsecurity25-auerbach.pdf
A new method to quantify security.
sir i am a scientist we don't need new methods to quantify things that is the last thing anyone needs
oh this is so great
The mismatch between epochs and compromised messages discussed above will have a rather simple solution in this work,
definitely not allowed to say that about the paper which is currently on page 2 and hasn't finished talking about how simple it is
where we will define SM security
brain destroying acronym usage. "secure messaging". defined as?
applications, including WhatsApp, Signal, Google RCS, and Facebook Messenger, have taken over the world.
defined as corporations. ok
Used by billions of people daily, these applications achieve extremely strong security properties,
honestly who do you think you are? google? facebook? whatsapp? billions => security?
ok. hit me with fuckboy. define SM security
in a way which explicitly looks at the set of exposed messages.
it's SO much more fucking obnoxious to italicize half a sentence in the math latex font
explicitly looks at the set of exposed messages
how "explicitly"? duh. that's what the simulations based on our distributional assumptions are for
everything this guy says is obfuscation
Mixing In Quantum Security
In order to make our security guarantees stand up to quantum attacks, we need to mix in secrets generated from quantum secure algorithms.
with you there. kyber is fucking sick. that was cool
In PQXDH, we did this by performing an additional round of key agreement during the session-initiating handshake, then mixing the resulting shared secret into the initial secret material used to create Signal sessions.
here's the thing: when he does not want you to look at the alternative, he makes sure you know it. (1) "additional round" huh that sounds facetious and incorrect, let's check. oh PQXDH? no link to that 5-letter initialism. that's normal. https://signal.org/docs/specifications/pqxdh/ it's literally just x3dh, EXCEPT! it defines an entire precedence policy for key availability.
and then you realize (2) "additional round" is good, because those are identities. but the cryptographer tech bro uses the word that sounds like more work.
so after he makes this double double toil and trouble shit sound like it's not "just add a lattice key".
To handle FS and PCS,
shut the fuck up. this is a blog post. expand acronym
To handle Forward Secrecy and Post-Compromise Security,
you don't "handle" these, you generally "achieve" them--but of course, "handle" means he gets to pretend that wasn't done before
we need to do continuous key agreement, where over the lifetime of a session we keep generating new shared secrets and mixing those keys into our encryption keys
insane. completely insane shit. balderdash
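to be clear about what "continuous key agreement" cashes out to, here's my own toy sketch (not Signal's code, and `mix` is a deliberately fake mixing function — a real protocol uses HKDF): each step folds a fresh shared secret into a running chain key, so old keys can't be recomputed (forward secrecy) and a compromised chain heals once one uncompromised secret gets mixed in (post-compromise security).

```rust
// Toy chain-key update. NOT a real KDF: rotate-add-xor for illustration only.
fn mix(chain: &[u8; 32], fresh: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = chain[i]
            .rotate_left(3)
            .wrapping_add(fresh[i])
            ^ chain[(i + 1) % 32];
    }
    out
}

fn main() {
    let mut chain = [7u8; 32];
    let fresh1 = [1u8; 32]; // e.g. an ECDH output
    let fresh2 = [2u8; 32]; // e.g. an ML-KEM output, mixed in the same way
    let old = chain;
    chain = mix(&chain, &fresh1);
    chain = mix(&chain, &fresh2);
    // Keys advance: the new chain key differs from the old one.
    assert_ne!(chain, old);
    // Healing: an attacker who holds `old` and fresh1 but missed fresh2
    // computes a chain that no longer matches ours.
    assert_ne!(mix(&mix(&old, &fresh1), &[0u8; 32]), chain);
}
```

note that "mixing in" a post-quantum secret (the PQXDH / SPQR move) is structurally just another `fresh` input to the same chain update — which is why "additional round" undersells how simple the composition is.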

We are excited to announce a significant advancement in the security of the Signal Protocol: the introduction of the Sparse Post Quantum Ratchet (SPQR). This new ratchet enhances the Signal Protocol’s resilience against future quantum computing threats while maintaining our existing security guar...