idk how x86 works. was it x86? maybe it was some arm shit
i fail to see how "AES-256-GCM" involves implementing a hash by hand when it's the first implementation?
use aes::Aes256; // block cipher type from the `aes` crate (assumed import)

const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
// 96-bit GCM-style nonce: one block minus the 32-bit counter
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4;
// one batch of keystream, PAR_BLOCKS blocks at a time
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

pub struct Aes256Ctr32 {
    aes256: Aes256,
    ctr: [u8; PAD_SIZE],   // counter blocks awaiting encryption
    pad: [u8; PAD_SIZE],   // encrypted counters, i.e. keystream
    pad_offset: usize,     // how much of `pad` has been consumed
}

i was gonna say "oh maybe he's C-brained" but no you can still do sizeof(arr) in C
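for the record, my guess at what that ctr/pad pair is doing — assuming the usual gcm-flavored layout of a 12-byte nonce followed by a 32-bit big-endian counter (hence "Ctr32"). `fill_counter_blocks` is my hypothetical helper, not from the actual code:

```rust
// constants repeated from the snippet above so this compiles standalone
const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4; // 12-byte GCM-style nonce
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

// hypothetical helper: fill a PAD_SIZE buffer with PAR_BLOCKS counter
// blocks, each one being nonce || 32-bit big-endian counter, counter
// incremented per block. encrypting this buffer with aes-256 would
// presumably produce the `pad` keystream, consumed via `pad_offset`.
fn fill_counter_blocks(nonce: &[u8; NONCE_SIZE], start: u32) -> [u8; PAD_SIZE] {
    let mut ctr = [0u8; PAD_SIZE];
    for (i, block) in ctr.chunks_exact_mut(AES_BLOCK_SIZE).enumerate() {
        block[..NONCE_SIZE].copy_from_slice(nonce);
        let counter = start.wrapping_add(i as u32);
        block[NONCE_SIZE..].copy_from_slice(&counter.to_be_bytes());
    }
    ctr
}
```

the 32-bit counter is why NONCE_SIZE is block-minus-4: gcm only ever increments the low 32 bits.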

oh omg there's another good eng doing good changes
that's 2
they have a numerical advantage over the cryptographer
who literally acts like i'm not gonna read his 60-page paper and puts everything at the end? like a meme? like in a movie?
if anyone knows any practical applications of adversarial randomness i have a great prank
christ https://eprint.iacr.org/2020/148.pdf poettering jumpscare. this is the paper that afaict describes adversarial randomness
and it appears to describe adversarial randomness as like the specific thing that ratcheting key protocols are resistant to?
if that's true this cryptographer whose benchmarks were completely fabricated owes me tomorrow's lunch money too

they're describing this so generically as if it's theoretical and new

Practical Relevance of Randomness Manipulation
In addition to exposures of locally stored state secrets, randomness for generating (new) secrets is often considered vulnerable. This is motivated by numerous attacks in practice against randomness sources (e.g., [11]), randomness generators (e.g., [23,7]), or exposures of random coins (e.g., [22]). Most theoretic approaches try to model this threat by allowing an adversary to reveal attacked random coins of a protocol execution (as it was also conducted in related work on ratcheting). This, however, assumes that the attacked protocol honestly and uniformly samples its random coins (either from a high-entropy source or using a random oracle) and that these coins are only afterwards leaked to the attacker. In contrast, practically relevant attacks against bad randomness generators or low-entropy sources (e.g., [11,23,7]) change the distribution from which random coins are sampled. Consequently, this threat is only covered by a security model if considered adversaries are also allowed to influence the execution’s (distribution of) random coins. Thus, it is important to consider randomness manipulation (instead of reveal), if attacks against randomness are regarded practically relevant.

and i mean it is i think since this paper described it

Examples for countermeasures are replacing bad randomness generators via software updates

simply sudo pacman -S openssl

Please note our distinction between key agreement and ratcheted key exchange protocols.

oh i am so fucking ready for this i am RIVETED

the distinction they noted........yes, it is one that i identified above

While the security provided by Signal is sufficient in most real-world scenarios,

this is the START of the sentence

we focus in this work on the theoretic analysis of the (optimally secure) primitive ratcheting with respect to its instantiability by smaller building blocks.

@haskal detected another good cryptographer.......makes sense they would be more difficult to find
wow oops lmao there's no way active adversarial input with intent to deanonymize could be analogous to adversarial randomness..................
i think it's not quite a proof
my favorite "not quite a proof" in the entire world is that "two stacks are turing-complete because you can put them back-to-back and generate the turing tape"
they can emulate a turing machine. it does not lead to any incorrect generalizations. but iirc i could not immediately prove it when <3 jeremy spinrad mentioned it like it was a fun trick
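to be fair the trick is easy to demo even if the proof takes a minute — here's the two-stack tape as toy rust, all names mine: `left` holds cells left of the head (top = nearest), `right` holds the cell under the head plus everything rightward, and moving the head is just popping one stack and pushing the other:

```rust
// Toy illustration of "two stacks back-to-back make a Turing tape".
// Blank cells are materialized on demand when the head walks off the
// explored portion of the tape.
struct TwoStackTape {
    left: Vec<u8>,  // cells to the left of the head, top = adjacent
    right: Vec<u8>, // cell under the head and everything to its right
    blank: u8,
}

impl TwoStackTape {
    fn new(blank: u8) -> Self {
        TwoStackTape { left: Vec::new(), right: Vec::new(), blank }
    }
    // Read the cell under the head (blank if unexplored).
    fn read(&self) -> u8 {
        *self.right.last().unwrap_or(&self.blank)
    }
    // Overwrite the cell under the head.
    fn write(&mut self, sym: u8) {
        if let Some(top) = self.right.last_mut() {
            *top = sym;
        } else {
            self.right.push(sym);
        }
    }
    // Move the head: pop from one stack, push onto the other.
    fn move_right(&mut self) {
        let cell = self.right.pop().unwrap_or(self.blank);
        self.left.push(cell);
    }
    fn move_left(&mut self) {
        let cell = self.left.pop().unwrap_or(self.blank);
        self.right.push(cell);
    }
}
```

add a finite state table driving read/write/move and you have the whole machine, which is the generalization the trick licenses.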
this guy poettering who does ratcheting crypto seems like he does cool shit and collabs with tons of others to define novel frameworks of security. not gonna lie easily my #1 poettering rn

Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.

can you say that? is that allowed? this claim was not under dispute in the 60-page paper that sucks

the paragraph actually clarifies

All of the above mentioned works define security optimally with respect to their syntax definition and the adversary’s access to the primitive execution (modeled via oracles in the security game). This is reached by declaring secrets insecure iff the adversary conducted an unpreventable/trivial attack against them (i.e., a successful attack that no instantiation can prevent). Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.

also this paper uses italics for emphasis and not to be obnoxious and it's a clearly noticeable and distinguishable tonal difference. the italics made me hate the 60-page paper which sucks
oh man i could go on. imagine right if someone said in a footnote "as you can see, this is really a generalization of the double ratchet" two separate times and then at the end was like "oh yeah so uh bandwidth costs and yeah this completely changes the number of messages we send back and forth. but here's a table. we literally made it up. we have no reason to use these distributions" and THEN they tell you in the appendix that they decided adversarial randomness wasn't important to them because uhhhh big keys
literally if the keys are too big maybe fix your theory?
i also don't think "triple ratchet" is whatsoever appropriate given that the entire interaction requires creating a constant cloud of keys swarming between participants. van de graaff generator is a better name. completely misleading
that's a really fantastic property of the double ratchet that it happens to involve these multiple DH exchanges per interaction while not introducing channel state
i'm not even gonna try to read the 60-page paper again because they don't define the "erasure coding" that makes up half the abstract and they're only proud of because it makes the shit remotely tolerable
it's research code! it's research quality!
sorry fuckboys give me hives

Thus, they do not require recovery from state exposures – which are a part of impersonation attacks

the ratcheting paper using an em dash to indicate a significant addendum

state exposures => impersonation attacks is described regarding random state, which then can be used to infer the subsequent result of an "ephemeral" key, salt, etc

so: is it just a coincidence that these terms also directly bear upon anonymity

Furthermore, both works neglect bad randomness as an attack vector

this paper continues and i'm more and more convinced

ok so no it's not a coincidence but it's a separate mechanism entirely

https://circumstances.run/@hipsterelectron/116269867302883821

the analogy my brain was making was "impersonation" => deanonymization, which is not a thing

random state is both consumed and constructed in the process of constant-bandwidth noise generation

"state exposure" was another mistake. i don't really know that the constant-bandwidth noise is an obvious target for attack, given that we can construct the interface which describes a baud rate and then sends bytes we like while filling the rest with noise and avoids timing variations. this doesn't seem problematic but it wouldn't be something exposed directly to like the cpu or threading or whatever. this is solvable and someone else i'm sure has considered this
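sketching what i mean by that interface, with everything hypothetical — a fixed frame size standing in for "baud rate", a caller-supplied noise closure standing in for a real csprng (which std doesn't have):

```rust
// Sketch of a constant-bandwidth send: every frame is exactly FRAME_SIZE
// bytes no matter how much real payload there is. A 2-byte big-endian
// length prefix marks the payload; the remainder is filled from a noise
// source. All names and sizes here are made up for illustration.
const FRAME_SIZE: usize = 256;
const HEADER: usize = 2;

fn build_frame(payload: &[u8], mut noise: impl FnMut() -> u8) -> Option<[u8; FRAME_SIZE]> {
    if payload.len() > FRAME_SIZE - HEADER {
        return None; // caller must fragment; the frame size never varies
    }
    let mut frame = [0u8; FRAME_SIZE];
    frame[..HEADER].copy_from_slice(&(payload.len() as u16).to_be_bytes());
    frame[HEADER..HEADER + payload.len()].copy_from_slice(payload);
    for byte in frame[HEADER + payload.len()..].iter_mut() {
        *byte = noise(); // pad out to the constant size
    }
    Some(frame)
}
```

the timing side is the part this sketch doesn't cover: frames would also have to go out on a fixed clock, empty or not, or the padding buys nothing.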

the "state exposure" i was thinking of was the state of message progress that links message source to sink i.e. alice to bob, s to t, me to you

i think that's not the attack that would be relevant but rather identifying the variants of message channel used between neighboring peers, and from there inferring types of messages

so that does make constant-bandwidth noise the target for attack. which is good because i didn't know where that term was going and it seems like an eminently solvable problem

i think source onion routing with progress notifications and route status seems pretty solid with the right choice of responses

i like this result here which has "generating actual literal indistinguishable noise" as the goal because there's a particular synth that plays exactly one time in justice stress which i imagine it to sound like
hard problem, maybe years of research, seems possible, seems like real crypto could be tricked into it

also hero-coded behavior in a footnote

We explicitly cite the extended version [19] for results that are not captured in the CRYPTO 2018 proceedings [20].

there are of course two versions of the initial (not eurocrypt) paper with the cryptographer fuckboy, one of which references i'm pretty sure an earlier version of the same paper (the initial) at a conference, specifically to justify "40 bytes" (no context) as "from signal". earlier version of this paper? if you read it? specifically says "oh yeah signal doesn't do user research so i created this interaction graph with uniform distributions to model sender and recipient interactions"

spends like half a page on this:
(1) like "oh yeah signal doesn't do user research" doesn't sound like the most tech bro way to say "i didn't ask users because i don't care" and pretend anyone thinks signal is gonna start adding malware if they ask users or provide some opt-in?
(2) zero citations for the entire stretch, like LITERALLY NO ONE has EVER modelled sending a message back and forth. probabilistically.
(3) choice of uniform distributions for whether someone sends a message over the hour/day and absolutely zero attempt to acknowledge that a """mathematician""" could spend a whole paper on distributional assumptions, particularly a CRYPTOGRAPHER

so then he's like ok now i'm going to forget that these are random variables and create this other set of models defined in terms of these distributional assumptions you know what i need to find the way he abuses italics too

yeah love this shit https://www.usenix.org/system/files/usenixsecurity25-auerbach.pdf

A new method to quantify security.

sir i am a scientist we don't need new methods to quantify things that is the last thing anyone needs

oh this is so great

The mismatch between epochs and compromised messages discussed above will have a rather simple solution in this work,

definitely not allowed to say that about the paper which is currently on page 2 and hasn't finished talking about how simple it is

where we will define SM security

brain destroying acronym usage. "secure messaging". defined as?

applications, including WhatsApp, Signal, Google RCS, and Facebook Messenger, have taken over the world.

defined as corporations. ok

Used by billions of people daily, these applications achieve extremely strong security properties,

honestly who do you think you are? google? facebook? whatsapp? billions => security?

ok. hit me with fuckboy. define SM security

in a way which explicitly looks at the set of exposed messages.

it's SO much more fucking obnoxious to italicize half a sentence in the math latex font

the epoch-based model is fucking insane and not how you would model ratchet at all if you implemented it which i know he has not because he has only committed to this fucking scheme

explicitly looks at the set of exposed messages

how "explicitly"? duh. that's what the simulations based on our distributional assumptions are for

everything this guy says is obfuscation

https://signal.org/blog/spqr/

Mixing In Quantum Security
In order to make our security guarantees stand up to quantum attacks, we need to mix in secrets generated from quantum secure algorithms.

with you there. kyber is fucking sick. that was cool

In PQXDH, we did this by performing an additional round of key agreement during the session-initiating handshake, then mixing the resulting shared secret into the initial secret material used to create Signal sessions.

here's the thing: when he does not want you to look at the alternative, he makes sure you know it. (1) "additional round" huh that sounds facetious and incorrect, let's check. oh PQXDH? no link to that 5-letter initialism. that's normal. https://signal.org/docs/specifications/pqxdh/ it's literally just x3dh, EXCEPT! it defines an entire precedence policy for key availability.

and then you realize (2) "additional round" is good, because those are identities. but the cryptographer tech bro uses the word that sounds like more work.

so after he makes this double double toil and trouble shit sound like it's not "just add a lattice key".
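for reference, the "mixing" the blog describes reduces to concatenating the dh outputs with the kem shared secret before one kdf pass — this is just the shape, not the pqxdh spec; `kdf` is a placeholder closure since std has no hkdf, and every name here is mine:

```rust
// Shape of a PQXDH-style mix (illustration, not the actual spec): the
// classical X3DH DH outputs and the post-quantum KEM shared secret are
// concatenated into one input-keying-material buffer and fed through a
// single KDF, so the session key holds up if EITHER component does.
fn mix_secrets(
    dh_outputs: &[&[u8]],     // DH1..DHn from the classical handshake
    kem_shared_secret: &[u8], // e.g. an ML-KEM (Kyber) decapsulation
    kdf: impl Fn(&[u8]) -> [u8; 32], // stand-in for HKDF
) -> [u8; 32] {
    let mut ikm = Vec::new();
    for dh in dh_outputs {
        ikm.extend_from_slice(dh);
    }
    ikm.extend_from_slice(kem_shared_secret);
    kdf(&ikm)
}
```

which is exactly why "additional round" is the flattering-sounding-but-misleading word: the kem encapsulation rides along with the existing handshake messages, it isn't an extra round trip.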

To handle FS and PCS,

shut the fuck up. this is a blog post. expand acronym

To handle Forward Secrecy and Post-Compromise Security,

you don't "handle" these, you generally "achieve" them--but of course, "handle" means he gets to pretend that wasn't done before

we need to do continuous key agreement, where over the lifetime of a session we keep generating new shared secrets and mixing those keys into our encryption keys

insane. completely insane shit. balderdash
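ok to be fair to the blog, "continuous key agreement" in chain form is roughly this — placeholder `prf` instead of hmac/hkdf, names mine, shape only:

```rust
// Rough shape of a symmetric chain that "keeps mixing new shared secrets
// into the encryption keys": each freshly agreed secret (DH or KEM
// output) folds into the running chain key, and per-message keys are
// derived off the chain. `prf` stands in for HMAC/HKDF, which the
// stdlib doesn't provide.
struct Chain {
    chain_key: [u8; 32],
}

impl Chain {
    // Fold a freshly agreed secret into the chain (the ratchet step).
    fn mix(&mut self, shared_secret: &[u8], prf: impl Fn(&[u8], &[u8]) -> [u8; 32]) {
        self.chain_key = prf(&self.chain_key, shared_secret);
    }
    // Derive a message key, then advance the chain so old message keys
    // can't be recomputed from the new state (the forward-secrecy shape).
    fn next_message_key(&mut self, prf: impl Fn(&[u8], &[u8]) -> [u8; 32]) -> [u8; 32] {
        let message_key = prf(&self.chain_key, b"msg");
        self.chain_key = prf(&self.chain_key, b"chain");
        message_key
    }
}
```

the one-way advance gives forward secrecy; mixing in fresh secrets is what gives post-compromise security. neither of which is new, which is the point.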

i remembered kyber exists
i think that's acceptable bc unlike non-ratcheting keys ("pgp", which is currently non-ratcheting), we can be luxurious with the specific process of initiating a session
oh but hm some operations obv are not available on lattice keys. that's actually great though bc it addresses my unease about one type of key. i really want to push key management up into application code. i think that can be done for a research prototype which is what this is going to be for a while