@emenel @haskal lmao he's so funny

We made it a good UNIX tool, working on pipes

sir i do build tools that is literally THE problem i know of 3 individuals working on incl me. none of us have solved it we just like ponder it

One thing we decided is that we’d not include signing support. Signing introduces a whole dimension of complexity to the UX

hmmmm shit (1) he's right except (2) key management is an interesting framing and indicates his tool is doing too much in a different way.

ok here i wouldn't say "too much" necessarily. but like. "key management" is a really high-level task

and "no signatures" means "asymmetric crypto can't use half its special attacks"

i do worry that "only curve25519" (fuck djb) could introduce unexpected assumptions elsewhere that aren't tested. but modifying the type of key is not the way to test them. and it's actually pretty sick to have:

  • gen key (+entropy [effect])
  • calculate dh key agreement (two keypairs, but only unique per pair--basically for ephemeral only)
  • generate symmetric key w salt (+entropy [effect])

    • this is actually not so trivial. i wanna say signal uses aes-256-gcm i'll check rn
    • this is not quite a primitive then but it is something that can be encapsulated w double ratchet
  • soooooo what about cases that don't support a session-like context?
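to make that list concrete, here's the interface shape i mean in rust, with entropy as an explicit effect (a closure that fills bytes) instead of ambient OS randomness. every name here is mine and the actual math is stubbed out (no curve ops, no real kdf); this is only about what the signatures look like:

```rust
// shape-of-the-interface sketch only: names are made up, the "math" is
// stubbed, and entropy is an explicit effect passed in by the caller.

type Entropy<'a> = &'a mut dyn FnMut(&mut [u8]);

struct PrivateKey([u8; 32]);
struct PublicKey([u8; 32]);
struct SharedSecret([u8; 32]);
struct SymmetricKey([u8; 32]);

// gen key (+entropy [effect])
fn gen_keypair(entropy: Entropy) -> (PrivateKey, PublicKey) {
    let mut sk = [0u8; 32];
    entropy(&mut sk);
    // real impl: clamp sk, pk = scalar mult of the basepoint. stubbed:
    (PrivateKey(sk), PublicKey(sk))
}

// calculate dh key agreement (two keypairs; symmetric in who computes it)
fn dh(_my_sk: &PrivateKey, their_pk: &PublicKey) -> SharedSecret {
    // real impl: X25519(my_sk, their_pk). stubbed so this compiles:
    SharedSecret(their_pk.0)
}

// generate symmetric key w/ salt (+entropy [effect])
fn gen_symmetric_key(salt: &[u8; 16], entropy: Entropy) -> SymmetricKey {
    let mut k = [0u8; 32];
    entropy(&mut k);
    // real impl: a KDF over (ikm, salt); here the salt is just folded in visibly
    for (i, b) in k.iter_mut().enumerate() {
        *b ^= salt[i % salt.len()];
    }
    SymmetricKey(k)
}

fn main() {
    // deterministic "entropy" so the sketch is runnable; never do this for real
    let mut counter = 0u8;
    let mut fake_entropy = |buf: &mut [u8]| {
        for b in buf.iter_mut() {
            counter = counter.wrapping_add(1);
            *b = counter;
        }
    };
    let (sk, pk) = gen_keypair(&mut fake_entropy);
    let shared = dh(&sk, &pk);
    assert_eq!(shared.0, pk.0);
    let k = gen_symmetric_key(&[7u8; 16], &mut fake_entropy);
    assert_eq!(k.0.len(), 32);
    println!("interface sketch compiles and runs");
}
```

the point of the Entropy type is exactly the "[effect]" annotation above: randomness is an input you can see in the signature, not a hidden global.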

shit they have cbc ctr gcm.......if they don't describe at length which is used where................that's unfortunate

the docstring for the single struct in aes_ctr.rs:

/// A wrapper around [`ctr::Ctr32BE`] that uses a smaller nonce and supports an initial counter.
pub struct Aes256Ctr32(ctr::Ctr32BE<Aes256>);

yes, i can see that

a damn shame
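for anyone following along: "smaller nonce" plus "initial counter" almost certainly means the classic split of the 16-byte AES counter block into a 12-byte nonce and a 32-bit big-endian counter (hence "Ctr32BE"). helper name below is mine, not libsignal's; this is pure byte layout, no AES:

```rust
// layout sketch: 16-byte CTR block = 12-byte nonce || 32-bit BE counter.

const AES_BLOCK_SIZE: usize = 16;
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4; // 12

fn counter_block(nonce: &[u8; NONCE_SIZE], counter: u32) -> [u8; AES_BLOCK_SIZE] {
    let mut block = [0u8; AES_BLOCK_SIZE];
    block[..NONCE_SIZE].copy_from_slice(nonce);
    block[NONCE_SIZE..].copy_from_slice(&counter.to_be_bytes());
    block
}

fn main() {
    let b = counter_block(&[0xaa; NONCE_SIZE], 0x0102_0304);
    assert_eq!(&b[..NONCE_SIZE], &[0xaa; NONCE_SIZE]);
    assert_eq!(&b[NONCE_SIZE..], &[0x01, 0x02, 0x03, 0x04]);
    println!("{:02x?}", b);
}
```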

literally what

#[derive(displaydoc::Display, thiserror::Error, Debug)]
pub enum Error {
    /// "unknown {0} algorithm {1}"
    UnknownAlgorithm(&'static str, String),
    /// invalid key size
    InvalidKeySize,
    /// invalid nonce size
    InvalidNonceSize,
    /// invalid input size
    InvalidInputSize,
    /// invalid authentication tag
    InvalidTag,
}

this is not the appropriate use of displaydoc fuckboys

they're not even using thiserror. just impl error::Error. how is this real

(1) completely unrelated to the SPQR fuckboys
(2) 2021????
(3) fuckboy #2 "adding support for username links"
https://github.com/signalapp/libsignal/commit/e50bec648fed7d6f87648c2c7937a9eeda3841b3

COMPLETELY half-assed

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Error::UnknownAlgorithm(typ, named) => write!(f, "unknown {} algorithm {}", typ, named),
            Error::InvalidKeySize => write!(f, "invalid key size"),
            Error::InvalidNonceSize => write!(f, "invalid nonce size"),
            Error::InvalidInputSize => write!(f, "invalid input size"),
            Error::InvalidTag => write!(f, "invalid authentication tag"),
            Error::InvalidState => write!(f, "invalid object state"),
        }
    }
}

this is actually much better for a !!!!cryptographic!!!!! error!!!!!

completely did not change the messages, or cases, just fucking removed Clone/Eq/PartialEq which sure that's not a correctness issue but why? why?

#[derive(Debug, displaydoc::Display, thiserror::Error)]
pub enum DecryptionError {
    /// The key or IV is the wrong length.
    BadKeyOrIv,
    /// These cases should not be distinguished; message corruption can cause either problem.
    BadCiphertext(&'static str),
}

brb distinguishing your cases

bro says i know. i know what to do

signal-crypto = { path = "../crypto" }

our problem? too much crypto.......not enough signal crypto

same code

i would not accept this at all for any professional work

i would have given my undergrad students maybe a B if it passes all the tests and i gave them the context for them to solve

if it was a junior eng i would totally req to pair and it would be cool as hell and i would learn what kinds of criteria they were familiar with / assuming judged upon

how do you add new protobufs in the same fucking commit
ok so at least main doesn't duplicate deps and uses cargo's (intentionally-broken) "workspace" feature
so that's progress. wait i'm gonna check blame
the king
senpai
my hero jrose-signal who taught me for free

i forgot i had an email to send about checksums but then i found fuckboy #1 at it again https://github.com/signalapp/libsignal/commit/8fcc30278c518306a9471d0ddb496b9a5e722dc6

who changes cbindgen.toml like that. who does that

Add AES-256-GCM implementation · signalapp/libsignal@8fcc302 ("Along with sub-components AES-256-CTR")

Specializing this for exactly == 16 results in much better codegen

[does not provide codegen, or indicate method to reproduce]

what is "better codegen" to you dude
yeah i like when i see more eax than ebx cause that's the one i learned first and it's still my favorite
idk how x86 works. was it x86? maybe it was some arm shit
i fail to see how "AES-256-GCM" involves implementing a hash by hand when it's the first implementation?
const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4;
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

pub struct Aes256Ctr32 {
    aes256: Aes256,
    ctr: [u8; PAD_SIZE],
    pad: [u8; PAD_SIZE],
    pad_offset: usize,
}

i was gonna say "oh maybe he's C-brained" but no you can still do sizeof(arr) in C
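to be fair to the struct itself: the pad/pad_offset pair is standard buffered-keystream bookkeeping. CTR generates PAR_BLOCKS worth of keystream at once, XORs it into the data, and remembers how far into the current pad it got so the next call resumes mid-pad instead of regenerating. toy sketch with a fake (non-AES, completely insecure) keystream, just to show the offset logic:

```rust
// toy CTR-style pad buffering. refill() is a FAKE keystream; the real
// implementation encrypts successive counter blocks with AES-256.

const PAD_SIZE: usize = 128;

struct ToyCtr {
    pad: [u8; PAD_SIZE],
    pad_offset: usize,
    counter: u64,
}

impl ToyCtr {
    fn new() -> Self {
        // pad_offset == PAD_SIZE forces a refill on first use
        ToyCtr { pad: [0; PAD_SIZE], pad_offset: PAD_SIZE, counter: 0 }
    }

    fn refill(&mut self) {
        // placeholder keystream derived from the block counter
        for (i, b) in self.pad.iter_mut().enumerate() {
            *b = (self.counter as u8).wrapping_mul(31).wrapping_add(i as u8) ^ 0x5c;
        }
        self.counter += 1;
        self.pad_offset = 0;
    }

    fn process(&mut self, buf: &mut [u8]) {
        for byte in buf.iter_mut() {
            if self.pad_offset == PAD_SIZE {
                self.refill();
            }
            *byte ^= self.pad[self.pad_offset];
            self.pad_offset += 1;
        }
    }
}

fn main() {
    let msg = b"resume mid-pad across calls".to_vec();
    let mut ct = msg.clone();
    let mut enc = ToyCtr::new();
    // two calls: the second resumes at pad_offset instead of restarting
    enc.process(&mut ct[..5]);
    enc.process(&mut ct[5..]);
    // XOR keystream is its own inverse: a fresh state decrypts
    let mut dec = ToyCtr::new();
    dec.process(&mut ct);
    assert_eq!(ct, msg);
    println!("keystream roundtrip ok");
}
```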

oh omg there's another good eng doing good changes
that's 2
they have a numerical advantage over the cryptographer
who literally acts like i'm not gonna read his 60-page paper and puts everything at the end? like a meme? like in a movie?
if anyone knows any practical applications of adversarial randomness i have a great prank
christ https://eprint.iacr.org/2020/148.pdf poettering jumpscare. this is the paper that afaict describes adversarial randomness
and it appears to describe adversarial randomness as like the specific thing that ratcheting key protocols are resistant to?
if that's true this cryptographer whose benchmarks were completely fabricated owes me tomorrow's lunch money too

they're describing this so generically as if it's theoretical and new

Practical Relevance of Randomness Manipulation

In addition to exposures of locally stored state secrets, randomness for generating (new) secrets is often considered vulnerable. This is motivated by numerous attacks in practice against randomness sources (e.g., [11]), randomness generators (e.g., [23,7]), or exposures of random coins (e.g., [22]). Most theoretic approaches try to model this threat by allowing an adversary to reveal attacked random coins of a protocol execution (as it was also conducted in related work on ratcheting). This, however, assumes that the attacked protocol honestly and uniformly samples its random coins (either from a high-entropy source or using a random oracle) and that these coins are only afterwards leaked to the attacker. In contrast, practically relevant attacks against bad randomness generators or low-entropy sources (e.g., [11,23,7]) change the distribution from which random coins are sampled. Consequently, this threat is only covered by a security model if considered adversaries are also allowed to influence the execution's (distribution of) random coins. Thus, it is important to consider randomness manipulation (instead of reveal), if attacks against randomness are regarded practically relevant.

and i mean it is i think since this paper described it

Examples for countermeasures are replacing bad randomness generators via software updates

simply sudo pacman -S openssl

Please note our distinction between key agreement and ratcheted key exchange protocols.

oh i am so fucking ready for this i am RIVETED

the distinction they noted........yes, it is one that i identified above

While the security provided by Signal is sufficient in most real-world scenarios,

this is the START of the sentence

we focus in this work on the theoretic analysis of the (optimally secure) primitive ratcheting with respect to its instantiability by smaller building blocks.

@haskal detected another good cryptographer.......makes sense they would be more difficult to find
wow oops lmao there's no way active adversarial input with intent to deanonymize could be analogous to adversarial randomness..................
i think it's not quite a proof
my favorite "not quite a proof" in the entire world is that "two stacks are turing-complete because you can put them back-to-back and generate the turing tape"
they can emulate a turing machine. it does not lead to any incorrect generalizations. but iirc i could not immediately prove it when <3 jeremy spinrad mentioned it like it was a fun trick
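the trick written out, since it's short: the left stack holds the cells left of the head (top = nearest), the right stack holds the head cell and everything rightward, and moving the head is a pop from one stack plus a push onto the other. minimal sketch:

```rust
// a turing tape out of two stacks. the top of `right` is the cell under
// the head; `left` holds cells to the head's left with the nearest on top.

struct Tape {
    left: Vec<u8>,
    right: Vec<u8>,
}

impl Tape {
    fn new() -> Self {
        Tape { left: Vec::new(), right: vec![0] }
    }
    fn read(&self) -> u8 {
        *self.right.last().unwrap()
    }
    fn write(&mut self, v: u8) {
        *self.right.last_mut().unwrap() = v;
    }
    fn move_right(&mut self) {
        let cell = self.right.pop().unwrap();
        self.left.push(cell);
        if self.right.is_empty() {
            self.right.push(0); // unbounded tape: blank cells on demand
        }
    }
    fn move_left(&mut self) {
        let cell = self.left.pop().unwrap_or(0); // blank past the left edge too
        self.right.push(cell);
    }
}

fn main() {
    let mut t = Tape::new();
    t.write(1);
    t.move_right();
    t.write(2);
    t.move_left();
    assert_eq!(t.read(), 1);
    t.move_right();
    assert_eq!(t.read(), 2);
    println!("two stacks, one tape");
}
```

with read/write/move in hand, a finite-control loop over this Tape is exactly a turing machine, which is the "can emulate" direction; it says nothing about whether any particular two-stack machine you hand me does anything interesting.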
this guy poettering who does ratcheting crypto seems like he does cool shit and collabs with tons of others to define novel frameworks of security. not gonna lie easily my #1 poettering rn

Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.

can you say that? is that allowed? this claim was not under dispute in the 60-page paper that sucks

the paragraph actually clarifies

All of the above mentioned works define security optimally with respect to their syntax definition and the adversary's access to the primitive execution (modeled via oracles in the security game). This is reached by declaring secrets insecure iff the adversary conducted an unpreventable/trivial attack against them (i.e., a successful attack that no instantiation can prevent). Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.

also this paper uses italics for emphasis and not to be obnoxious and it's a clearly noticeable and distinguishable tonal difference. the italics made me hate the 60-page paper which sucks
oh man i could go on. imagine right if someone said in a footnote "as you can see, this is really a generalization of the double ratchet" two separate times and then at the end was like "oh yeah so uh bandwidth costs and yeah this completely changes the number of messages we send back and forth. but here's a table. we literally made it up. we have no reason to use these distributions" and THEN they tell you in the appendix that they decided adversarial randomness wasn't important to them because uhhhh big keys
literally if the keys are too big maybe fix your theory?
i also don't think "triple ratchet" is whatsoever appropriate given that the entire interaction requires creating a constant cloud of keys swarming between participants. van de graaff generator is a better name. completely misleading
that's a really fantastic property of the double ratchet that it happens to involve these multiple DH exchanges per interaction while not introducing channel state
i'm not even gonna try to read the 60-page paper again because they don't define the "erasure coding" that makes up half the abstract and they're only proud of because it makes the shit remotely tolerable
it's research code! it's research quality!
sorry fuckboys give me hives

Thus, they do not require recovery from state exposures – which are a part of impersonation attacks

the ratcheting paper using an em dash to indicate a significant addendum

state exposures => impersonation attacks is described regarding random state, which then can be used to infer the subsequent result of an "ephemeral" key, salt, etc

so: is it just a coincidence that these terms also directly bear upon anonymity

Furthermore, both works neglect bad randomness as an attack vector

this paper continues and i'm more and more convinced

ok so no it's not a coincidence but it's a separate mechanism entirely

https://circumstances.run/@hipsterelectron/116269867302883821

the analogy my brain was making was "impersonation" => deanonymization, which is not a thing

random state is both consumed and constructed in the process of constant-bandwidth noise generation

"state exposure" was another mistake. i don't really know that the constant-bandwidth noise is an obvious target for attack, given that we can construct the interface which describes a baud rate and then sends bytes we like while filling the rest with noise and avoids timing variations. this doesn't seem problematic but it wouldn't be something exposed directly to like the cpu or threading or whatever. this is solvable and someone else i'm sure has considered this
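the interface i mean, roughly: every tick you emit exactly one fixed-size frame, real payload first, filler after, so frame size and timing are independent of the actual traffic. filler is zeros here purely for the sketch; a real link fills with CSPRNG output indistinguishable from ciphertext. names and framing are mine:

```rust
// constant-bandwidth framing sketch: length-prefixed payload padded to a
// fixed frame size. an observer sees only identical FRAME-byte emissions.

const FRAME: usize = 32;

fn frame(payload: &[u8]) -> [u8; FRAME] {
    assert!(payload.len() < FRAME); // 1 length byte + payload + filler
    let mut out = [0u8; FRAME];
    out[0] = payload.len() as u8;
    out[1..1 + payload.len()].copy_from_slice(payload);
    // out[1 + payload.len()..] stays as filler (zeros here; CSPRNG noise for real)
    out
}

fn unframe(f: &[u8; FRAME]) -> Vec<u8> {
    let n = f[0] as usize;
    f[1..1 + n].to_vec()
}

fn main() {
    let a = frame(b"hi");
    let b = frame(b"a longer message here");
    assert_eq!(a.len(), b.len()); // observer sees identical sizes either way
    assert_eq!(unframe(&a), b"hi");
    assert_eq!(unframe(&b), b"a longer message here");
    println!("constant-size frames ok");
}
```

timing is the other half: frames go out on a fixed clock whether or not there's payload, which is the "baud rate" part of the interface above.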

the "state exposure" i was thinking of was the state of message progress that links message source to sink i.e. alice to bob, s to t, me to you

i think that's not the attack that would be relevant but rather identifying the variants of message channel used between neighboring peers, and from there inferring types of messages

so that does make constant-bandwidth noise the target for attack. which is good because i didn't know where that term was going and it seems like an eminently solvable problem

i think source onion routing with progress notifications and route status seems pretty solid with the right choice of responses

i like this result here which has "generating actual literal indistinguishable noise" as the goal because there's a particular synth that plays exactly one time in justice stress which i imagine it to sound like