hmmmm shit (1) he's right except (2) key management is an interesting framing and indicates his tool is doing too much in a different way.
ok here i wouldn't say "too much" necessarily. but like. "key management" is a really high-level task
i do worry that "only curve25519" (fuck djb) could introduce unexpected assumptions elsewhere that aren't tested. but modifying the type of key is not the way to test them. and it's actually pretty sick to have:
generate symmetric key w salt (+entropy [effect])
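a sketch of the shape i mean — entropy as an explicit injected effect, not ambient global state. the kdf here is a placeholder xor fold so it runs without crypto crates; a real one would be HKDF or similar:

```rust
// Hypothetical sketch: "generate symmetric key w salt (+entropy [effect])".
// The entropy source is a parameter (an effect you pass in), so tests can
// inject a deterministic one and the randomness dependency is visible in
// the signature.
pub struct SymmetricKey(pub [u8; 32]);

pub fn generate_symmetric_key(
    salt: &[u8; 32],
    entropy: &mut dyn FnMut(&mut [u8]), // effectful entropy source, injected
) -> SymmetricKey {
    let mut ikm = [0u8; 32];
    entropy(&mut ikm); // the only side effect: drawing randomness

    // placeholder mix; a real implementation would run HKDF(salt, ikm)
    let mut key = [0u8; 32];
    for i in 0..32 {
        key[i] = ikm[i] ^ salt[i];
    }
    SymmetricKey(key)
}
```

point being: the "effect" annotation is just the closure parameter, and "w salt" is a plain argument instead of hidden state.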
soooooo what about cases that don't support a session-like context?
the docstring for the single struct in aes_ctr.rs:
/// A wrapper around [`ctr::Ctr32BE`] that uses a smaller nonce and supports an initial counter.
pub struct Aes256Ctr32(ctr::Ctr32BE<Aes256>);
yes, i can see that
literally what
#[derive(displaydoc::Display, thiserror::Error, Debug)]
pub enum Error {
    /// "unknown {0} algorithm {1}"
    UnknownAlgorithm(&'static str, String),
    /// invalid key size
    InvalidKeySize,
    /// invalid nonce size
    InvalidNonceSize,
    /// invalid input size
    InvalidInputSize,
    /// invalid authentication tag
    InvalidTag,
}
this is not the appropriate use of displaydoc fuckboys
thiserror. just impl error::Error. how is this real
(1) completely unrelated to the SPQR fuckboys
(2) 2021????
(3) fuckboy #2 "adding support for username links"
https://github.com/signalapp/libsignal/commit/e50bec648fed7d6f87648c2c7937a9eeda3841b3
COMPLETELY half-assed
impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Error::UnknownAlgorithm(typ, named) => write!(f, "unknown {} algorithm {}", typ, named),
            Error::InvalidKeySize => write!(f, "invalid key size"),
            Error::InvalidNonceSize => write!(f, "invalid nonce size"),
            Error::InvalidInputSize => write!(f, "invalid input size"),
            Error::InvalidTag => write!(f, "invalid authentication tag"),
            Error::InvalidState => write!(f, "invalid object state"),
        }
    }
}
this is actually much better for a !!!!cryptographic!!!!! error!!!!!
completely did not change the messages, or cases, just fucking removed Clone/Eq/PartialEq which sure that's not a correctness issue but why? why?
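and like, the zero-dependency version is barely longer. a sketch (trimmed to two variants for space): manual Display, then the empty error::Error impl, which is the entire thing thiserror was being paid for here:

```rust
use std::fmt;

#[derive(Debug)]
pub enum Error {
    UnknownAlgorithm(&'static str, String),
    InvalidKeySize,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Error::UnknownAlgorithm(typ, named) => {
                write!(f, "unknown {} algorithm {}", typ, named)
            }
            Error::InvalidKeySize => write!(f, "invalid key size"),
        }
    }
}

// the whole point: once Debug + Display exist, std::error::Error has default
// impls for everything, so this line is the entire "integration"
impl std::error::Error for Error {}
```

no derive macros, no proc-macro compile cost, and the error type still boxes into `Box<dyn std::error::Error>` like anything else.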
#[derive(Debug, displaydoc::Display, thiserror::Error)]
pub enum DecryptionError {
    /// The key or IV is the wrong length.
    BadKeyOrIv,
    /// These cases should not be distinguished; message corruption can cause either problem.
    BadCiphertext(&'static str),
}
brb distinguishing your cases
bro says i know. i know what to do
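for anyone who hasn't used displaydoc: the doc comment IS the Display format string, so that "should not be distinguished" editorial note ships as the user-facing error message. hand-expanded equivalent (written out manually here so it runs without the crate):

```rust
use std::fmt;

#[derive(Debug)]
pub enum DecryptionError {
    BadKeyOrIv,
    BadCiphertext(&'static str),
}

// roughly what displaydoc generates from those doc comments: the comment
// text becomes the Display output verbatim (the &'static str field isn't
// referenced because the doc comment has no {0} placeholder)
impl fmt::Display for DecryptionError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            DecryptionError::BadKeyOrIv => {
                write!(f, "The key or IV is the wrong length.")
            }
            DecryptionError::BadCiphertext(_) => write!(
                f,
                "These cases should not be distinguished; message corruption can cause either problem."
            ),
        }
    }
}
```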
signal-crypto = { path = "../crypto" }
our problem? too much crypto.......not enough signal crypto
same code
i would not accept this at all for any professional work
i would have given my undergrad students maybe a B for this, if it passed all the tests and i had given them the context to solve it
if it was a junior eng i would totally req to pair and it would be cool as hell and i would learn what kinds of criteria they were familiar with / assuming judged upon
i forgot i had an email to send about checksums but then i found fuckboy #1 at it again https://github.com/signalapp/libsignal/commit/8fcc30278c518306a9471d0ddb496b9a5e722dc6
who changes cbindgen.toml like that. who does that
Specializing this for exactly == 16 results in much better codegen
[does not provide codegen, or indicate method to reproduce]
const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4;
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

pub struct Aes256Ctr32 {
    aes256: Aes256,
    ctr: [u8; PAD_SIZE],
    pad: [u8; PAD_SIZE],
    pad_offset: usize,
}
i was gonna say "oh maybe he's C-brained" but no you can still do sizeof(arr) in C
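like, in rust the array length is part of the type, so a sketch of why the use sites don't need to re-consult a constant (constants copied from the struct above, function name is mine):

```rust
const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

// the length rides along with the type: `[u8; PAD_SIZE]` knows it's 128
// bytes, same as sizeof(arr) on a real (non-decayed) array in C
fn pad_len(pad: &[u8; PAD_SIZE]) -> usize {
    pad.len() // compile-time constant, no magic number at the call site
}
```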
they're describing this so generically as if it's theoretical and new
Practical Relevance of Randomness Manipulation
In addition to exposures of locally stored state secrets, randomness for generating (new) secrets is often considered vulnerable. This is motivated by numerous attacks in practice against randomness sources (e.g., [11]), randomness generators (e.g., [23,7]), or exposures of random coins (e.g., [22]). Most theoretic approaches try to model this threat by allowing an adversary to reveal attacked random coins of a protocol execution (as it was also conducted in related work on ratcheting). This, however, assumes that the attacked protocol honestly and uniformly samples its random coins (either from a high-entropy source or using a random oracle) and that these coins are only afterwards leaked to the attacker. In contrast, practically relevant attacks against bad randomness generators or low-entropy sources (e.g., [11,23,7]) change the distribution from which random coins are sampled. Consequently, this threat is only covered by a security model if considered adversaries are also allowed to influence the execution's (distribution of) random coins. Thus, it is important to consider randomness manipulation (instead of reveal), if attacks against randomness are regarded practically relevant.
and i mean it is i think since this paper described it
Examples for countermeasures are replacing bad randomness generators via software updates
simply sudo pacman -S openssl
Please note our distinction between key agreement and ratcheted key exchange protocols.
oh i am so fucking ready for this i am RIVETED
While the security provided by Signal is sufficient in most real-world scenarios,
this is the START of the sentence
we focus in this work on the theoretic analysis of the (optimally secure) primitive ratcheting with respect to its instantiability by smaller building blocks.
Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.
can you say that? is that allowed? this claim was not under dispute in the 60-page paper that sucks
the paragraph actually clarifies
All of the above mentioned works define security optimally with respect to their syntax definition and the adversary's access to the primitive execution (modeled via oracles in the security game). This is reached by declaring secrets insecure iff the adversary conducted an unpreventable/trivial attack against them (i.e., a successful attack that no instantiation can prevent). Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.
Thus, they do not require recovery from state exposures – which are a part of impersonation attacks
the ratcheting paper using an em dash to indicate a significant addendum
the state exposures => impersonation attacks step is described in terms of random state, which can then be used to infer the subsequent result of an "ephemeral" key, salt, etc
so: is it just a coincidence that these terms also directly bear upon anonymity?
Furthermore, both works neglect bad randomness as an attack vector
this paper continues and i'm more and more convinced
ok so no it's not a coincidence but it's a separate mechanism entirely
https://circumstances.run/@hipsterelectron/116269867302883821
the analogy my brain was making was "impersonation" => deanonymization, which is not a thing
random state is both consumed and constructed in the process of constant-bandwidth noise generation
"state exposure" was another mistake. i don't really know that the constant-bandwidth noise is an obvious target for attack, given that we can construct the interface which describes a baud rate and then sends bytes we like while filling the rest with noise and avoids timing variations. this doesn't seem problematic but it wouldn't be something exposed directly to like the cpu or threading or whatever. this is solvable and someone else i'm sure has considered this
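a sketch of that interface, under my assumptions (fixed-size frames emitted at a fixed rate; names are mine, and `fill_noise` is a stub where a real CSPRNG would go):

```rust
// Sketch of a constant-bandwidth channel: every tick emits exactly FRAME
// bytes, real payload first, the remainder filled with noise, so frame size
// never depends on how much real data there was. (Timing uniformity would be
// handled by the caller emitting frames on a fixed clock.)
const FRAME: usize = 256;

struct ConstantRateSender {
    queue: Vec<u8>, // pending real bytes awaiting transmission
}

impl ConstantRateSender {
    fn next_frame(&mut self, fill_noise: &mut dyn FnMut(&mut [u8])) -> [u8; FRAME] {
        let mut frame = [0u8; FRAME];
        let n = self.queue.len().min(FRAME - 2);
        // 2-byte big-endian length header so the receiver can strip padding
        frame[0..2].copy_from_slice(&(n as u16).to_be_bytes());
        frame[2..2 + n].copy_from_slice(&self.queue[..n]);
        self.queue.drain(..n);
        fill_noise(&mut frame[2 + n..]); // pad with noise, not zeros
        frame
    }
}
```

(in an encrypted channel the length header and payload would of course be inside the ciphertext, so padding vs payload is indistinguishable on the wire; this just shows the framing discipline.)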
the "state exposure" i was thinking of was the state of message progress that links message source to sink i.e. alice to bob, s to t, me to you
i think that's not the attack that would be relevant but rather identifying the variants of message channel used between neighboring peers, and from there inferring types of messages
so that does make constant-bandwidth noise the target for attack. which is good because i didn't know where that term was going and it seems like an eminently solvable problem
i think source onion routing with progress notifications and route status seems pretty solid with the right choice of responses