@emenel @haskal i still think he sounds like me when i don't have a concrete use case for this part though
Another difference is that while authentication can happen at the key exchange level, and the derived shared symmetric key can be used with STREAM as age does, signatures need to be necessarily computed over the whole message. This sets us back on making the format seekable and streamable: either we make an expensive asymmetric signature for every chunk, or we get fancy with signed Merkle trees, which anyway get us a streamable format only either in the encryption or in the decryption direction. (Or, like discussed above, we just stick a signature at the end and release unverified plaintext at decryption time, causing countless vulnerabilities.)
particularly this part
This sets us back on making the format seekable and streamable
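to make the merkle-tree option from the quote concrete: a toy sketch in python of signing-friendly seekable verification. you build a merkle root over per-chunk hashes, sign the root once (signing is left abstract here, the root just stands in for "the thing you'd sign"), and then any single chunk can be verified with a logarithmic-size proof. the function names and the `leaf`/`node` domain separation are mine, not age's or anyone's actual format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Merkle root over per-chunk hashes (duplicating the last node on odd levels)."""
    level = [h(b"leaf" + c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(b"node" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(chunks, index):
    """Sibling hashes needed to verify chunk `index` against the root."""
    level = [h(b"leaf" + c) for c in chunks]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((sib < index, level[sib]))  # (sibling-is-left?, sibling hash)
        level = [h(b"node" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_chunk(chunk, proof, root):
    node = h(b"leaf" + chunk)
    for is_left, sib in proof:
        node = h(b"node" + sib + node) if is_left else h(b"node" + node + sib)
    return node == root

chunks = [b"chunk-%d" % i for i in range(5)]
root = merkle_root(chunks)      # a real format would sign this root once
proof = merkle_proof(chunks, 3)
assert verify_chunk(chunks[3], proof, root)      # seek + verify one chunk
assert not verify_chunk(b"tampered", proof, root)
```

which also shows the asymmetry the quote complains about: you can verify chunks out of order once the root arrives, but you can't *produce* the signed root until you've seen every chunk, so you only get streamability in one direction.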
@emenel @haskal lmao he's so funny
We made it a good UNIX tool, working on pipes
sir i do build tools, that is literally THE problem. i know of 3 individuals working on it incl me. none of us have solved it, we just like ponder it
One thing we decided is that we’d not include signing support. Signing introduces a whole dimension of complexity to the UX
hmmmm shit (1) he's right except (2) key management is an interesting framing and indicates his tool is doing too much in a different way.
ok here i wouldn't say "too much" necessarily. but like. "key management" is a really high-level task
i do worry that "only curve25519" (fuck djb) could introduce unexpected assumptions elsewhere that aren't tested. but modifying the type of key is not the way to test them. and it's actually pretty sick to have:
generate symmetric key w salt (+entropy [effect])
soooooo what about cases that don't support a session-like context?
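the "generate symmetric key w salt (+entropy [effect])" step is, concretely, just a salted KDF call where the fresh salt is the only effectful part. a sketch with python's stdlib scrypt; `derive_key` is my name, and the n/r/p parameters are illustrative, not tuning advice:

```python
import hashlib
import os

def derive_key(secret: bytes, salt: bytes) -> bytes:
    # scrypt: memory-hard KDF; parameters here are illustrative only
    return hashlib.scrypt(secret, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)            # the entropy "[effect]" lives entirely here
k1 = derive_key(b"shared secret", salt)
k2 = derive_key(b"shared secret", salt)
assert k1 == k2 and len(k1) == 32          # pure given (secret, salt)
assert derive_key(b"shared secret", os.urandom(16)) != k1  # new salt, new key
```

the nice property being pointed at: everything downstream of the salt draw is deterministic and trivially testable, which is exactly why isolating the entropy as an effect is "pretty sick".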
the docstring for the single struct in aes_ctr.rs:
/// A wrapper around [`ctr::Ctr32BE`] that uses a smaller nonce and supports an initial counter.
pub struct Aes256Ctr32(ctr::Ctr32BE<Aes256>);
yes, i can see that
literally what
#[derive(displaydoc::Display, thiserror::Error, Debug)]
pub enum Error {
    /// "unknown {0} algorithm {1}"
    UnknownAlgorithm(&'static str, String),
    /// invalid key size
    InvalidKeySize,
    /// invalid nonce size
    InvalidNonceSize,
    /// invalid input size
    InvalidInputSize,
    /// invalid authentication tag
    InvalidTag,
}
this is not the appropriate use of displaydoc fuckboys
thiserror. just impl error::Error. how is this real
(1) completely unrelated to the SPQR fuckboys
(2) 2021????
(3) fuckboy #2 "adding support for username links"
https://github.com/signalapp/libsignal/commit/e50bec648fed7d6f87648c2c7937a9eeda3841b3
COMPLETELY half-assed
impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Error::UnknownAlgorithm(typ, named) => write!(f, "unknown {} algorithm {}", typ, named),
            Error::InvalidKeySize => write!(f, "invalid key size"),
            Error::InvalidNonceSize => write!(f, "invalid nonce size"),
            Error::InvalidInputSize => write!(f, "invalid input size"),
            Error::InvalidTag => write!(f, "invalid authentication tag"),
            Error::InvalidState => write!(f, "invalid object state"),
        }
    }
}
this is actually much better for a !!!!cryptographic!!!!! error!!!!!
completely did not change the messages, or cases, just fucking removed Clone/Eq/PartialEq which sure that's not a correctness issue but why? why?
#[derive(Debug, displaydoc::Display, thiserror::Error)]
pub enum DecryptionError {
    /// The key or IV is the wrong length.
    BadKeyOrIv,
    /// These cases should not be distinguished; message corruption can cause either problem.
    BadCiphertext(&'static str),
}
brb distinguishing your cases
bro says i know. i know what to do
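and "not distinguishing cases" is easy to show directly: collapse every internal decryption failure, size checks and tag checks alike, into one opaque outward error, so a caller (or an attacker parsing error strings) learns nothing about which check failed. a toy sketch, not libsignal's actual API; the encrypt-then-MAC construction with a hash-based keystream here is strictly illustrative, do not use it:

```python
import hashlib
import hmac

class DecryptionError(Exception):
    def __str__(self):
        return "decryption failed"   # same message for every cause

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    try:
        if len(key) != 32 or len(nonce) != 12 or len(ciphertext) < 32:
            raise ValueError            # size problems
        ct, tag = ciphertext[:-32], ciphertext[-32:]
        if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
            raise ValueError            # tag problem
        return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
    except ValueError:
        raise DecryptionError() from None  # one opaque error, every cause

key, nonce = b"\x00" * 32, b"\x01" * 12
ct = encrypt(key, nonce, b"hello")
assert decrypt(key, nonce, ct) == b"hello"
```

note the `from None`: it suppresses the internal cause chain so even the traceback doesn't leak which branch fired.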
signal-crypto = { path = "../crypto" }
our problem? too much crypto.......not enough signal crypto
same code
i would not accept this at all for any professional work
i would have given my undergrad students maybe a B, assuming it passed all the tests and i'd given them the context to solve it
if it was a junior eng i would totally req to pair and it would be cool as hell and i would learn what kinds of criteria they were familiar with / assuming judged upon
i forgot i had an email to send about checksums but then i found fuckboy #1 at it again https://github.com/signalapp/libsignal/commit/8fcc30278c518306a9471d0ddb496b9a5e722dc6
who changes cbindgen.toml like that. who does that
Specializing this for exactly == 16 results in much better codegen
[does not provide codegen, or indicate method to reproduce]
const AES_BLOCK_SIZE: usize = 16;
const PAR_BLOCKS: usize = 8;
const NONCE_SIZE: usize = AES_BLOCK_SIZE - 4;
const PAD_SIZE: usize = PAR_BLOCKS * AES_BLOCK_SIZE;

pub struct Aes256Ctr32 {
    aes256: Aes256,
    ctr: [u8; PAD_SIZE],
    pad: [u8; PAD_SIZE],
    pad_offset: usize,
}
i was gonna say "oh maybe he's C-brained" but no you can still do sizeof(arr) in C
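for the record, those constants spell out the CTR block layout: a 12-byte nonce plus a 32-bit big-endian counter fills one 16-byte AES input block, and the pad buffers 8 blocks (128 bytes) of keystream per refill. a sketch of just the block construction, no actual AES; `counter_block` is my name for it:

```python
import struct

AES_BLOCK_SIZE = 16
PAR_BLOCKS = 8
NONCE_SIZE = AES_BLOCK_SIZE - 4          # 12: room left for a 32-bit counter
PAD_SIZE = PAR_BLOCKS * AES_BLOCK_SIZE   # 128 bytes of keystream per refill

def counter_block(nonce: bytes, counter: int) -> bytes:
    """16-byte CTR input block: nonce || 32-bit big-endian counter."""
    assert len(nonce) == NONCE_SIZE
    return nonce + struct.pack(">I", counter)

block = counter_block(b"\x00" * NONCE_SIZE, 1)
assert len(block) == AES_BLOCK_SIZE
assert block[-4:] == b"\x00\x00\x00\x01"
```

the "Ctr32BE" in the wrapper's name is exactly this: only the low 4 bytes increment, big-endian.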
they're describing this so generically as if it's theoretical and new
Practical Relevance of Randomness Manipulation
In addition to exposures of locally stored state secrets, randomness for generating (new) secrets is often considered vulnerable. This is motivated by numerous attacks in practice against randomness sources (e.g., [11]), randomness generators (e.g., [23,7]), or exposures of random coins (e.g., [22]). Most theoretic approaches try to model this threat by allowing an adversary to reveal attacked random coins of a protocol execution (as it was also conducted in related work on ratcheting). This, however, assumes that the attacked protocol honestly and uniformly samples its random coins (either from a high-entropy source or using a random oracle) and that these coins are only afterwards leaked to the attacker. In contrast, practically relevant attacks against bad randomness generators or low-entropy sources (e.g., [11,23,7]) change the distribution from which random coins are sampled. Consequently, this threat is only covered by a security model if considered adversaries are also allowed to influence the execution's (distribution of) random coins. Thus, it is important to consider randomness manipulation (instead of reveal), if attacks against randomness are regarded practically relevant.
and i mean it is i think since this paper described it
Examples for countermeasures are replacing bad randomness generators via software updates
simply sudo pacman -S openssl
Please note our distinction between key agreement and ratcheted key exchange protocols.
oh i am so fucking ready for this i am RIVETED
While the security provided by Signal is sufficient in most real-world scenarios,
this is the START of the sentence
we focus in this work on the theoretic analysis of the (optimally secure) primitive ratcheting with respect to its instantiability by smaller building blocks.
Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.
can you say that? is that allowed? this claim was not under dispute in the 60-page paper that sucks
the paragraph actually clarifies
All of the above mentioned works define security optimally with respect to their syntax definition and the adversary's access to the primitive execution (modeled via oracles in the security game). This is reached by declaring secrets insecure iff the adversary conducted an unpreventable/trivial attack against them (i.e., a successful attack that no instantiation can prevent). Consequently, fixing syntax and oracle definitions, no stronger security definitions exist.
Thus, they do not require recovery from state exposures – which are a part of impersonation attacks
the ratcheting paper using an em dash to indicate a significant addendum