| Blog and stuff | https://tobtu.com |
| GitHub | https://github.com/Sc00bz |
I just learned that JS has the exponentiation operator **:
Math.pow(a, b) == a ** b
I remember when I was like, fuck it, typed "a ^ b" in Excel and was like wait, that worked? This is that moment for me, but for JS. I remember having Excel 97 and Excel XP (or 2003). I assume 97 didn't have ^ but XP (or 2003) did... I'm not old, you're old.
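For anyone else just discovering it, a quick sketch of the operator. Two things that are easy to trip over: ** is right-associative, and putting a unary minus directly on the base is a SyntaxError.

```javascript
// ** is exponentiation, equivalent to Math.pow
console.log(2 ** 10);                      // 1024
console.log(Math.pow(2, 10) === 2 ** 10);  // true

// Unlike most binary operators, ** is right-associative:
console.log(2 ** 3 ** 2);                  // 512, i.e. 2 ** (3 ** 2), not (2 ** 3) ** 2

// Note: writing -2 ** 2 is a SyntaxError; you must parenthesize:
console.log((-2) ** 2);                    // 4
```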
Or just recreate these with better settings and have it take ~43 days on an RTX 4090. 96% success rate, 2 perfect tables, 942651571967 chains/table (before perfecting), chain length of 630000, <3 TB (RTI2), 6 steps. These are twice as fast to use too.
OK or ~100 days on an RTX 4090. 99.9% success rate, 4 perfect tables, 1265875614643 chains/table (before perfecting), chain length of 720000, 3.95 TB (DIRT) (or 5.56 TB (RTI2)), 10 steps. These take 1.443x longer to use, but if you only use 2 tables it's 1.386x faster with a higher success rate: 96.84% (vs 94.75%).
Rainbow tables aren't that hard. Woof: 94.75% success rate, 1 imperfect table, unsorted RT files, sequential start points, 549755568128 chains, chain length of 881689, 8 TiB. The effective rate is 89.77%.
You'll need to run rtsort on all the tables, then rtmerge, then rt2rti2 to make it 4 TiB instead of 8 TiB. If only they knew what they were doing.
https://cloud.google.com/blog/topics/threat-intelligence/net-ntlmv1-deprecation-rainbow-tables/
Anyone know the bandwidth of L2 cache on an RTX 5080?
(The specific case is large sequential reads by relatively few threads (~100). Using the async copy functions into L1 cache (shared memory) should be near peak bandwidth. Also the data will be set to persist in L2 cache.)