@zsapi

1 Follower
37 Following
58 Posts

#LearnLockpickingWithAlice lesson 11 (part two): DIY padlock shims!

Who wants to spend $5 on fancy padlock shims when you can make shitty ones for the price of a soda can (and maybe a finger or two)?

Today, I'm going to teach y'all how to turn a soda can into like 6 disposable padlock shims.

⚠️ Soda cans are razor sharp on the cut edges, please be careful; your fingers will thank you 🫶

1. To start, get yourself an empty can of soda, beer, energy drink, etc.

2. Using some scissors you don't care about, cut the top and bottom off along the bevel. You should now have a tube that is open at both ends.

3. Cut the tube down one side and flatten it out into a rectangle, then cut it into 2.5" x 1" strips.

4. Take a strip and (with a marker) divide it into a 4x4 grid.

5. Cut an "M" shape out of the bottom half of the strip.

6. Fold the top ¼ of the rectangle down.

7. Fold the legs of the "M" up and over the top on each side.

8. Shape the shim around a lock shackle.

9. Shim something.
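(Back-of-the-envelope check on the "like 6" claim — the can dimensions below are assumptions for a standard 12 oz can, not measurements:)

```python
# Rough strip-yield estimate for one soda can. All dimensions are
# assumed, ballpark figures for a standard 12 oz can.
CAN_CIRCUMFERENCE_IN = 8.3  # width of the flattened rectangle after the side cut
USABLE_HEIGHT_IN = 3.5      # height left after trimming off the top/bottom bevels

STRIP_W_IN = 2.5
STRIP_H_IN = 1.0

# Strips laid out in a simple grid, no clever nesting.
per_row = int(CAN_CIRCUMFERENCE_IN // STRIP_W_IN)
rows = int(USABLE_HEIGHT_IN // STRIP_H_IN)
print(per_row * rows)  # 9 strips, so "like 6" is a comfortable lower bound
```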

#AlicePics #DIY #Locksport #BypassTechniques #AltText

MLA-ACLS-AHA Depositions (6.81 GB)

AKA The DOGE Depositions

Censored depositions of two members of the Department of Government Efficiency (DOGE), Justin Fox and Nathan Cavanaugh, along with two members of the NEH, Adam Wolfson and Michael McDonald.

https://ddosecrets.org/article/mla-acls-aha-depositions

Help us keep publishing: https://donorbox.org/ddosecrets

MLA-ACLS-AHA Depositions - Distributed Denial of Secrets

On March 6, 2026, the American Council of Learned Societies (ACLS), American Historical Association (AHA), and Modern Language Association (MLA) filed a joint lawsuit to restore federal funding for ed…

@merill The orgs won't allow employees to use anything else, and you know it. Sadly you're not the first to require non-rooted devices, but it's still another step back for freedom and privacy. Let us use our general-purpose pocket computers as we wish, or at least let orgs toggle whether this is required. Though most will just enable it without question.
@merill yeah sure, make sure we can't control our devices as we want to, but only as the duopoly/governments allow. Great step toward freedom and security /s
@d4v @Tutanota I had to do some searching too. It's about their contract with the US Pentagon after Anthropic refused to do everything the government asked. I think they drew the line at fully autonomous killing by their AI, and at using it for mass surveillance or something similar (I'm a bit hazy on this part).
Here is an article about it (Firefox reader mode bypasses the paywall):
https://fortune.com/2026/02/28/openai-pentagon-deal-anthropic-designated-supply-chain-risk-unprecedented-action-damage-its-growth/
And here is Anthropic's blog post: https://www.anthropic.com/news/statement-department-of-war
OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a ‘supply chain risk’—an unprecedented action likely to crimp its growth

Anthropic said it will contest the decision—but the damage may already be done.

Fortune
@EUCommission
Hey there, EU Commission. As a Hungarian citizen I must ask: could we not do this? This is a short-sighted and arbitrary limitation that adds nothing to security but strengthens our dependence on the US.
@tinker as a random kinda recent follower: oh hell yeah, that sounds awesome
@sarahtaber What I meant to say is that it is trained on all of the creators' comments, not just yours. It then tries to do some further fine-tuning on yours, which seems to not be weighted heavily enough in your case, or maybe it simply doesn't have enough samples. So it sees your content as "a woman talking about farming" and falls back on the more common answer profile. This is a great example of how sexism and other outdated, harmful viewpoints get trained into LLMs. Love what you do btw 🙂
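The weighting idea above can be sketched as mixing a big general corpus with a small per-creator one (all numbers and the function here are hypothetical — this is not YouTube's actual pipeline):

```python
# Toy model of fine-tuning influence: a reply bot blending a large general
# comment corpus with one creator's much smaller corpus. If the creator's
# samples are few and barely up-weighted, the general style dominates.

def creator_influence(general_samples, creator_samples, creator_weight=1.0):
    """Fraction of total training mass contributed by the creator's own comments."""
    creator_mass = creator_samples * creator_weight
    return creator_mass / (general_samples + creator_mass)

# A creator with 500 comments against a 1M-comment general corpus:
print(round(creator_influence(1_000_000, 500), 4))         # negligible influence
# Even a 100x up-weight still leaves the general corpus dominant:
print(round(creator_influence(1_000_000, 500, 100.0), 4))
```

Under these made-up numbers the creator's own voice never reaches even 5% of the training mass, which would line up with the bot defaulting to the generic answer profile.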

@sarahtaber H.G. Modernism did an interesting test on this. It shows that the LLM uses your own comments. I assume it is a general LLM that is then explicitly fine-tuned on creators' comments, which lines up with your observation.

https://youtu.be/dIs9c2tetqM

Just... teaching YT's "reply-bot" to be mean 😈

YouTube
@GossiTheDog I think you'd make quite a few people happy if you released it. Are you considering it at all now? Is there hope? 🙃