hn_throwaway_99


I found this to be a very odd rant. The author's three issues with Apple are:

1. Gatekeeping. OK, fine, but at the very least this has been Apple's stance for a very long time now (the author talks about faxing credit card details), so it's not like it's something new. If you wanted full unfettered installation rights, Apple was never the company for you. And while I think it's fine to argue against Apple's stance, I find most of the arguments are less than honest about the pros of things like developer verification for the end user.

2. macOS 26. I agree that this is a total fiasco from a design perspective, and Liquid Glass is unqualified shit. Still, I see Apple at least somewhat moving in the right direction by getting rid of Alan Dye.

3. Apple had a bug in their age verification protocol. Again, valid point, but Apple needs to follow UK law. I've seen a lot more missives arguing against requiring things like driver's licenses and other government ID, and so it seems like Apple is at least trying to go the least restrictive route by choosing credit card verification.

To emphasize, I'm not apologizing for Apple here. In particular, much has been written about how Apple has lost their way regarding the "it just works" philosophy. But it seems like the author's main beef is against Apple's level of control, and this is just a fundamental difference in Apple's stance that has existed for about 2 decades.

If the stats were as good as the hyperbole in the article, it would clearly state the only 2 metrics that really matter: predictive value positive (the actual probability that you really have cancer if you test positive) and predictive value negative (the actual probability that you're cancer-free if you test negative). As tptacek points out, these metrics don't just depend on the sensitivity and specificity of the test; they are highly dependent on the underlying prevalence of the disease, which is why broad-based testing for relatively rare diseases often results in horrible PVP and PVN metrics.
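To make the prevalence dependence concrete, here is a minimal Python sketch of how PVP and PVN fall out of sensitivity, specificity, and prevalence via Bayes' rule. The numbers are illustrative assumptions, not figures from the article:

```python
def pvp_pvn(sensitivity, specificity, prevalence):
    """Predictive value positive/negative from test characteristics and prevalence."""
    tp = sensitivity * prevalence              # true positives, per person tested
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# A rare disease (0.5% prevalence) with a seemingly excellent test:
pvp, pvn = pvp_pvn(sensitivity=0.90, specificity=0.95, prevalence=0.005)
```

With these assumed inputs, the PVN looks great (above 99.9%) but the PVP is only about 8%: at low prevalence, the false positives from the healthy majority swamp the true positives.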

Based on your quoted sections, we can infer:

1. About 250 people got a positive result ("nearly one in 100")

2. Of those 250 people, 155 (62%) actually had cancer, and 95 did not.

3. About 24,750 people got a negative test result.

4. Assuming a false-negative rate of 1% (the quote says "over 99%"), of those 24,750 people, about 248 actually did have cancer, while about 24,502 did not.

When you write it out like that (and I know I'm making some rounding assumptions on the numbers), it means the test missed the majority of people who had cancer while subjecting over 1/3 of those who tested positive to fear and further expense.
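The arithmetic above can be checked in a few lines of Python. Note the 25,000-person cohort size is an assumption implied by "about 250" being "nearly one in 100"; the rounding follows the same assumptions as the list:

```python
tested = 25_000                      # assumed cohort size implied by "nearly one in 100"
positives = 250                      # ~1 in 100 tested positive
true_pos = round(0.62 * positives)   # 62% of positives actually had cancer -> 155
false_pos = positives - true_pos     # 95 healthy people with a positive result
negatives = tested - positives       # 24,750 negative results
false_neg = round(0.01 * negatives)  # ~1% false-negative assumption -> ~248
true_neg = negatives - false_neg     # ~24,502 correctly cleared

total_cancer = true_pos + false_neg            # everyone who actually had cancer
missed_fraction = false_neg / total_cancer     # majority of cancers missed
false_alarm_fraction = false_pos / positives   # over 1/3 of positives were scares
```

Under these assumptions the test catches 155 of roughly 403 actual cancers (missing about 62%), while 38% of the people it flags are false alarms.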