refulgentis

Hate to see you in gray. I went from dropout waiter to Google, via my own startup in between. And you nailed e v e r y t h i n g; I am screenshotting this and reading it over and over again for years to come. Great writing too. Cheers.

Thanks for following up on this: I was really surprised by how much air this paean to, idk, TDD took out of the comments by getting off-topic.

Before you commented, I started poking at what you described for 15 minutes, then forgot about it and fell asleep. Now I remembered, and I know it's viable; IIUC it's almost certainly going to make a big difference in my work practice moving forward. Cheers.

But that's exactly my point. "It's natural to discuss the broader category" is doing a lot of heavy lifting here. The blog post is making a very specific claim: that formal proof, checked by Lean's kernel, is qualitatively different from testing, because it lets you skip the human review loop entirely. cadamsdotcom's comment rounds that down to "executable specs good, markdown specs bad," which... sure, but that's been the TDD elevator pitch for 20 years.
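To make that distinction concrete, here's a minimal Lean 4 sketch (my own toy example, not from the post; `double` and the theorem name are made up): a test checks one input by evaluation, while a theorem checked by the kernel covers every input at once.

    -- Toy example: a test exercises one point, a proof covers all of Nat.
    def double (n : Nat) : Nat := n + n

    -- "Test": one concrete case, discharged by evaluation.
    example : double 3 = 6 := rfl

    -- Proof: every case at once, checked by Lean's kernel.
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega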

If someone posted a breakthrough in cryptographic verification and the top comment was "yeah, unit tests are great," we'd all recognize that as missing the point. I don't think it's unrelated, I think it's almost related, which is worse, because it pattern-matches onto agreement while losing the actual insight.

But isn't that tantamount to saying "his comment is a complete non sequitur"?

I've seen this sentiment and am a big fan of it, but I was confused by the blog post, and based on your comment you might be able to help: how does Lean help me? FWIW, for context: I code Dart/Flutter day to day.

I can think of some strawmen: for example, prove a state machine in Lean, then port the proven version to Dart? But I'm not familiar enough with Lean to know if that's like saying "prove the moon is made of cheese with JavaScript, then deploy it to the US mainframe."
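To make the strawman concrete, here's a minimal Lean 4 sketch of what I mean (toy machine, hypothetical names, assuming I have the idea right):

    -- Toy two-state machine with one proved invariant.
    inductive DoorState where
      | opened
      | closed

    inductive Event where
      | openDoor
      | closeDoor

    def step : DoorState → Event → DoorState
      | _, Event.openDoor  => DoorState.opened
      | _, Event.closeDoor => DoorState.closed

    -- Kernel-checked invariant: closing ends in `closed` from any state.
    theorem close_always_closes (s : DoorState) :
        step s Event.closeDoor = DoorState.closed := by
      cases s <;> rfl

Even if that's right, my understanding is the proof says nothing about the hand-ported Dart; the translation is exactly where the guarantee would leak.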

The operator overlap is real, and the correction that FoF is a non-profit rather than a federal program is fair. I'll take both points.

But look at what's happened across this exchange:

Essay: Industrial-scale CCAP fraud is "beyond intellectually serious dispute." The evidence: the OLA report shows investigators believed 50%+ of reimbursements were fraudulent.

When challenged that the OLA explicitly could not substantiate $100M and proven fraud was $5-6M: you retreat to "my citation that investigators believed this is absolutely true and is in the report as claimed."

When challenged that "investigators believed X" ≠ "X is beyond intellectually serious dispute" — especially when the same report documents the IG saying "I do not trust the allegation," DHS leadership calling $100M "not credible," investigators having "varying levels of certainty" with some having "not enough experience to have an opinion," and the OLA itself saying "we cannot offer a reliable estimate": you pivot to FoF operator overlap.

You still haven't addressed the actual critique. The overlap proves that some fraudsters work across programs. Not disputed, not surprising, and consistent with your essay's point about fraud lifecycles. What it doesn't do is retroactively substantiate Swanson's 50% CCAP figure. His methodology, spelled out in the very email the OLA appended, counted all payments to any center where children were poorly supervised as 100% fraudulent, a standard he himself distinguished from "the kind of proof needed in a criminal or administrative proceeding." FoF fraud was fabricating meal claims for meals never served. CCAP fraud is billing for children not present. Different programs, different billing mechanisms, different oversight bodies (MDE vs. DHS), different proven scales by orders of magnitude. That some of the same people ran both doesn't collapse the distinction.

"CCAP is also funded by federal block funding" seems designed to blur a line that matters. Many programs are federally funded. That doesn't make convictions in one program evidence of the fraud rate in another.

Here's what's frustrating: I think the essay's core argument is genuinely strong, and it doesn't need the overclaim. The OLA report is already damning on its own honest terms: proven fraud of $5-6M in a program with paper sign-in sheets prosecutors called "almost comical," no electronic attendance verification, 60-day billing windows, a certification statement removed from billing forms in 2013, and the investigators themselves saying centers "open faster than they can close the existing ones down." The report makes it clear fraud was likely substantially higher than proven convictions. That's already a devastating indictment of program oversight. Your argument about base rates, weak controls, and Shirley fishing in a troubled pond all follows from that, from the report as it actually reads.

But "likely substantially higher than $5-6M, in a program with terrible controls" is a very different claim than "50%+ fraud, beyond intellectually serious dispute." Your essay presents the latter. The report supports the former. And the gap matters, because the essay is being read right now, today. in a context where the president is calling Somalis "garbage," prosecutors are throwing out $9 billion estimates in press conferences, and five states just had billions in child care funding frozen. Overstating what the evidence supports isn't a minor rhetorical choice in that environment. It's exactly the kind of epistemic failure the essay warns about when it talks about "irresponsible demagogues" filling the vacuum.

I'm very amenable to the argument made in the blog post, but the tone and the sidecars attached to it made it feel quite off, as if, were I to look into it, it would turn out there was a rush to get the main idea down just so the sidecars could be attached.

I spent about 30 minutes consulting the report you mention, OP's post, and your reply, and there are clearly issues.

In the essay, you wrote that industrial-scale fraud is "beyond intellectually serious dispute," cited the OLA report, and presented the 50% figure as the finding that "staggers the imagination" (this should have been a tell).

When challenged, you retreat to: "My citation that investigators believed this is absolutely true."

Those are completely different claims. "An investigator believed X" is not "X is beyond intellectually serious dispute" — especially when the same report on the same pages says:

- The OLA itself: "We did not find evidence to substantiate the allegation" (p. 5)

- The DHS Inspector General: "I do not trust the allegation that 50 percent of CCAP money is being paid fraudulently" (p. 12)

- The investigators themselves had "varying levels of certainty — some thought it could be less, and some said they did not have enough experience to have an opinion" (p. 9)

- Swanson explicitly used "a view that does not require the kind of proof needed in a criminal or administrative proceeding" (p. 10), counting the entire payment to any center with poorly supervised kids as "fraud"

That's not "beyond intellectually serious dispute." It is, literally, a documented dispute, inside the very report you cite as settling the matter.

And "nine figures from convictions"? That's Feeding Our Future, a federal food nutrition program. The OLA report is about CCAP, a state childcare program. Proven CCAP fraud remains at $5-6 million. You're retroactively validating a CCAP claim with convictions from a different program.

Putting it all on the table: do you agree with the claim that binary analysis is just as good as source code analysis?

This is a Claude-coded website that invents metrics and data based on bald-faced lies; e.g., the first example makes up the claim that search in Mail never works, and then calculates fantastical millions of hours wasted.
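To illustrate the genre (purely hypothetical multipliers; these are not the site's actual figures), numbers like that are just an invented failure cost times a big user count:

    10,000,000 users × 5 min "wasted" per day ≈ 833,000 hours/day ≈ 304 million hours/year

The headline is baked entirely into the made-up inputs.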

I can report it works fine; it pulled up 13-year-old emails for me a couple of months ago.

I have absolutely abhorred Apple's software QC since 2010, but I don't think a vibe-coded, vibes-based fantasy, written by AI and given the sheen of numbers and reality, is the way to address it, or a net-positive outlet for my frustration. At least not on HN.

I assumed based on your post and the post you replied to that it is literally impossible to prove any AI is involved, and I trust both of you on that.

Given that, I'm afraid all the interlocution I have to offer is the thing you commented on, the mind of a downvoter, i.e. positing that every downvoter must have details, including details we[1] can't find.

Past that, I'm afraid to admit I am having difficulty understanding how the slides are related, and I don't even know what Matasano is -- is that who owns fly.io? I thought they were "indie" -- I'm embarrassed to admit I thought Monsanto at first. I do know how much I've used AI to code, so I can vouch for tptacek's post.

[1] Royal we, i.e. I trust you and OP so completely on what is findable vs. not findable that I trust we can't establish with 100% certainty that any sort of AI-based thingy was used at all. To be clear, too, 100% is always too high a bar; I mean to say we can't even establish it at 90% confidence. Even 1% confidence. If all we have is their word to go on, it's impossible.