Richard "mtfnpy" Harman (he/him)

873 Followers
835 Following
251 Posts

New follower requests: I deny anybody who doesn't put the minimum amount of effort into filling out their Mastodon profile. I'll accept you if you try again later w/ a filled-out profile!

previously twitter.com/xabean || Sourcefire VRT -> Cisco Talos || perl, music, linux, 3d printing, electronics, woodworking, security research, and single.

💡 If I follow you, I think you're smart and have things to say/share I want to think about.

​ If you have cryptocurrency/blockchain tendencies, I do not want to be part of your timeline, nor do I want you in mine. ✌️

​ I'm here to follow/be followed by people, not brands. This is a personal interaction account, I get to be as selective as I like.

If we've never interacted, and I block you, chances are it's because you posted GenAI trash.

blah blah words do not represent employers, use as directed, do not taunt happy fun ball, only you can prevent forest fires, the limits are UNKNOWN at zombocom.

Good representation of the inside of my brain

Edit - the drummer - https://mastodon.art/@liebach/116340007790020165

Weekly #nook DIY cloud thread update: https://xdaforums.com/t/progress-on-a-diy-cloud-for-eol-nook-hardware-bnrv500-possibly-others.4782115/

A lot of goals accomplished. I'm impressed with myself for having "got most of the bare-minimum cloud proof of concept working in my spare evening/weekend time in about two months."

Finally got device->cloud, and cloud->device sync to a good place; have a mostly functional persistent database backend, next will be getting more than one book in the cloud. Right now I've got one hardcoded response in the API, without any database tables supporting downloads just yet.

I did discover the Calibre library database has most of the same metadata I need to send to connected devices, so it may be entirely possible to just drop a Calibre library into this DIY nook cloud when it's all done.
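Since Calibre's library database already holds most of that metadata, a minimal sketch of pulling it out looks something like this. This is a hypothetical illustration (the function name `list_books` and the returned dict shape are mine, not from the project); it assumes Calibre's standard `metadata.db` SQLite schema with its `books`, `authors`, and `books_authors_link` tables:

```python
import sqlite3

def list_books(library_path):
    """Hypothetical sketch: read per-book metadata a Nook-style sync API
    might serve, straight out of a Calibre library's metadata.db.

    Assumes Calibre's standard schema: books, authors, and the
    books_authors_link join table.
    """
    db = sqlite3.connect(f"{library_path}/metadata.db")
    rows = db.execute(
        """
        SELECT b.id, b.title, a.name, b.path
        FROM books b
        JOIN books_authors_link l ON l.book = b.id
        JOIN authors a ON a.id = l.author
        ORDER BY b.id
        """
    ).fetchall()
    db.close()
    return [
        {"id": book_id, "title": title, "author": author, "path": path}
        for (book_id, title, author, path) in rows
    ]
```

A sync endpoint could then serialize this list as JSON instead of the current hardcoded response, with the database doing the bookkeeping.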

Progress on a DIY cloud for EOL Nook hardware (BNRV500, possibly others)

(cross-post from r/nook because there's not much traction over there) Week of Mar 2 2026 progress: Frustrated that my old Nook Glowlight from 2013 works standalone, but hints at content up in the cloud that I can't download any more, I wondered...

XDA Forums

#MASH on #war

Hawkeye: War isn't Hell. War is war, and Hell is Hell. And of the two, war is a lot worse.

Father Mulcahy: How do you figure, Hawkeye?

Hawkeye: Easy, Father. Tell me, who goes to Hell?

Father Mulcahy: Sinners, I believe.

Hawkeye: Exactly. There are no innocent bystanders in Hell. War is chock full of them - little kids, cripples, old ladies. In fact, except for some of the brass, almost everybody involved is an innocent bystander.

Dear #dsp / #mathstodon: I come to you deeply vexed about a subject.

We have well established, in my Teardown talk, that I believe that the Fourier Transform is the most beautiful concept in DSP -- using a simple change of basis into a frequency-orthogonal form unlocks so much optimization. And the Fast Fourier Transform is the most beautiful algorithm, doing this lovely dance of carefully reusing computation that you have already performed to simplify what by all rights ought be an $O(n^2)$ computation into an $O(n \log n)$ computation.

However, I am upset, angry, and simply vexed about a different algorithm: the Short Time Fourier Transform. Often, it seems like I would like to get information about a *changing* signal with time granularity that is smaller than just one Fourier transform width. For instance, I would like to know how the cross-correlation of two signals varies over time. Or I would like to evaluate many offsets into a signal to choose which one best lines my block up with my Fourier window. The Short Time Fourier Transform is the common solution to this. The algorithm, I fear, is dumb as rocks -- if you would like to evaluate a signal as $k$ overlapping parts, you just... do $k$ Fourier transforms of your choice.

It feels like there ought be a better way. For all of the beauty and brains of the Cooley-Tukey Fast Fourier Transform -- it so very exquisitely reuses computation to form the output -- when you do a STFT on top of it, if you want $k$ overlapping blocks, the cost is... $O(kn \log n)$. Even though each of the $k$ may have all but one sample overlapping in there! It feels like we must be throwing away so much valuable information to just repeatedly redo an FT over and over again.
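To make the complaint concrete, here is a minimal sketch of that naive STFT in NumPy (the function name, window choice, and signal parameters are my own illustration, not anyone's library API). With a hop of one sample, consecutive windows share all but one sample, yet each gets its own full FFT:

```python
import numpy as np

def naive_stft(x, win_len, hop):
    """Naive STFT: one full FFT per (possibly heavily overlapping) window.

    For k windows of length n this costs O(k * n log n), even when
    consecutive windows differ by only `hop` samples.
    """
    window = np.hanning(win_len)
    starts = range(0, len(x) - win_len + 1, hop)
    return np.array([np.fft.rfft(x[s:s + win_len] * window) for s in starts])

# Illustration: one second of a 1 kHz tone at 8 kHz sample rate,
# 256-sample windows advancing one sample at a time -- 7745 nearly
# identical transforms, each computed from scratch.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
spec = naive_stft(x, 256, 1)
```

Each row of `spec` is the spectrum of one window; the 1 kHz tone lands on bin 32 (1000 / 8000 * 256) in every one of them.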

Dear Mathstodon, I beg of you. What am I missing here? If I want to do many closely overlapping Fourier transforms, *is* there a better way? Is this a sign that I am trying to do something fully wrong -- and if so, what's the better way to do, for example, the things I described before? Or have I just stumbled into the dark shadow of frequency-domain operations that we dare not speak of?
I experimented with unblocking almost everyone I had blocked on mastodon. Apparently I made the correct decision to block someone for being an absolute unit of a shitbag.
I'm going to invent explosion-proof nitroglycerine by (concussively) slapping a label on it that says "explosion proof", and blame the employee slapping the label on the box when it blows their hand off.
So many companies are trying to "secure AI" -- how do you secure something that is intentionally and flagrantly insecure?

Please continue deploying random npm packages on your computers. Incident Response consultants are standing by. Ask about our "NO AI" service, where we teach you not to use AI. We will gladly accept money from you, to teach you not to use AI.

https://masto.deoan.org/@neurovagrant/116341947161524505