felix qui nihil debet

4 Followers
26 Following
268 Posts

Running IBM COBOL lab files locally

z/OS mainframe to Fedora Silverblue: I have finally been able to get the IBM COBOL compiler installed and running correctly on the ThinkPad. I've now started experimenting with the IBM lab COBOL files and getting them running locally. The first one is CBL0001. It deals with a data set of presidential names with account information. I downloaded the data set using Zowe. Claude and I had to make six changes to the source code in order to…

https://felixquinihildebet.wordpress.com/2026/03/02/running-ibm-cobol-lab-files-locally/

Felix qui nihil debet

YubiKey GPG Recovery Adventure

Recovery from a full Admin PIN lockout in OpenPGP. What went wrong? I am not sure. For some reason the three PIN tries permitted by the YubiKey were exhausted, and my Admin PIN was locked as well (after its three tries). Not even my management key or reset key was of much help. What caused this is unclear. I thought it was hacking, but Co-Pilot did not think so given the condition of the YubiKey particulars. All I had was a YubiKey that was completely locked on…

https://felixquinihildebet.wordpress.com/2026/02/28/yubikey-gpg-recovery-adventure/

The numerical dissipation introduced by a cheap semi-Lagrangian solver is a deterministic function of the current field state.

https://felixquinihildebet.wordpress.com/2026/02/16/turbulence-spectral-lab-notes-on-neural-correction-of-fast-fluid-solvers/

Turbulence Spectral Lab: Notes on Neural Correction of Fast Fluid Solvers

Claude 4.6 is impressive, enabling exploration despite a lack of domain expertise. This project demonstrates that neural networks can learn to correct the systematic numerical errors of fast fluid…
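To make the premise concrete, here is a minimal sketch (not the project's actual solver) of a 1-D semi-Lagrangian advection step in NumPy. The dissipation comes from the linear interpolation at the back-traced departure points, and for a fixed velocity and time step it depends only on the current field, which is what makes it a learnable, deterministic target.

```python
import numpy as np

def semi_lagrangian_step(u, velocity, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1-D grid.

    Each grid point is traced back along the (constant) velocity and the
    field is linearly interpolated at the departure point; that
    interpolation is the source of the numerical dissipation.
    """
    n = u.size
    x = np.arange(n) * dx
    x_dep = (x - velocity * dt) % (n * dx)      # departure points, periodic
    i0 = np.floor(x_dep / dx).astype(int) % n   # left neighbour
    i1 = (i0 + 1) % n                           # right neighbour
    w = (x_dep / dx) - np.floor(x_dep / dx)     # interpolation weight
    return (1.0 - w) * u[i0] + w * u[i1]

# Advect a sharp pulse: linear interpolation smears it step by step.
n, dx, dt, c = 128, 1.0, 0.4, 1.0
u = np.zeros(n)
u[60:68] = 1.0
v = u.copy()
for _ in range(50):
    v = semi_lagrangian_step(v, c, dt, dx)
```

After 50 steps the pulse's peak has visibly decayed while its integral is conserved, and re-running the loop from the same initial field reproduces the result exactly: deterministic error, not noise.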

Scientific computing with large datasets (over 70GB) can be conducted on an 8GB RTX4060: the WeatherBench2 ERA5 dataset is training (untuned, of course) on our Fortran cuDNN training engine with a U-Net CNN. On GitHub soon (after some further training runs).

https://felixquinihildebet.wordpress.com/2025/11/24/training-the-72gb-weatherbench2-era5-dataset-on-an-8gb-rtx4060/

Training the 72GB WeatherBench2 ERA5 dataset on an 8GB RTX4060

Our first real scientific computing dataset larger than GPU and system RAM trains, untuned and unoptimised, at 93 samples/sec with a 2.8e-7 max difference from PyTorch. I wanted to record some of the …
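For readers wondering how a 72GB file can train on a machine with 8GB of VRAM, the usual trick is streaming fixed-size batches from disk rather than loading the array. A hedged sketch of that pattern with `np.memmap` (file names and shapes here are illustrative, not the project's actual loader):

```python
import os
import tempfile
import numpy as np

def batch_stream(path, n_samples, sample_shape, batch_size, dtype=np.float32):
    """Yield fixed-size batches from an on-disk array without loading it
    into RAM. np.memmap maps file pages lazily, so resident memory stays
    bounded by the batch size, not the file size.
    """
    data = np.memmap(path, dtype=dtype, mode="r",
                     shape=(n_samples,) + sample_shape)
    for start in range(0, n_samples, batch_size):
        # np.array() copies only this slice into RAM for the training step.
        yield np.array(data[start:start + batch_size])

# Demo with a tiny stand-in file; the same pattern scales to 70+ GB.
tmp = os.path.join(tempfile.mkdtemp(), "era5_demo.bin")
arr = np.arange(24, dtype=np.float32).reshape(6, 2, 2)
arr.tofile(tmp)

batches = list(batch_stream(tmp, 6, (2, 2), batch_size=4))
```

The last batch is simply shorter when the sample count is not a multiple of the batch size, which most training loops tolerate.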

Currently refining the unified memory feature. Does anyone have ideas for large-dataset scientific projects that hobble PyTorch? CuPy and PyTorch crashing because of GPU memory is a constant annoyance.

https://felixquinihildebet.wordpress.com/2025/11/20/can-someone-suggest-a-scientific-project-involving-a-large-dataset-that-is-difficult-for-pytorch/

Can someone suggest a scientific project involving a large dataset that is difficult for PyTorch?

AI seems excited about the possibilities of 4x PyTorch speed at the same accuracy, plus unified memory, for scientific research. Kimi cloud seems excited (I fixed the price estimates for hardware as t…

We now have a full workflow, from data loading to training to inference, that has access to all the Python ML libraries but is able to train 4x faster than PyTorch. We can review the samples, check out the confusion matrix, and do all the other tests as normal.

https://felixquinihildebet.wordpress.com/2025/11/19/cnn-fortran-training-engine-workflow-is-now-complete/

CNN fortran training engine workflow is now complete

Load data in Python, train in HPC Fortran, and run inference and other tests in a Jupyter notebook. Building on top of our v28 Fortran training engine, I have added a Fortran-to-Python export module wi…
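The post does not show the export module itself, but one common way to bridge Fortran and Python is a raw column-major dump read back with NumPy. A hypothetical sketch, assuming the Fortran side writes an unformatted stream of float32 values with no record markers (names and shapes are illustrative):

```python
import os
import tempfile
import numpy as np

def load_fortran_array(path, shape, dtype=np.float32):
    """Read a raw array dumped by Fortran (unformatted stream access
    assumed, so no record markers) and reinterpret the column-major
    layout so Python indexing matches the Fortran indexing of the
    same element.
    """
    flat = np.fromfile(path, dtype=dtype)
    return flat.reshape(shape, order="F")  # Fortran is column-major

# Round-trip demo: write column-major bytes the way Fortran would,
# then read them back and check an element lines up.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
w = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=np.float32)
# ndarray.tofile always writes C order, so flatten in Fortran order first.
w.flatten(order="F").tofile(path)

w2 = load_fortran_array(path, (3, 2))
```

Note the subtlety that motivates the explicit `flatten(order="F")`: NumPy's `tofile` ignores the array's memory order and always writes C-order bytes, so the column-major layout has to be produced by hand.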

Oxford flowers 102: cutting training from 7 minutes to 1 second

A two-stage methodology offers advantages. A rather large change has taken place after the five-month CIFAR-10 ordeal. In one day I was able to get my old Oxford Flowers 102 training workflow up and running, but what surprised me was that I was able to get the training time down from 7 minutes for 100 epochs to 1 second. With a two-stage methodology I am also able to offload a lot of the work to existing…

https://felixquinihildebet.wordpress.com/2025/11/14/oxford-flowers-102-cutting-training-from-7-minutes-to-1-second/

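The two stages are not spelled out in the excerpt, but the classic version of this kind of speed-up is: extract features once with a frozen backbone, then retrain only a cheap linear head on the cached features. A sketch with synthetic stand-in features (pure NumPy, not the author's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (done once, offline): a frozen backbone turns each image into
# a feature vector. Here, class-clustered random vectors stand in for
# those cached features.
n_classes, feat_dim, n_per_class = 4, 32, 50
centers = rng.normal(size=(n_classes, feat_dim)) * 3.0
X = np.vstack([c + rng.normal(size=(n_per_class, feat_dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# Stage 2 (fast, repeated): train only a linear softmax head on the
# cached features -- this is where a minutes-to-seconds win can live.
W = np.zeros((feat_dim, n_classes))
b = np.zeros(n_classes)
lr = 0.1
onehot = np.eye(n_classes)[y]
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)                  # softmax cross-entropy grad
    W -= lr * X.T @ grad
    b -= lr * grad.sum(axis=0)

acc = float((np.argmax(X @ W + b, axis=1) == y).mean())
```

Because stage 1 is amortised over every later experiment, stage 2's cost is just a tiny linear model, which is why re-training can drop to around a second.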
CIFAR-10 cuDNN reaches PyTorch parity. We solved the column-major/row-major puzzle for a cuDNN intrinsic along the way.

https://felixquinihildebet.wordpress.com/2025/11/13/cifar-10-cudnn-reaches-pytorch-parity/

CIFAR-10 cuDNN reaches PyTorch parity

I can’t believe I spent five months on this. After my success with optimising the MNIST training workflow (it runs at unbelievable speed), CIFAR-10 seemed to be the next logical step. The dataset is…
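The column-major/row-major puzzle generalises beyond cuDNN: the same byte buffer read by C-order code (NumPy, PyTorch) and by Fortran-order code is seen with its dimensions reversed. A small NumPy demonstration of the reconciliation rule (shapes are illustrative, not the project's actual tensors):

```python
import numpy as np

# A row-major (C-order) 3-D buffer, as NumPy and PyTorch produce it...
a = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# ...handed to column-major (Fortran) code over the same bytes is seen
# as an array with the dimensions reversed: shape (4, 3, 2).
f = a.ravel(order="C").reshape(4, 3, 2, order="F")

# Reconciliation rule: C-order a[i, j, k] == Fortran-order f[k, j, i],
# because both map to flat offset i*12 + j*4 + k in the shared buffer.
```

Once this index-reversal rule is applied consistently at the language boundary, no data actually needs transposing; only the declared shape changes.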

How to restore YubiKey functions in Fedora Silverblue 43

Of course they had to break something. Fedora Silverblue 43 (and other variants) jettisoned scdaemon, thus breaking YubiKey support (for ‘gpg --card-status’, for example, and for Kleopatra). Unfort…

I noticed some things about Claude Sonnet when debugging a 10,000+ line code base over the course of a month.

https://felixquinihildebet.wordpress.com/2025/08/21/observations-after-a-month-of-debugging-with-claude-sonnet/

Observations after a month of debugging with Claude Sonnet

When the code base exceeds 10,000 lines, Claude needs help. After over a month of debugging, version 12 of my Fortran CIFAR-10 training workflow finally worked properly (even better than version 11). B…
