Jacob Williams ✅

@jacobwilliams
69 Followers
38 Following
183 Posts
Orbital mechanic & programmer. #Fortran aficionado. Personal account. Opinions are my own! 🚀
Website: https://degenerateconic.com/pages/about.html
Twitter: https://twitter.com/DegenerateConic
GitHub: https://github.com/jacobwilliams
BlueSky: https://bsky.app/profile/degenerateconic.bsky.social

From Bruce Schneier: "All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted."

https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html

#LLM #Veracity


Conda ≠ PyPI

Conda isn’t just another Python package manager; it’s a multi-language, user-space distribution system.

In this 3-part series, we explore the fundamental differences between conda and PyPI, and why understanding them matters for your workflow.

Part 1 is live now 👇
https://conda.org/blog/conda-is-not-pypi
#conda #packaging #python


Current status
I'm vibe coding 3D trajectory graphics using #GitHub #Copilot + #Python + #panda3d.

This thread https://masto.hackers.town/@zwol/114529930473321177 from @zwol reminds me of one of my peeves when reviewing scientific code - scientists and engineers too lazy or ignorant to write numeric literals properly. "1.0" is a floating point number of default type. "1" is an integer. They are _different_ numbers. 5 / 2 is 2 with a remainder of 1. 5.0 / 2.0 is 2.5.
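In Python terms (just an illustrative sketch of the same distinction), it looks like this:

```python
# Integer and float literals are different types, and division behaves accordingly.
q, r = divmod(5, 2)          # integer division: quotient 2, remainder 1
print(q, r)                  # 2 1
print(5.0 / 2.0)             # 2.5: float literals give float division
print(type(1) is type(1.0))  # False: int and float are distinct types
```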

Then there's crap like ".5" and "1.": incomplete, broken notation. There's no excuse for that laziness. If you're too lazy to type a simple zero, you really ought to do something other than write code. Go wash lab glassware or coil up stray cables; just stay away from the computer.

Numeric type inference sucks but it's fairly easy to ensure it never causes you problems - just explicitly say what you mean. If you want a float, write a float. Your fingers will not fall off if you write more characters than the minimum the compiler requires.

When modernizing old code, I often have to explicitly set the type of numeric literals to avoid precision loss: 1.0 / 3.0 is not the same as 1.0_real64 / 3.0_real64 - in Fortran and a few other languages the default real type is real32. Type inference doesn't work the way most people think it does. Initializing a real64 variable to 1.0 / 3.0 sets it to real(1.0_real32 / 3.0_real32, real64): one-third truncated to a real32, then zero-extended to a real64. Congratulations - you just threw seven digits of precision into the garbage and your code still compiles.
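The same truncate-then-widen trap can be demonstrated in pure Python by round-tripping a value through IEEE single precision (a sketch using the stdlib struct module, not the Fortran code itself):

```python
import struct

def to_real32(x: float) -> float:
    """Round-trip a Python float (a real64) through IEEE single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

third64 = 1.0 / 3.0              # one-third at full double precision
third32 = to_real32(1.0 / 3.0)   # one-third truncated to single, then widened back
print(abs(third64 - third32))    # on the order of 1e-8: about seven digits lost
```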

Much of this can be fixed with a few regexes and search-and-replace unless the code author was lazy and left off leading and trailing zeroes on the significand. Then it's a long slog of picking through the wreckage of their code to fix not only type and precision but to correct basic notation as well. All because some ass felt his manhood was threatened by having to type a zero.
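A minimal sketch of such a regex pass (the `_wp` kind suffix and the function name are illustrative; it assumes well-formed `digits.digits` literals and deliberately ignores exponent forms, strings, and comments):

```python
import re

def add_kind_suffix(source: str, suffix: str = "_wp") -> str:
    """Append a kind suffix to bare real literals like 1.0, skipping
    literals that already carry a suffix such as 2.5_real64."""
    return re.sub(r"\b(\d+\.\d+)(?!_\w)", r"\g<1>" + suffix, source)

print(add_kind_suffix("x = 1.0 / 3.0"))   # x = 1.0_wp / 3.0_wp
print(add_kind_suffix("y = 2.5_real64"))  # unchanged
```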

If people had written what they meant in Python 2 (5.0 / 2.0 vs 5/2) there would've been slightly less indigestion when Python 3 changed the integer division rules. It still wouldn't have defended against the other language sabotage, but in the specific case of division it would've helped.
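A quick illustration of that divergence, in Python 3 syntax: explicit float literals behave identically in both versions, while bare integer division changed underneath lazy code.

```python
# Python 3: / is true division; Python 2's integer-division behavior lives on as //.
print(5 / 2)      # 2.5 in Python 3 (silently 2 in Python 2)
print(5 // 2)     # 2 in both
print(5.0 / 2.0)  # 2.5 in both: writing what you mean is portable
```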

(I spent a week or so converting a lightly modernized safety analysis code from 1983 to parameterized numerical precision, basically an F66 to F2008 conversion. I migrated probably over a thousand literals from default type to explicit user-specified type. Tedious as hell fixing all that, but it means the code can go another 40 years without maintenance, and we can rule out an avoidable source of error in a code intended for safety analysis.)

Zack Weinberg (@[email protected])

May as well throw this query out to the fediverse. I am looking for *concrete examples* of code that works correctly when interpreted by Python 2.7 but *silently produces incorrect output* when interpreted by Python 3.x. I encountered such a thing about 10 years ago but didn't save it and have been unable to reconstruct it. All examples are good, but examples that produce output from which it's difficult or impossible to recover the correct output are better. #python #pycon


jsonspice: a #Python library to monkeypatch #SpiceyPy to allow #JSON kernels.
https://github.com/jacobwilliams/jsonspice
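Monkeypatching here just means replacing a library's attribute at runtime. A generic sketch of the pattern (illustrative only; not jsonspice's actual code or SpiceyPy's API):

```python
import math

_original_sqrt = math.sqrt  # keep a handle to the original

def traced_sqrt(x):
    """Wrapper that delegates to the original function it replaces."""
    # a real patch would do something useful here, e.g. translate inputs first
    return _original_sqrt(x)

math.sqrt = traced_sqrt     # the patch: callers of math.sqrt now get the wrapper
print(math.sqrt(9.0))       # 3.0, computed via the wrapper
math.sqrt = _original_sqrt  # restore the original when done
```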

Writing a commandline tool in #Fortran:
https://www.draketo.de/english/free-software/fortran

In case you want to broaden your #programming skills. Fortran seems to be making a comeback (currently #11 in the TIOBE index: https://www.tiobe.com/tiobe-index/fortran/ ).

This is from 2017, written at the end of my Physics PostDoc where I used quite a bit of Fortran and learned to appreciate it much more than I expected.
