Great video. Watch it!
(This is Prof. Ada Palmer @adapalmer)
Returning To Rails in 2026
https://www.markround.com/blog/2026/03/05/returning-to-rails-in-2026/#top
I love a good side-project. Like most geeks, I have a tendency to go down rabbit holes when faced with problems - give me a minor inconvenience and I’ll happily spend weeks building something far more elaborate than the situation warrants. There’s joy in having a playground to explore ideas and “what ifs”, building things just for the sheer hell of it; as Richard Feynman put it, “The Pleasure of Finding Things Out”.
Log4j, *the* project that escalated the need for funding open source in the first place, is currently being DoS’d by slop vulnerability reports. Well done everyone. Slow fucking clap.
I have a new technique for reliably vibecoding apps:
First, you write your requirements in an unambiguous specification language. This is the prompt, but to disambiguate it from less precise prompts, we will call it the source of truth encoding, or source code for short. You then feed it to an agent that will create a set of outputs by applying some heuristic-driven transforms that are likely (but not guaranteed) to improve performance. This agent compiles a load of information about how to transform the code into a single pipeline, so we’ll call it a ‘compiler’. This then feeds to the next agent that finds missing parts of the program and tries to fill them in with existing implementations. This is more efficient than simply generating new code and more reliable since the existing implementations are better tested. This agent has a knowledge base of existing code organised in groupings that I’ll refer to as ‘libraries’. It creates links in that web of knowledge between the outputs of the first agent and these existing ‘libraries’, and so we’ll call it a ‘linker’.
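Stripped of the joke, the pipeline above is just compile-then-link. A toy sketch of the two stages (every name here is made up for illustration; real compilers and linkers are vastly more involved):

```python
# "Source of truth encoding": an unambiguous specification of what we want.
source = ["x = add(2, 3)", "print_result(x)"]

# The 'compiler' agent: heuristic transforms that usually improve the output.
def compile_unit(lines):
    # A single made-up transform: constant folding, add(2, 3) -> 5.
    return [line.replace("add(2, 3)", "5") for line in lines]

# A 'library': a grouping of existing, well-tested implementations.
library = {"print_result": lambda x: print(f"result: {x}")}

# The 'linker' agent: resolve the program's missing parts against the library.
def link(lines, libs):
    namespace = dict(libs)  # link the library symbols into the program
    exec("\n".join(lines), namespace)

link(compile_unit(source), library)  # prints "result: 5"
```

The joke lands because the division of labour really is the same: one stage transforms the source, the other resolves what is missing against code that already exists.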
I think it might catch on. VCs: I think we can build this thing for only a couple of hundred million dollars! And the compute requirements are far lower than for existing agentic workflows, so we can sell it as a service and become profitable far sooner than other AI startups. Sign up now for our A round! We have a working proof of concept that can output the Linux kernel, LibreOffice, and many other large codebases from existing prompts!
From The Information, reporting on OpenAI's recently released long term revenue and profit projections:
"The most important element of the report was that in 2025, the cost of running AI models quadrupled, so the company’s gross margin fell to 33%, which is below the 46% it had expected."
But tell me again how the cost of inference is coming down, bro.

OpenAI recently hiked its revenue outlook for the next five years, predicting that it would generate about 27% more than previously forecast from sales of its ChatGPT subscriptions, AI models, and newer business lines such as advertising and hardware, according to financial forecasts. But it ...
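For the record, the arithmetic behind that margin drop: gross margin is (revenue − cost of revenue) / revenue. The dollar figures below are invented purely to match the quoted percentages; the report gives margins, not the underlying numbers:

```python
def gross_margin(revenue, cost):
    # Gross margin = (revenue - cost of revenue) / revenue
    return (revenue - cost) / revenue

# Illustrative figures only, chosen to reproduce the quoted margins.
assert gross_margin(100.0, 54.0) == 0.46   # the expected 46% margin
assert gross_margin(100.0, 67.0) == 0.33   # the reported 33% margin

# Against flat revenue of 100, a cost of 16.75 quadrupling to 67 would
# take margin from ~83% down to 33% - so the 46% -> 33% fall implies the
# quadrupled cost was partly offset by revenue growth.
assert gross_margin(100.0, 16.75) == 0.8325
assert gross_margin(100.0, 4 * 16.75) == 0.33
```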
“Most electronic shopping cart wheels listen for a 7.8 kHz signal from an underground wire to know when to lock and unlock. A management remote can send a different signal at 7.8 kHz to the wheel to unlock it. Since 7.8 kHz is in the audio range, you can use the parasitic EMF from your phone's speaker to ‘transmit’ a similar code by playing a crafted audio file.”
This sounds improbable but I needed to use it just now and it worked both times.
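For the curious: synthesizing a 7.8 kHz tone as a WAV file is trivial with the Python standard library. Note the post says the unlock signal is a *crafted* code, i.e. some modulation pattern on that carrier which isn’t specified here; this sketch only shows the bare 7.8 kHz carrier, which by itself is just an unpleasant beep:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # comfortably above 2 x 7800 Hz (Nyquist)
FREQ_HZ = 7800        # the carrier frequency the post describes
DURATION_S = 1.0

def tone_samples(freq, duration, rate):
    # 16-bit signed samples of a sine wave at the given frequency.
    n = int(duration * rate)
    return [int(32767 * math.sin(2 * math.pi * freq * t / rate)) for t in range(n)]

samples = tone_samples(FREQ_HZ, DURATION_S, SAMPLE_RATE)

# Write mono 16-bit PCM; the filename is arbitrary.
with wave.open("carrier_7800hz.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Whether a given phone speaker actually leaks enough parasitic EMF at 7.8 kHz to couple into the wheel is exactly the improbable-sounding part the post vouches for.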
What happens when a large open source project dies?