It's sort of amazing how quickly you can do things now. I wanted to try writing an alphazero-style AI for Azul. With no AI background, it took me maybe 2-3 hours of work (2-3 days wall clock) to beat the best AI I could find to play against:

https://danluu.com/game/tile/

I don't think the AI is superhuman, but I've just been training it on a CPU on my laptop, and it's not bad and measurably better every few hours, so maybe it will get there if I just let it run for longer (or if I get a real workstation).

I don't want to overstate the case: I saw someone who vibe coded the same project and then declared programming dead after they finished, but their bot loses to plain MCTS with a simple heuristic.
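For anyone unfamiliar with the baseline here, this is roughly what "plain MCTS" means: UCT selection, random playouts, pick the most-visited move. The sketch below is purely illustrative and isn't my code or anything Azul-specific; it uses a tiny Nim game (take 1-3 stones, taking the last stone wins) as a stand-in so it fits in a post, and it uses random rollouts rather than the heuristic:

```python
import math
import random

def legal_moves(n):
    # Nim: take 1, 2, or 3 stones from a single pile
    return [m for m in (1, 2, 3) if m <= n]

class Node:
    def __init__(self, stones, to_move, parent=None, move=None):
        self.stones, self.to_move = stones, to_move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(stones)
        self.wins, self.visits = 0, 0

    def uct_child(self, c=1.4):
        # Standard UCT: exploit win rate, explore rarely-visited children
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, to_move):
    # Random playout to the end; whoever takes the last stone wins
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        to_move = 1 - to_move
    return 1 - to_move  # the player who just moved took the last stone

def mcts(stones, to_move, iters=2000):
    root = Node(stones, to_move)
    for _ in range(iters):
        node = root
        # Selection: descend through fully expanded nodes
        while not node.untried and node.children:
            node = node.uct_child()
        # Expansion: add one untried child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # Simulation
        winner = rollout(node.stones, node.to_move)
        # Backpropagation: credit a node when the player who moved
        # into it (node.parent.to_move) is the winner
        while node.parent is not None:
            node.visits += 1
            if winner == node.parent.to_move:
                node.wins += 1
            node = node.parent
        node.visits += 1  # count the root visit too
    # Play the most-visited move, the usual robust choice
    return max(root.children, key=lambda ch: ch.visits).move
```

Even this, with no heuristic at all, plays small games well: from a pile of 5 it learns to take 1 and leave the opponent a losing position. The "simple heuristic" version just replaces the random rollout with something smarter.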

Mine was in the same state when I just had an LLM running in a loop with instructions to improve the result. At least for now, you have to apply some direction, but it turns out that someone with no AI background can supply enough direction.

@danluu Can you share any details about your workflow?

Were you really running the LLM on CPU on your laptop? Or was the LLM generating code for a separate machine-learning system that ran on-laptop?

@dynomight Sorry, that was unclear! I used GPT-5.2 to generate code, which ran on my laptop. (GPT-5.2 seems to give you the most quota, and I did this in a way that used a ton of quota; even on the Pro plan, I had to throttle my usage.)

I'm sort of in the stone ages with my setup (the codex plugin in vscode). I wouldn't recommend it to anyone because the plugin is so buggy, but it works well enough if you want to queue up a bunch of work and occasionally check in on what it's doing.