@Tirial

83 Followers
120 Following
1,030 Posts
Talk Hard
Degrees: PhD (How We Hear), MSc (Making Cool Sounds), BEng (Computer Stuff)
ORCID: https://orcid.org/0000-0002-6607-5262
@gruber Have you ever considered (or even done) getting custom keycaps for your Extended Keyboard II, just for the Option key, and just to remove the Alt?
@gruber @jsnell I’m 8 minutes into the talk show, and this is already my fav podcast episode of the year. You should do more together.
@caseyliss @siracusa @marcoarment I have three dots (...) programmed in my keyboard replacements to swap to an ellipsis (…). I put a few things in there, like № and ≈, so if I want to type them on my phone, I can do so easily. Some I have in there but only use on the phone (≠ and ±), because there are Option-key replacements for them. The only real issue I find is that some programs refuse to honour them (MS Word is the worst), and I prefer Matlab not to do it (it doesn’t), because three dots has meaning in Matlab: it’s the line-continuation operator.
@jsnell Just a bit of feedback on this week’s Upgrade. I don’t think you can really consider Apple/the USA the big leagues anymore. 20 years ago, absolutely. As a young engineer at BlackBerry, I dreamed of working for Apple. Today, as a uni lecturer, I get recruited to come to the USA a fair amount (not by Apple), with more resources and money than I could ever have here. There just isn’t any way I would go. There are still some, even many, who would, but far fewer than ever before.

I will concede this is an annoyingly good answer to my snarky question.

“Route 66” can reasonably be pronounced “rowt” or “root”. But the point is still fair.

@Tirial I appreciate the point about working with limited resources, but I don't think scientists versus engineers is the right framing for it. With few exceptions, scientists are on very tight budgets and have to be very creative to build the tech they need in a resource-constrained way. I also don't find current machine learning 'scientific', in the sense that there is very little focus on 'understanding'. Their benchmarks are very unscientific.

The big AI companies are building out as though big breakthroughs will keep coming regularly, and are trying to argue that their build-out is sustainable. It won’t be.

Before long, the company that best understands it needs to make the best model within the constraints it has is going to win. I kind of suspect it will be Google, but Anthropic has a pretty good case too.

Now, I realise that LLMs still kind of suck at their jobs. We need the models to get much better at what they do. But the gain in efficacy of LLMs over the last year really isn’t all that much. From 2021 to 2022 they got a lot better. In 2023 they got a lot better still. In 2024, a little better. And 2025? Now they are getting only a little bit better, much more slowly, but much more regularly and predictably. They will continue to improve, but not by the leaps and bounds we saw before.
DeepSeek came up with a model that can run on a Mac Studio. It doesn’t need tens of thousands of the absolute best chips from NVIDIA, because they literally can’t get them—they are banned from buying them. So they made the best they could with what they have.
So, back to AI. OpenAI, and Meta especially, are just throwing as much data and compute as they can at the problem. They do not care about making the model more efficient, only about making it better. With the help of the business bros, who care about making more money and literally nothing else, they just keep expanding. They are unconstrained.