Michael Hannemann

157 Followers
1.2K Following
444 Posts
Programmer, sci-fi reader, slow videogame player, lucky enough to have the mythical Lisp Day Job. Columbus, OH. He/him. @mhannemann.bsky.social too.

I'm pausing the latest #AccidentalTechPodcast member special on Computer Science/Engineering curriculums to say, hi! Long time listener, working Lisp programmer, we do exist in the wild. Granted, if we were starting this project today, it would undoubtedly be in Python or something else, but I've got a very comfortable, productive development environment for the work I'm doing.

Regarding teaching languages - the reason you might use a Scheme or a Racket is that everything is simpler, both in execution and in the actual language facilities provided. You don't have to worry in the same way about version changes (are you on the right version of Python?), path setups, or including the right header files. You won't have a million ways to solve the same problem, and you won't have a large library of prebuilt objects and functions that might solve half the assignment for you. If you're teaching basic computer science concepts, using slimmed-down, easier-to-use languages like these makes a lot of sense.

(Common Lisp, for the record, is not like this, it's got an enormous library and many different ways to solve the same problem. I wouldn't consider it a teaching language. And, per John in the show, my first CS class was in Pascal.)

Also, Lisp-derived languages, and Python & Ruby too, have fully functional live interaction environments in their read-eval-print loops (REPLs). IMO it is both more satisfying and more productive to be able to recompile a single function and then execute your next command in the REPL without having to recompile the whole program. It lends itself much more to ad-hoc testing and experimentation.
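To make that concrete, here's a toy sketch of the workflow (in Python, since it shares the property; the function names are made up for illustration). In a live session you redefine just the one buggy function, and the very next call picks up the new definition - no stopping, rebuilding, or restarting the program:

```python
# Hypothetical REPL session: define a function, notice a bug,
# redefine only that function, and keep going.

def price_with_tax(price):
    # First attempt: oops, forgot to apply the tax at all.
    return price

print(price_with_tax(100))  # prints 100

# In a batch-compiled workflow you'd now edit, recompile, and rerun
# everything. In a REPL you just type a new definition; the old one
# is replaced in the running image:

def price_with_tax(price, rate=0.07):
    # Fixed version, with an assumed 7% rate for the example.
    return round(price * (1 + rate), 2)

print(price_with_tax(100))  # prints 107.0
```

Any state you'd built up in the session (loaded data, test objects) survives the redefinition, which is what makes the ad-hoc experimentation loop so fast.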

Finally, as to the discussion on teaching fundamentals vs teaching particular languages - I once knew a VB.NET programmer who was perfectly adequate most of the time, but did not understand pointers and did not have a fundamental understanding of how CPUs and memory worked. While this rarely affected his day-to-day work, I remember there was one bug in particular that he couldn't begin to address because it required deeper understanding of what was really going on when code ran.

@siracusa @marcoarment @caseyliss

Does anyone ever actually use Apple's Books app on the desktop? Asking, because here's the grid view, and then the list view. Why, why isn't the list view all text and much more information dense?
Most immediately actionable info from today's Accidental Tech Podcast: there's a new Chvrches album! Thank you, @siracusa .
That madman @ZachWeinersmith predicted vibe coding 14 fucking years ago https://www.smbc-comics.com/?id=2362
Saturday Morning Breakfast Cereal - 2011-09-08

@marcoarment - I just received my Incase version of the "Microsoft Sculpt" keyboard. So far, so good. Are you going to try it?

@atpfm re app review summaries: am I the 500th person to say that Amazon has been doing customer review summaries for some time now?

re reading glasses for computers: progressive lenses were _awful_ for my 27" monitor, I felt like I was sitting in the front row at the movie theater, craning left & right to try and see the corners of the screen in sharp focus. Dedicated "computer glasses", which are like reading glasses except focusing at arm's length, have been so much better.

I take care of myself the same way I take care of plants.

Not enough sunlight, often forget to water.

And relatedly, today I learned a useful term: "slop". "*Slop* describes AI-generated content that is both unrequested and unreviewed."

I say this is related because Google search claimed, right at the top, that I can probably only store 30 files because I'm at the limit of my 5GB default file store. Google, no. I have 1.59TB left.

Term via: https://simonwillison.net/2024/Dec/31/llms-in-2024/#the-environmental-impact-got-better

Things we learned about LLMs in 2024 (Simon Willison's Weblog)