After a long while, I bought a (DRM-free) ebook again
(Manning currently has a 35% discount so go get it and maybe read it along with me? https://www.manning.com/books/the-programmers-brain)
The Programmer's Brain - Felienne Hermans

With this unique book learn how to optimize your brain’s natural cognitive processes to read code more easily, write code faster, and pick up new languages in much less time.

Manning Publications
Wait‼️ Why am I just now learning that @Felienne wrote a paper titled “Programming is writing is programming”⁉️ That’s so obviously my jam! https://dl.acm.org/doi/10.1145/3079368.3079413
“Research has shown that when code contains comments, programmers will take more time to read it.”

‘Beacons are parts of a program that help a programmer understand what the code does. You can think of a beacon like a line of code, or even part of a line of code, that your eye falls on which makes you think, "Aha, now I see."

Beacons typically indicate that a piece of code contains certain data structures, algorithms, or approaches.’
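An example in the spirit of the book (my own, not Hermans’s): the swap line below is the beacon. Your eye lands on it and goes “aha, this is a sort” before you’ve read anything else.

```python
# Toy example: the swap is the beacon that signals "sorting algorithm".
def bubble_sort(xs):
    xs = list(xs)  # work on a copy, leave the input alone
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]  # <- beacon: the classic swap
    return xs
```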

Turns out that all that time I spent “making the code pretty” I was just making it understandable. The question is whether what I consider pretty/understandable is the same as what other programmers would call pretty/understandable

“after taking a course on design patterns, the time participants needed to maintain code was lower for the code with patterns but not for the code without patterns. The results of this study indicate that gaining knowledge about design patterns, which is likely going to improve your chunking ability, helps you process code faster. You can also see in the graphs that there is a difference in effect for different design patterns: the decrease in time is bigger for the observer pattern than for the decorator pattern.”

I have no idea what the decorator pattern is, but I’m a fan of the observer and I sprinkle it everywhere whenever I can.
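For anyone who, like me, learned the pattern long before its name: a minimal observer sketch (the names here are mine, nothing from the book). A subject is basically a list of callbacks that get notified on every event.

```python
# Minimal observer pattern: observers subscribe, the subject notifies them.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def publish(self, event):
        for callback in self._observers:
            callback(event)

seen = []
events = Subject()
events.subscribe(seen.append)   # an observer can be any callable
events.publish("state changed")
```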

A tangent: I think this is the magic of functional programming: you have some well-defined concepts that you can mix and match. They’re hard to learn because they are unfamiliar, not because they are difficult, but once you do learn them, it’s easier to reason about stuff that uses them

“In summary, you remember the longest if you study over a longer period. That doesn't mean you need to spend more time studying; it means you should study at more spaced-out intervals. […] This is, of course, in stark contrast with formal education, where we try to cram all the knowledge into one semester, or with bootcamps that seek to educate people in three months.”
“[…] recent research indicates that people never really forget memories—and that it’s the retrieval strength of memories that decays over the years”

In the age of so-called AI, isn’t it weird that I’m reading a book that teaches you how to use refactoring and state tables for the sole purpose of understanding?

Weird is good though

The point of the chapter though is that readability and maintainability aren’t the same, and readability depends on the experience of who is reading.

So, for example, I tend to dislike loops now; I think they add too much noise to what can usually be just a map + a filter call, or a reduce, or whatever. But before I started learning Haskell, any code using higher-order functions was overly confusing to me.
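To make that concrete, here’s the same tiny computation both ways (my own toy example): the loop carries bookkeeping noise, while the map + filter version states the intent directly.

```python
numbers = [1, 2, 3, 4, 5, 6]

# Loop version: mutable accumulator, explicit append, nested condition.
doubled_evens_loop = []
for n in numbers:
    if n % 2 == 0:
        doubled_evens_loop.append(n * 2)

# Higher-order version: "keep the evens, double them", in one expression.
doubled_evens_hof = list(map(lambda n: n * 2,
                             filter(lambda n: n % 2 == 0, numbers)))
```

Both produce `[4, 8, 12]`; which one reads as “less noise” depends entirely on which vocabulary you’ve already chunked, which is the chapter’s whole point.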

I’ve been rejecting any attempt to make languages easier to learn or more familiar based on my own experience. Learning is uncomfortable and takes time. There’s no shortcut. If you wanna help beginners, don’t be an asshole when they ask stuff. That’s 100 times better than adding braces to an ML language (yes, I’m still mad at ReasonML)

I have to go, but what a cliffhanger: “Sajaniemi argues that with just 11 roles, you can describe almost all variables”
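Before I actually leave, I couldn’t resist sketching a few of the roles as I understand them so far. The role names are Sajaniemi’s; the toy example is mine:

```python
# A few of Sajaniemi's variable roles, annotated on a trivial function.
def summarize(xs):
    total = 0       # "gatherer": accumulates a result as the loop goes
    best = xs[0]    # "most-wanted holder": the best value found so far
    for x in xs:    # x is a "most-recent holder": the latest element seen
        total += x
        if x > best:
            best = x
    return total, best
```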

This book is making me double down on the idea that programming sucks because programmers suck, not because programming languages suck. We approach the discipline in a way that isn’t rigorous at all. We call 100% coverage rigor, and that’s laughable.

If you think an LLM can write code that’s as good as or better than yours, that’s on you

“When reading entirely unfamiliar code, I find it helps to print out the code on paper or save it as a PDF that I can annotate. I realize it may feel weird to read code outside of the IDE, and you will certainly miss some features, like being able to search through the code. However, being able to write notes can deepen your thinking about the code, enabling you to interact with it on a different level.”
“The focal point of code is an important notion when reading code. Simply put, you have to know where to start reading. Some frameworks and techniques, like dependency injection frameworks, can fragment focal points so that they are far apart and hard to link together.”
“While most people Seibel interviewed said that reading code was important and that programmers should do it more, very few of them could name code that they had read recently. Donald Knuth was a notable exception.”
‘Have you ever asked a computer to do something, like "Please work this time?" Even though you know a computer is not a sentient being and cannot listen to you, you might still hold a mental model of a computer as an entity that can decide to act in your favor.’ This book has shown its age 😅
‘You might think that when you learn how something works in more depth, the old, "wrong" mental model is removed from your brain and replaced by a better one. However, in previous chapters we have seen that it is not likely that that information disappears completely from the LTM. […] So, especially in a situation of high cognitive load, you might suddenly use an old model.’
“[…] you should be careful about describing programming concepts and the corresponding workings of the computer in terms of objects and operations in the real world. While these metaphors can be valuable, they might also create confusion, especially since old mental models can remain in the long term memory and occasionally pop up in the working memory too.” Thank you‼️
“What you can assume people will know is not fixed in time and place, of course. When explaining a concept, it is therefore important to choose a comparison that the person you are explaining it to will be familiar with. For example, when explaining a computer's functionality to children in rural India, some educators have used elephants as computers and their trainers as the programmers since that is a principle familiar to the children.” 🤯
“Because misconceptions are held with such high confidence, it can be hard to change someone's mind about them. Often, it is not enough to point out the flaw in their thinking. Instead, to change a misconception, the faulty way of thinking needs to be replaced with a new way of thinking.”

“The process of replacing a misconception based on a programming language you already know with the right mental model for the new language you are learning is called conceptual change. In this paradigm, an existing conception is fundamentally changed, replaced, or assimilated by the new knowledge.” Can’t stop thinking about “type classes are like interfaces”. No.

Edit: it proceeds to explain that learning Java after Python may lead to misconceptions, and then you have Haskellers swearing there’s nothing wrong with saying that type classes are interfaces. Truly, Haskell is hard for the same reason that Nix is hard

“Use tests and documentation within a code-base to help prevent misconceptions.”

There’s nothing more to be said about how and when to comment and test code

“[…] when you start a new project, you might want to take extra care in choosing good names, because the way you create names in the early stages of a project is likely going to be the way the names will be created forever.” Vindication
The best thing about this book is that every single stupid mistake I’ve made in the past is now revealed as very common, and just due to the interaction of bad code and human brains
I’m loving the idea that code can cognitively overload you, something that I’m pretty sure we have all experienced. And so, instead of talking about good or bad code, we can just talk about how much cognitive load it has, and try to reduce it with techniques that are already documented

No wonder this scale has been criticised; we need to come up with our own:

0. Very, very low mental effort
1. Very low mental effort
2. Low mental effort
3. Rather low mental effort
4. Neither high nor low mental effort
5. Rather high mental effort
6. High mental effort
7. Very high mental effort
8. Very, very high mental effort

(The numbers weren’t in the source)

“Research shows that experts especially rely heavily on episodic memory when solving problems. In a sense, experts recreate, rather than solve, familiar problems. That means that instead of finding a new solution, they rely on solutions that have previously worked for similar problems.”
Reached the part of the book where the generation of explicit memories is explained. The very annoying part is that math skills are described as such, and so you get better at them by practicing instead of being born with some talent for them. The talent thing is pretty much what I thought when I was a kid, because unlike other subjects I sucked at math, and so I never properly learned anything
«[…] automatization is complete when you fully rely on episodic memory and do not use any reasoning at all. This automatic performance of tasks is quick and effortless because retrieval from memory is faster than actively thinking of the task at hand and can be done with little or no conscious attention. If you have fully automatized a task, you will also feel no need to go back and check your work, which you might be tempted to do when completing a task by reasoning.» Daaaaaang! I knew we were giving «reasoning» too much credit!
«When you are performing the activity of exploration, you are in essence sketching with code. You might have a vague idea of where you want to go, but by programming you gain clarity about the domain of the problem and about the programming constructs you will need to use.» Programming is writing is programming
Damn!

This book pretty much confirmed what I’ve been suspecting for a long, long time: any idiot can be a programmer, which explains why I, a fucking idiot, am a programmer. It has nothing to do with intelligence, or not in the way people usually think about intelligence. You don’t have to be smarter than the average person, you just have to practice.

I’ve been fascinated by computers for a long time, so I’ve spent a huge amount of my time dealing with them, which is another way to say that I have practice.

I’m pretty dumb though; it takes me a long time to learn things, but when I’m motivated that just gives me more practice

«[…] when we discuss different libraries, frameworks, modules, or programming languages, we can also discuss what they do to your brain rather than your computer.» OK, this right here… I can’t describe the joy it brought!

«If a codebase or programming language is very strict (for example, using types, assertions, and post conditions), it can be hard to use code to express a thought. We then say this tool has low provisionality. Provisionality is an essential factor in learnability because expressing vague ideas and incomplete code might be needed if you are a beginner in a certain system. Thinking of a plan for your code while also thinking about types and syntax can cause too much cognitive load in beginners.»

I consider this a division we can’t surmount. Some languages will always be hard to learn, and any attempt to make them easier (more provisional) will remove some constraint that IMO helps with thinking

This is completely irrelevant, but I would enjoy this book more if all the examples of hard-to-understand things were in Haskell instead of Python. That’s probably not the case because Python is a mainstream language, but damn, Haskell has so much more material to work with
«One of the things I take away from this book is that confusion and feeling cognitively overwhelmed is fine, and is part of life and learning. Before I knew all I know about cognition, I used to get upset with myself for not being smart enough to read complicated papers or explore unfamiliar code; now I can be kinder to myself and say, "Well, maybe you have too much cognitive load."»
Well, that was time and money well spent. I do have lots of thoughts. Thank you @Felienne!

@RosaCtrl

I should note that one's capacity for cognitive load decreases with age. Young children can withstand a ton of it. That's why we send them to school at such a young age. That's also why young kids learn programming relatively easily (provided, of course, that they're curious and nerdy enough to want to).

I do dearly miss having a young brain…

@argv_minus_one here I’m more sceptical. I experienced what you describe myself, as I’m mostly self-taught. But I think the difference was time. And I didn’t have to worry about buying an apartment or keeping on renting, or things like that.

Of course we see how cognitive capacity decreases dramatically with age, but I believe it’s mostly because we don’t get to experience what kids do; for example, few of us can keep studying after we get one degree. But I’m pulling things out of my ass here. This is mostly a hope rather than something I’ve researched

@RosaCtrl

You can kinda-sorta mitigate this problem with helpful compiler diagnostics. Rust, for example, tries really hard to tell you why your code is wrong and what you can do to fix it.

But that's only a kinda-sorta mitigation. Rust is very much not a provisional language. And the compiler's advice isn't always correct, either.

@argv_minus_one yeah, it’s interesting because later Idris is mentioned as a language that’s good at progressive evaluation, which is a related idea. That makes me think that maybe the reason languages aren’t better here is that we don’t pay that much attention to how they actually impact cognition. Maybe in the future, easy to learn won’t be at odds with advanced techniques

@RosaCtrl Same here.

I think that the problem is at the source tho, ie that we think that "intelligent" and "idiot" are good ways to describe people.
Being able to code, as you put it, just means that someone has put a lot of time into learning to "speak computer".

It's like being able to speak many human languages: it can take a lot of effort, but it's something all humans can do.

@forse yeah… I embraced the word idiot as a reaction though. Like, if smart is synonymous with Silicon Valley employee, I would rather be an idiot
@RosaCtrl I'm going to use a phrase you dislike but... Correctly naming functions, types, and variables is software engineering 101. Who doesn't agree with spending the right amount of time to do it? 🤔
@jenesuispasgoth the industry
@RosaCtrl I suppose there's the problem of naming "fast enough" to get on with the project, but... 😬

@RosaCtrl re type classes - i agree it’s maybe confusing if you think of interface as an oop thing. But type classes are collections of purely abstract functions for which you can demand or provide implementations. They’re contracts. They’re APIs. All these things are synonymous to interfaces.

It’s confusing if all you know is OOP interfaces, because this adds the subtyping constraint. And that’s a good example of how transfer learning can lead you astray.

Haskell goes one step further by requiring, by convention if not compilation, that there’s a single implementation of a type class for a given type. So - collections of purely abstract functions that you provide implementations for on a per-type basis. Without more context, i wouldn’t be able to tell you if you mean type classes or oop interfaces.

RE: https://social.vivaldi.net/@RosaCtrl/116311373431116257

@NicolasRinaudo the thing is that if you add protocols to the mix you can do better. Fun fact! I do remember some Haskellers complaining on Twitter that Apple called these protocols instead of type classes. But with protocols you get both what a Haskeller is used to and what a Java programmer is used to! (Unless I’m missing something about Java; to be fair I don’t know that much Java. Or C#)

@NicolasRinaudo and the other thing is that most programmers don’t see the implementation, but the concept in use, which is kind of the point of the chapter of misconceptions
@RosaCtrl but in that case they’re even MORE interfaces. If you disregard implementation details, they’re exactly “the specifications of how you interact with some things”. Interfaces.
@NicolasRinaudo yeah, with the internal state of the thing, which doesn’t make sense in Haskell at all

@RosaCtrl but that’s an implementation detail!

Also, it does - it’s just immutable in Haskell. Like in Java, for some classes. Strings are immutable and implement the Comparable interface.

@RosaCtrl the only distinction (on the state thing) is that type classes are declared separately, where interfaces are tightly bound to class instances.

@NicolasRinaudo OK, I’m getting ready for the gym so I wasn’t clear 😅 When I said that most programmers don’t see the implementation, I meant that when you learn Haskell you don’t have to learn how type classes are implemented. And maybe other languages will implement them in a different way.

And when I mentioned state, I meant that when you learn interfaces in OOP you most likely learn how to implement methods that touch the internal state of the objects implementing the interface.

What I’ve seen is that people, including me, get super confused by “type classes are interfaces”, because in the OOP sense, they are not. The things may be similar, but the differences are enough to keep us confused for years.

Unless I’m wrong and actually only some people get confused, which I highly doubt, but I could accept it if confronted with the right evidence

@RosaCtrl no i think lots of people do get confused. My point is that they are exactly interfaces, but people build mental models for them that are highly specialized to the implementation details of the language they’re using, and these don’t transfer from oop interface to type classes. It’s a great example of transfer learning going wrong.
@NicolasRinaudo then I agree. If you mean “interface” as a general concept, sure. But “they are just like Java interfaces”? Heck no
@RosaCtrl they are a different implementation of the same concept. Which is true, but maybe not as useful as people assume for building a mental model

@RosaCtrl i think if the explanation went:

- this is what an interface is.
- do you see how oop interfaces are actual interfaces? (This removes the confusion with subtyping and state which are not part of the concept)
- here’s another way of doing interfaces

But people skip the first point, and so you try to bring in unrelated concepts and can’t find them, hence the confusion.

@NicolasRinaudo yeah. Pretty much same problem as with coroutines BTW

@RosaCtrl understanding the similarity means knowing both has allowed you to create a more abstract mental model and to see how each is a specialisation of that model. And THEN the third time you encounter them with different implementation details, maybe transfer learning will help.

If you think about it it’s really just category theory :)

@RosaCtrl i don’t know enough about protocols to weigh in on this, i’m just commenting on “type classes are not interfaces”. They are - in Scala you actually define them with literal interfaces - but it’s confusing because of the OOP baggage associated with them, subtyping. Which i find interesting in the context of the book you’re reading