Sit on your hands, you isn't-that-just-ers, while I sing you a hit from Greece...

A monoid is a way of making *one* thing from some *number* of things.

If the number is one, you're laughing.

If the number is zero, you'd better have a thing ready to go. (How do you turn the empty list of things into a thing?)

If the number is more than one, you need a way to pick any two neighbouring things (let's face it, you could be given exactly two things) and combine them into one.

It shouldn't matter which two neighbours you pick to simplify the problem, and it shouldn't matter if you obfuscate the problem by combining some things with no more things.

Basically, there should be a way to splat a bunch of things into a thing, and it should not be subtle. (Defining unsubtlety may be subtle.)

So you need a thing that corresponds to *nada*, and you need a way to *gada* two things to make a thing.

Make sure that

mada gada nada is mada

and

nada gada fada is fada

and (more dramatically)

(mada gada fada) gada rada is mada gada (fada gada rada)
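The three laws above can be sketched in Python, taking the list monoid as a concrete instance (the names `nada`, `gada`, `mada`, `fada`, `rada` are this post's vocabulary, not anything standard):

```python
# A minimal sketch of the monoid laws, using lists as the things.
# nada is the thing from no things; gada makes one thing from two.

nada = []                      # the thing corresponding to *nada*

def gada(a, b):                # the way to *gada* two things into one
    return a + b               # list concatenation

mada, fada, rada = [1], [2, 3], [4]

# mada gada nada is mada
assert gada(mada, nada) == mada
# nada gada fada is fada
assert gada(nada, fada) == fada
# (mada gada fada) gada rada is mada gada (fada gada rada)
assert gada(gada(mada, fada), rada) == gada(mada, gada(fada, rada))
```

Swap in a different `nada` and `gada` and the same three assertions are what you must check.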

(In case you're wondering, Stanley Baxter is still alive. If you're not wondering, that's because you need to google Stanley Baxter right now.)

There are lots of examples.

Lists, where nada is empty and gada is concatenation.

Numbers with zero and addition give the same example, for lists of things you can't tell apart.

Numbers with one and multiplication give a mighty subtle version of the same example if you're multiply barrelled and Euclid.

I like that there are four monoids on the Booleans. Once you pick nada, you've chosen three-quarters of the truth table for gada. You may choose the remaining output freely. Two 1-bit choices; four monoids.
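That counting argument can be checked mechanically. Here's a sketch in Python (the encoding of the forced truth-table entries is mine, not from the post):

```python
from itertools import product

# Enumerate the monoids on the Booleans: pick nada, and the identity laws
# force three entries of gada's truth table; only gada(other, other) is free.
monoids = []
for nada in (False, True):
    for free in (False, True):
        def gada(a, b, nada=nada, free=free):
            if a == nada:
                return b       # nada gada b is b
            if b == nada:
                return a       # a gada nada is a
            return free        # the one free choice: gada(other, other)
        # check associativity holds for this table
        assert all(gada(gada(x, y), z) == gada(x, gada(y, z))
                   for x, y, z in product((False, True), repeat=3))
        monoids.append((nada, free))

assert len(monoids) == 4       # two 1-bit choices; four monoids
```

The four you get are the familiar suspects: or and xor (with `False` as nada), and and iff (with `True` as nada).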

Now, for every such monoid, there's a kind of diagram we can draw: a network where the nodes are things and an arrow from one node to another is labelled with the thing the source thing has to gada to get to the target thing.

If we're talking about concatenating strings, that's

Scun -[thorpe> Scunthorpe

Round about now is when we normally see the "Plotkin Leap". That's when the kind introduction switches off like a light, and, plunged into darkness, you're lined up for a proper chibbing which arrives directly. If you needed that introduction, you can't handle the rest of the paper. What to do? But I digress.

The associativity of gada ensures that whenever there's an indirect path, there's a direct path.

Scun -[th> Scunth -[orpe> Scunthorpe
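In Python, with string concatenation as gada, the two-step path and the direct path agree precisely because concatenation is associative:

```python
# Associativity of concatenation: gada the two arrow labels into one, and the
# indirect path Scun -[th> Scunth -[orpe> Scunthorpe composes into the
# direct arrow Scun -[thorpe> Scunthorpe.
label = "th" + "orpe"
assert ("Scun" + "th") + "orpe" == "Scun" + label == "Scunthorpe"
```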

So, for additive numbers, we have things like
2 -[5> 7 <3]- 4
and it becomes an interesting question to consider whether there is a best x (with a y and a z) such that
2 <y]- x -[z> 4
which makes the diagram coherent. I.e., given that we have
x -[y> 2 -[5> 7
and
x -[z> 4 -[3> 7
we need y+5 = z+3.

But what is "best"? The best is the one you can get to from all of them. Here, the best x is 2, with y=0 and z=2. That's to say x is the "minimum", with y and z recording by how much.
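A sketch of that additive "best below" in Python (the function name `best_below_add` is made up for illustration):

```python
# Additive "best below": the best x below both targets is their minimum,
# with y and z recording by how much each target exceeds it.
def best_below_add(a, b):
    x = min(a, b)
    return x, a - x, b - x     # x, y, z with x + y == a and x + z == b

x, y, z = best_below_add(2, 4)
assert (x, y, z) == (2, 0, 2)
assert x + y == 2 and x + z == 4
assert y + 5 == z + 3          # the coherence condition from the diagram
```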

Note how tricky it is to compute x, y and z in most high-level programming languages, and how much easier it is in most assembly languages. Weird, huh?

You don't get to ask "which is smaller and by how much?" as one thing, despite the fact that comparison and subtraction require much the same machinery. Now, why is that? Sorry, I shouldn't get distracted by things which might be important.
Anyway, you can play the same game with multiplicative numbers
12 -[5> 60 <6]- 10
and you can ask the same question. Is there a best
12 <y]- x -[z> 10
which results in a coherent diagram, i.e. that y*5 = z*6?

And yes there is, x=2 with y=6 and z=5.
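That multiplicative "best below" is the greatest common divisor. A sketch in Python, leaning on the standard library's `math.gcd` (the function name `best_below_mul` is illustrative):

```python
from math import gcd

# Multiplicative "best below": the best x below both targets is their gcd,
# with y and z the cofactors recording by what factor each target exceeds it.
def best_below_mul(a, b):
    x = gcd(a, b)
    return x, a // x, b // x   # x, y, z with x * y == a and x * z == b

x, y, z = best_below_mul(12, 10)
assert (x, y, z) == (2, 6, 5)
assert y * 5 == z * 6          # coherence: both paths land on 60
```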

But how do you compute x (and y and z, what about them?)?

Well, let me tell you a thing. Before 300BC, bars of Ritter Sport chocolate were not reliably quadratisch...

Euclid used to work for Ritter, back in the day. The production line reliably delivered *rectangular* chocolate bars, but Ritter insist on selling *square* chocolate bars. Euclid was the engineer they put in charge of resolving the discrepancy.
So, what Euclid, and later his minions, would do is to pass through quality control those rectangular bars which happened to be square, but for the rest, to export the largest available sub-bar, by cutting off a square piece, then continuing with the residue.
That's to say, he solved a "best below" problem in the *additive* structure in order to make progress on a "best below" problem in the *multiplicative* structure. Anyone would think multiplication was iterated addition!
Honestly, where would we be if the Ancient Greeks hadn't had chocolate?
Meanwhile, if we knew how to compute Euclid's algorithm by iterating compare-subtract from the definition of multiplication as iterated addition, we'd be chuffed.
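Euclid's square-cutting, sketched as iterated compare-and-subtract in Python (the framing is the post's; the function name is mine):

```python
# From an a-by-b rectangle of chocolate, repeatedly cut off the largest
# square and continue with the residue; stop when what's left is square.
def euclid(a, b):
    while a != b:              # not yet quadratisch
        if a > b:
            a = a - b          # cut a b-by-b square off the long side
        else:
            b = b - a
    return a                   # the side of the squares Ritter can sell

assert euclid(12, 10) == 2
assert euclid(60, 25) == 5
```

Each compare-subtract is an additive "best below" step; the loop as a whole delivers the multiplicative one.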