I've used Spacemacs for some years now as my primary editor, but recently I've been looking into #neovim.

It starts up a lot quicker, even with a bunch of plugins, and surprisingly for a terminal editor it has a more sophisticated UI than windowed Emacs.

@weavejester I’ve been using #vim for nearly 30 years :facepalm:. #neovim has re-wired my workflow.
Yes, #emacs fans will tell you the same about their #editor — but these tools aren’t just rivals, they represent opposing philosophies. Vim/Neovim: speed, composability, the Unix ethos. Emacs: the “OS inside your editor,” everything bundled in. Both have strengths, but they reveal two very different ways of thinking about code, control, and workflow.

@demiguru

I understand that Emacs is designed to be invoked once, and one lives in it until the end of the session, while vi is designed to be invoked as needed.

But on today's computers, Emacs is usually also very fast, whether starting up, opening files, or processing something.

I understand that Emacs does not follow the Unix philosophy at all.

But in a sense, Emacs is nothing but an Elisp interpreter with some additions to aid text editing.

All of those Elisp packages are not Emacs. They are just Elisp packages running on Emacs, and they are usually highly specialized tools.

I understand composability to mean that each tool can be combined with others to build a more complex solution, such as through piping and redirection.
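To make that concrete, here is a minimal sketch of piping small, single-purpose tools together; each command does one job and the pipe composes them:

```shell
# tr splits words onto separate lines, sort groups duplicates,
# uniq -c counts each group, sort -rn ranks by count.
printf 'foo bar foo baz\n' | tr ' ' '\n' | sort | uniq -c | sort -rn
# The most frequent word ("foo", count 2) comes out first.
```

None of these programs knows about the others; the shell's pipe is the only interface they share.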

In a sense, Elisp packages are just functions; we can use them in our own Elisp code. But I know that is different from Unix composability.

#GNUEmacs #Emacs

@weavejester

@restorante @demiguru @weavejester

Vi isn't composable either. sed, ed, and awk are composable.

That argument is out of context when talking about things with user interfaces.

I would say that emacs is ultimately more composable than vi, vim, or neovim. Emacs can integrate easily with other systems and tools.

That's the thing that it does amazingly well. Just because it's a super power doesn't mean that it's not within the philosophy.

The extensibility of #Emacs through integration with other tools is off the charts. That to me is completely within the Unix philosophy. I've been a Unix dev for 45 years. I think I know.

@Zenie @restorante @weavejester The claim that #Emacs is “more composable” than vi/#vim/neovim rests on stretching the meaning of composability. What Unix traditionally meant by composability was the ability to take simple, single-purpose tools and connect them together via well-defined interfaces (stdin/stdout, pipes, files). By that definition, ed, sed, and awk are indeed composable—they were designed to slot into pipelines without friction.

@Zenie @restorante @weavejester Vi (and later vim/neovim) may not fit that classical mold perfectly, but they do adhere to it in a limited way:

- They operate directly on plain text files (not opaque binary state).
- They can be scripted through ex commands.
- They can be invoked non-interactively to perform transformations (vi -es).

That’s composability in the Unix sense: predictable input/output behavior and scriptable interfaces.
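As a concrete sketch of that non-interactive, scriptable use, assuming a vi installation that provides the standard `ex` entry point:

```shell
# ex -s runs vi's underlying line editor silently,
# reading ex commands from stdin instead of a keyboard.
printf 'Hello World\n' > greeting.txt
ex -s greeting.txt <<'EOF'
%s/World/Unix/
wq
EOF
cat greeting.txt   # the file now contains "Hello Unix"
```

The same commands could live in a script file, so the editor slots into batch workflows just like any other filter.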

@Zenie @restorante @weavejester Emacs, on the other hand, is extensible, but extensibility is not the same as composability. Extensibility often means pulling other tools into Emacs, wrapping them in Lisp, and effectively making Emacs the hub of everything.
@Zenie @restorante @weavejester That’s closer to integration or even absorption, not composition. You don’t pipe the output of grep into Emacs and get a text transformation out the other side—you embed grep within Emacs as a Lisp function call.

@Zenie @restorante @weavejester That’s philosophically different.

Vim is composable in the Unix tradition because it’s file-centric, scriptable, and plays in pipelines.
Emacs is extensible and integrative, but its model of absorbing everything into itself arguably violates the spirit of “do one thing well.”

In other words, if we’re being strict about Unix philosophy, Emacs is powerful, but it’s less “composable” than vim—it’s more of an operating system in itself.

@demiguru

I am not an expert in Elisp, but maybe you are interested in this:

https://www.gnu.org/software/emacs/manual/html_node/emacs/Command-Example.html

echo "Hello World" | emacs --batch --eval '(progn (insert (read-from-minibuffer "Input: ")) (write-region (point-min) (point-max) "output.txt"))'

I just copy-pasted the code above.

@Zenie @weavejester

Command Example (GNU Emacs Manual)

@restorante @Zenie @weavejester

What you’ve shown is Emacs scripting, which is fine, but it’s internal extensibility — you’re writing Emacs Lisp that runs inside Emacs. That’s not the same as taking existing Unix tools and composing them through standard input/output.

In your example: echo "Hello World" | emacs --batch --eval '…'

@restorante @Zenie @weavejester

Notice that echo isn’t actually feeding data through stdin to Emacs in a composable pipeline — instead Emacs is prompting and inserting via Elisp. Compare that to: echo "Hello World" | sed 's/World/Unix/'

@restorante @Zenie @weavejester echo "Hello World" | awk '{ print toupper($0) }'

Here, the tools are directly transforming streams, no custom scripting language needed.

@demiguru @restorante @Zenie @weavejester awk or sed is your scripting language in that case (much simpler and more limited than Emacs Lisp, but still). 's/World/Unix/' is your script, and you could write it to a file, use more complex or multiple commands, etc. With Emacs you can achieve the same thing and make it more composable with a few adjustments to the example above:

@demiguru @restorante @Zenie @weavejester

echo "Hello World" | emacs --batch --eval '(progn (insert (read-from-minibuffer "")) (princ (string-replace "World" "Unix" (buffer-string))))'

This does the same as 's/World/Unix/' in sed and prints it again to standard output. It's not that #emacs is not composable like that, but that it's usually used interactively (because that's more fun).

@demiguru @restorante @Zenie @weavejester also, I think of #emacs more like a universal interface to your computer, the command line, all the tools you call etc. And in that sense I would interpret the unix philosophy a bit differently: the individual tools, whether that's a command line program or a lisp function, should do one thing well and be composable, but IMO it doesn't matter if you pipe them together on the command line or you combine them within emacs.

@eruwero
I get that view — #Emacs can feel like a universal interface to the system, and its #Lisp functions are composable in their own right.

Where I see the #Unix philosophy diverge is in the boundary: tools designed as separate executables, with contracts enforced by the #OS, versus composition inside a single runtime. Both routes achieve interoperability, but one leans on external guarantees, the other on internal extensibility.

@restorante @Zenie @weavejester

@demiguru @restorante @Zenie @weavejester IMO this internal/external distinction is relatively arbitrary. In one case you have "separate executables" that you call from a shell, which is equivalent to calling a function, with the shell being the interpreter. In the other case you have lisp functions and a lisp interpreter (emacs) to call them.
@eruwero @restorante @Zenie @weavejester Right, that’s the analogy I was pointing at — a #shell is an interpreter, processes are its “functions.”
The key difference is that in the #Unix model, the #OS enforces the separation: each process is isolated, communicates through defined streams, and can be swapped out independently. In #Emacs, #Lisp functions share one runtime. Both are composable, but the guarantees differ.
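One way to see that OS-enforced contract: any process that reads stdin and writes stdout can fill the same pipeline slot and be swapped out independently. A minimal sketch with two unrelated programs doing the same job:

```shell
# Strip vowels from input, filling one pipeline slot two different ways:
echo "Hello World" | tr -d 'aeiou'         # tr's character deletion
echo "Hello World" | sed 's/[aeiou]//g'    # sed's regex substitution
# Both print "Hll Wrld"; the stream interface is guaranteed by the
# shell and the OS, not by the programs themselves.
```

Inside a single Lisp image, by contrast, the "contract" between functions is whatever the shared runtime happens to allow.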
@demiguru @eruwero @restorante @Zenie @weavejester to me this doesn't sound GNU Emacs specific. Most/many Lisp implementations on Unix work that way: code and data are shared in a single address space, providing a resident programming system.
@demiguru @eruwero @restorante @Zenie @weavejester the typical Lisp Machine then had only one Lisp runtime with one address space, booted as the operating system
@symbolics @eruwero @restorante @Zenie @weavejester Right — and that’s why I see #Emacs as #Lisp-lineage first, #Unix-lineage second.
@Zach  🇮🇱 🇺🇸

I think you're basically correct to point out that Emacs does not adhere to the unix philosophy as it is classically understood.

The locus classicus is presumably the widely quoted McIlroy et al. in the 'Unix Time-Sharing System' paper, where Emacs clearly violates the first maxim:

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features."

Or to take a later articulation, the first two tenets of Gancarz's _Linux and the Unix Philosophy_: 'Small is Beautiful' and 'Each Program Does One Thing Well'. Again, Emacs is the opposite of this.

(This kind of thing comes up constantly in every mention of the Unix philosophy that I've ever seen until this conversation and is easily verified by simply searching.  It's strange to see people insisting that there is simply no emphasis on small, special purpose programs. Of course one can decide that principle isn't important, but let's not ignore that it was (and I believe usually still is) considered to be so)

We can also note that in the original incarnation of Unix ('Research Unix') there was nothing remotely like Emacs and, as far as I know, very little scriptability. The Thompson shell allowed piping but not scripting. In OG Unix everything is written in C, with a simple interactive shell; if you want anything else, it's back to C programming.

Whereas Emacs does (as you say) have more in common with the Lisp ethos, where there aren't separate programs per se and everything is a function in one giant Lisp environment (same is true of Smalltalk) — although there are differences there, too which I might get on to later.

I think it pays to note that these views are informed by very different language traditions.  C is a static language with an explicit and slow compilation step.  Lisp on the other hand has always been a dynamic language where you can just throw a new function into your environment pretty much immediately, usually without having to (explicitly) compile or restart anything.

If you think about it a bit, you might start to see that if all you've got is C, small programs that don't keep their own state but write to text files may be the simplest option for getting any composability, particularly on limited machines.

However, I think we can fairly say that Unix found that only having a statically compiled language available was far too limiting and early on started haltingly down a path to acquiring dynamic programming abilities, starting with scriptable shells, and later larger programs with their own scripting languages.

People often cite the Unix philosophy as though it's obviously the right way to program. Modularity, clear interfaces, and composability we all think are good things, of course. But why is 'one program does one thing' better than, or even equal to, 'one function does one thing'?

#emacs #lisp #unix
Zotum

@jamie quoted:
> «Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features."»

Hmmm... how does this apply to the GNU Compiler Collection?

@Vassil Nikolov | Васил Николов

My guess is that GCC does not fare well under that rule. It is huge and does many things even by today's standards. A Unix purist from the late 70s, I can only imagine, would be horrified and astounded.

The memo written by McIlroy, Pinson, and Tague mentions:

Surprising to outsiders is the fact that UNIX compilers produce no listings: printing can be done better and more flexibly by a separate program.

Not too sure what they mean by this; it almost sounds like it's just about printing source code, which I suppose is sometimes called a 'listing' today, but a listing from a compiler would normally mean information about the compilation process, I would think.

Though it seems to indicate the first generation or two of C compilers were rather bare-bones. And the earliest one on record today is only 100K of source code.

Although they do mention lex and yacc as compiler 'front ends', so there was some thought of tooling collections.  Perhaps one could imagine a more Unix philosophy compiler collection emerging over time, more modular than GCC?

But GNU never really bought into the Unix philosophy wholesale. GNU Emacs obviously does not, as we've been discussing, but I'm pretty sure Unix purists have always complained about GNU tools being too big and too complex, with too many options?

Stallman has always been pretty clear that he's never been a big fan of Unix.  GNU copied unix because it was widespread, portable, tractable, and the division into individual programs allowed them to replace/produce it bit by bit.

(@Panicz Maciej Godek beat me to the GNU's Not Unix utterance...)

#unix #gcc

@jamie wrote:
> a listing is about printing source code? ... information about the compilation process

Right.
A listing has line numbers and shows compiler input.
(Punch cards can be loaded in the wrong order...)
It also lists all symbols and their locations for reading dumps.

> Stallman has always been pretty clear that he's never been a big fan of Unix.

Not just Stallman.
Perhaps the best single paper about this view at MIT at the time is Gabriel's "Worse Is Better" (as is usually called).

@jamie wrote:
> unix purists have always complained about GNU tools being too big and complex and too many options?

Not just purists and not just about GNU tools.
This Bell Labs (kt? dmr?) quote comes to mind:
"cat came back from Berkeley waving flags".

P.S.
Notice of Correction:
That quote is attributed to Rob Pike.
I don't know if I should mention him with the at-sign here.

P.P.S.
Forgot to mention _The UNIX-HATERS Handbook_,
an important historical document, whether you are for or against.

@Vassil Nikolov | Васил Николов

Re: GCC's size and complexity, there are I suppose a few categories of size and complexity when seen through the eyes of computing autonomy:

- too complex for me to understand
- mostly too complex for me to understand but I can understand & perhaps customize / automate some of it
- too big for me to understand all of it, but I can understand any part of it if I need (or want) to
- small enough and simple enough for me to understand all of it
- I actually do understand all of it

(The picture could be complicated by including what I can't understand right now but could if I put effort into learning new areas, and how much effort, and what my friends and allies can understand, these are important but let's leave them aside for now)

GNU Emacs is in the third category for me (perhaps not the C layer) and that seems like a nice enough place to live.  I can enjoy lots of features yet hack what I like.

GCC and the Linux kernel seem like they're in the top two categories for me. Even people who implement computer programming languages sometimes opine about the opaque behaviour of the compilation, which seems a bit concerning.

I can see the attraction of putting together a third-category system, which perforce would require a different kernel and a different C compiler.

That is certainly a good way of looking at this matter.

Do those people consider compilation as a whole to be opaque,
or just the optimization part of it?
(I don't know: I haven't come across such texts.)

And now for a different area: the above made me wonder in which of those categories mathematics falls:
"Mathematics is a science of simple things,
arranged appropriately."
(I do regret that I don't know who said this.)

@jamie