A habit I'm thinking of adopting for my shell scripting:

Write scripts in the form of a sequence of function definitions, with no commands at the top level that actually do anything. Then, at the very end, write a compound statement that calls one of the functions and ends with an exit. Like this:

#!/bin/bash
subroutine() { ...; }     # definitions only: nothing runs yet
main() { ...; }
{ main "$@"; exit $?; }   # the one statement that does anything

The idea is that the shell doesn't actually _do_ anything while reading this script until it reaches the braced compound statement at the end, and the braces force it to read that statement in its entirety before executing any of it. Then it's committed to exiting during the execution of that statement. So it won't ever read from the script file again.

With a normal shell script, it's dangerous to edit the script file while an instance of the shell is still running it, because the shell will read from the modified version of the file starting at the file position it had got to in the old one, perhaps reading a partial command or the wrong command and doing something you didn't want. But with a script in this style, the shell finishes reading the script before it does anything, so it's safe to edit.
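For example, here's a conventional script of the dangerous kind (everything in it is invented for illustration):

#!/bin/bash
sleep 600          # while this runs, the shell's read position sits
                   # somewhere just past this line
echo "all done"    # add or remove text above during the sleep, and the
                   # shell may resume reading mid-word and run garbage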

(Of course, if your editor saves the new script file to a different file name and renames it over the top, you're safe anyway. But not all editors do: emacs, in particular, reopens the existing file and overwrites its contents.)
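If your editor is of the overwrite-in-place kind, you can get the rename-over behaviour by hand; the filenames here are just for illustration:

cp script.sh script.sh.new     # work on a copy, not the live file
$EDITOR script.sh.new
mv script.sh.new script.sh     # atomic rename: the running shell keeps
                               # its open handle on the old version

The rename replaces the directory entry, but a shell that's mid-run carries on reading from the file it originally opened.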

@simontatham Why would someone want to edit a shell script whilst it's running?

And even if they did, that'd be in a sandboxed developer test environment, so it'd be quite harmless if it went wrong.

@TimWardCam you're kidding, right? Shell scripts are useful for the wrapper layer _around_ projects that live in nice organised repositories.

The example I've used elsewhere in this thread is backups. You have some backup tool which takes a command line saying 'back up _this_ machine to _that_ disk', or whatever. That tool itself, of course, is Properly Organised. It has a source control repo and numbered releases, and its developers take care to test changes on non-live data before releasing them to users.

But as a user of it, you personally always want to back up the same set of three machines to a disk mounted at the same pathname. So you write a personal shell script that contains your usual three backup commands and maybe also a mount/umount around them.
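Something like this, say, written in the style from the start of the thread (backuptool, the host names and the mount point are all invented):

#!/bin/bash
do_backups() {
    mount /mnt/backups || return 1   # assumes an fstab entry exists
    backuptool alpha /mnt/backups
    backuptool beta /mnt/backups
    backuptool gamma /mnt/backups
    umount /mnt/backups
}
{ do_backups "$@"; exit $?; }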

Seriously, would you have a dev version of _that script_ and a live version, and mock up an entire test framework you can run before deploying the one to the other? There's "well organised" and then there's just ludicrous. If you set up all that infrastructure for every 5-line script, you'd never get _anything_ done.

But then one day you run it, and you realise you don't know how far through the backup it is, and you think "oh yeah, there's a --progress-report option to the backup tool, it'd be nice if I'd put that on the command line". So you want to edit your script.

If it were in Python, you'd be able to do that right now, while the backup is still running. Too late to affect this run (unless you want to abort the backup and restart from scratch), but it will be better next time.

But in shell, you've got to write a note to yourself for later, and wait until the backup finishes before making your change. Or organise the script the way I said, in which case you can make the change right now and not risk forgetting it later.
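In the invented wrapper above, that's just a matter of editing each backup line to read

backuptool --progress-report alpha /mnt/backups

and so on, while this run is still going.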

@simontatham My backup system exposes all that in the GUI that sits on top of the underlying tool, so there's no need for me to mess with shell scripts.

But yeah, I appreciate that some people prefer doing this sort of thing in other ways.