Current status: implementing read-write locks in bash.

Regretting my life choices right now.

@siguza hmmm

Create lock file ${pid}-R or ${pid}-W, *then* check if there's another lock file present that conflicts (*-W if you're setting a read lock, *-* if you're setting a write lock, ignore your own). If there is, delete yours and fail.

Is there any possible race condition here? I think by creating first and checking for conflicts second you should be fine, but I have not thought about it too hard, nor tested it with contention...

@nicolas17 I don't trust bash glob to give you an atomic snapshot of the file system. The usual primitive used to build locks is `mkdir`, since that either atomically fails or atomically succeeds.

My current plan is to use a simple mkdir for the write-lock, and then do pid-based read locks, but gate globbing behind another simple lock (which would only be held for a very short amount of time though).

Contention isn't too much of an issue for me, I have at most 10 scripts running at the same time. My main worries are just a previous cronjob not finishing before the next one gets started, and also two cronjobs doing git add/commit/push at the same time...

@siguza Does it matter if the glob/list is not atomic though?

Process A creates lock. Process A lists locks. In the middle of listing, process B creates lock. A's glob happens to miss it because scandir is not atomic enough. But when B lists locks it will definitely see A's lock, right?

@nicolas17 I'm thinking more like:

- Process A creates lock and lists locks, finds none but itself.
- Process B creates lock and lists locks.
- In the middle of B listing locks, process C creates lock and that makes B skip over A's lock.
- A and B are now running concurrently.

@siguza ah the three body problem... yeah that does sound plausible :/

@siguza @nicolas17 (this is not what Claude proposed; I wanted to come up with my own solution).

- writers: making a file
- readers: owning a file in a directory

let's say the path for your lock is in $lck_path then:

wait()
{
    sleep 0.5 # or something smarter like inotify
}

write_lock()
{
    touch "$lck_path.$$"
    while ! mv -n "$lck_path.$$" "$lck_path" 2>/dev/null; do
        wait
    done
}

write_unlock()
{
    rm -f "$lck_path"
}

read_lock()
{
    while ! mkdir -p "$lck_path/$$" 2>/dev/null; do
        wait
    done
}

read_unlock()
{
    rmdir "$lck_path/$$"
    rmdir "$lck_path" 2>/dev/null
}
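A sketch of how those functions get used end to end (definitions repeated so the block runs standalone; `lck_path` is a hypothetical path, and note that whether `mv -n` reports failure when it skips the move varies between implementations):

```shell
# rwlock sketch from above, plus a trivial single-process exercise.
lck_path=${lck_path:-/tmp/rw1.lck}   # hypothetical lock path

write_lock()
{
    touch "$lck_path.$$"
    while ! mv -n "$lck_path.$$" "$lck_path" 2>/dev/null; do
        sleep 0.5
    done
}
write_unlock() { rm -f "$lck_path"; }

read_lock()
{
    # fails while a writer's *file* occupies $lck_path
    while ! mkdir -p "$lck_path/$$" 2>/dev/null; do
        sleep 0.5
    done
}
read_unlock()
{
    rmdir "$lck_path/$$"
    rmdir "$lck_path" 2>/dev/null   # last reader out removes the parent
}

read_lock
echo "shared section"
read_unlock

write_lock
echo "exclusive section"
write_unlock
```

The point being: a writer's lock is a *file* at `$lck_path`, a reader's lock is a pid-named *subdirectory* under it, and the two shapes exclude each other for free.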

@siguza @nicolas17 you probably also want to add some `trap` based auto-cleanup in case the shell script dies, and I literally wrote that in Mastodon so I'm rather sure it doesn't work due to stupid syntax errors, but the idea is that I use the filesystem semantics to make it work:

the writer uses `mv -n`, which refuses the move if the destination exists, whatever the destination is; that obviously gives you the x-lock (there might be a better way to do open(O_EXCL|O_CREAT), which is what I'm after here).
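For that open(O_EXCL|O_CREAT): the shell does expose it via `set -C` (noclobber), under which `>` refuses an existing file (bash implements the refusal with O_EXCL for regular files), so create-and-test becomes a single atomic step. A sketch, with `lck_path` hypothetical:

```shell
# Exclusive lock via noclobber: under "set -C", "> file" fails if the
# file already exists, so creating the lock file is also the test.
lck_path=${lck_path:-/tmp/rw2.lck}   # hypothetical lock path

write_lock()
{
    until ( set -C; : > "$lck_path" ) 2>/dev/null; do
        sleep 0.5
    done
}

write_unlock()
{
    rm -f "$lck_path"
}
```

The `set -C` only applies inside the subshell, so the rest of the script keeps its normal clobbering behavior.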

for readers I basically make them create a sub-directory -- which is exclusive with writers, because writers make a file -- and the reader lock is cleaned up by every read unlocker trying to delete the parent directory, which is only ever allowed if it's empty.

that gives you a reader-biased rwlock.
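The trap auto-cleanup mentioned above might look like this (a sketch; it repeats read_unlock so the block stands alone, and `lck_path` is hypothetical):

```shell
# Auto-release on death: whatever this script still holds is dropped
# when it exits, however it exits.
lck_path=${lck_path:-/tmp/rw3.lck}   # hypothetical lock path

read_unlock()
{
    rmdir "$lck_path/$$" 2>/dev/null
    rmdir "$lck_path" 2>/dev/null   # only succeeds for the last reader
    return 0                        # never fail from a trap handler
}

# EXIT fires on normal exit, on "exit", and on errors under "sh -e";
# re-raising INT/TERM as exits makes the EXIT trap cover signals too.
trap read_unlock EXIT
trap 'exit 130' INT TERM
```

With that in place, a cronjob that is killed mid-run no longer leaves a stale pid directory behind.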

you're welcome

@siguza @nicolas17 if you don't need portability you can use lockf(1); not sure if that exists on !BSD systems.

your lock is a file whose content is empty when unlocked and you implement your read_lock/... as sub-shells such as:

write_lock.sh:

```
#!/bin/sh -e

test ! -s "$1"
echo w > "$1"
```

write_unlock()
{
: > "$lck_path"
}

read_lock.sh:
```
#!/bin/sh -e

if test -s "$1"; then
    read l < "$1"
    test "$l" != "w" # will cause sh -e to exit if a writer holds the lock
fi

echo 1 >> "$1"
```

read_unlock()
{
lockf "$lck_path" <some `dd` command to remove 2 bytes from $lck_path, I'm too lazy to look>
}

@siguza @nicolas17 you also probably can do that with sub-shells instead, like lockf(1) suggests

wait()
{
    sleep 0.5
}

try_write_lock()
{
    (
        set -e   # make any failing step below fail the whole subshell
        lockf 9
        test ! -s "$lck_path"
        echo w > "$lck_path"
    ) 9<>"$lck_path"   # <>, not >, so opening fd 9 doesn't truncate the lock
}

write_lock()
{
    while ! try_write_lock; do
        wait
    done
}

I'll let you translate the read_lock etc ;)

@madcoder @nicolas17 Linux has flock(1), which actually supports both exclusive (-x) and shared (-s) locks, so that would work out of the box.

BSD lockf(1) doesn't seem to yield much over mkdir(1).
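A sketch of the flock(1) route on Linux (fd-based form; `lck_path` and the wrapper names are made up here):

```shell
# flock(1) wrappers: the lock lives on fd 9 and is released when the
# subshell (and with it the fd) goes away.
lck_path=${lck_path:-/tmp/rw4.lck}   # hypothetical lock path

with_write_lock()
{
    (
        flock -x 9   # exclusive: blocks until no readers/writers remain
        "$@"
    ) 9>"$lck_path"
}

with_read_lock()
{
    (
        flock -s 9   # shared: several readers may hold this concurrently
        "$@"
    ) 9>"$lck_path"
}
```

e.g. `with_write_lock git push` for the cronjob case.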

So I guess the portable solution would be:

mutex_lock()
{
    while ! mkdir "$lck_path.w" 2>/dev/null; do
        sleep 1;
    done;
}

mutex_unlock()
{
    rmdir "$lck_path.w";
}

write_lock()
{
    while true; do
        mutex_lock;
        if ! [ -s "$lck_path.r" ]; then
            break;
        fi;
        mutex_unlock;
        sleep 1;
    done;
}

write_unlock()
{
    mutex_unlock;
}

read_lock()
{
    mutex_lock;
    echo >>"$lck_path.r";
    mutex_unlock;
}

read_unlock()
{
    mutex_lock;
    truncate -s -1 "$lck_path.r";
    mutex_unlock;
}

  • Lock consists of a directory and a file, side by side
  • Directory acts as a mutex, acquired with mkdir, released with rmdir
  • Mutex must be held for any operation on the file, even reads
  • Readers take the mutex, append one character/line to the file (lock) or remove one (unlock), then drop the mutex again.
  • Writers take the mutex, then make sure the file is zero-sized. They keep the mutex for their entire work and only drop it once they want to release the write-lock.

Of course reader-biased and not very efficient, but... should be safe? truncate might need a manual replacement for systems other than BSD and Linux, but ehh...
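A quick single-process smoke test of that portable scheme (definitions repeated so it runs standalone; paths hypothetical, `truncate` is the coreutils one):

```shell
# Portable rwlock from above: directory = mutex, byte count of the
# .r file = reader count, writer keeps the mutex while working.
lck_path=${lck_path:-/tmp/rw5.lck}   # hypothetical lock path

mutex_lock()   { while ! mkdir "$lck_path.w" 2>/dev/null; do sleep 0.1; done; }
mutex_unlock() { rmdir "$lck_path.w"; }

read_lock()    { mutex_lock; echo >>"$lck_path.r"; mutex_unlock; }
read_unlock()  { mutex_lock; truncate -s -1 "$lck_path.r"; mutex_unlock; }

write_lock()
{
    while true; do
        mutex_lock
        [ -s "$lck_path.r" ] || break   # no readers left: keep the mutex
        mutex_unlock
        sleep 0.1
    done
}
write_unlock() { mutex_unlock; }

# Two overlapping readers, then a writer once both are gone.
read_lock; read_lock
read_unlock; read_unlock
write_lock
echo "exclusive section"
write_unlock
```

Each reader adds one newline to the `.r` file and removes one on unlock, so "no readers" is simply "the file is empty".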

@madcoder @nicolas17 using files vs directories is a very elegant solution, and on Linux this might actually work... but on BSD, mv -n is not atomic, so two writers could race each other. I know the renameatx_np syscall supports RENAME_EXCL, but mv doesn't use that. I'm not entirely sure that GNU mv -n is atomic, but at least there is a RENAME_NOREPLACE in some code it calls into, so it's at least plausible.
@siguza @nicolas17 thank you. I was proud of myself, I shall say
@siguza @nicolas17 and what you want is not really rename, it's creating the file. touch(1) didn't seem to give you any way to do that; I didn't look at other ways to do it.