Current status: implementing read-write locks in bash.

Regretting my life choices right now.

@siguza hmmm

Create lock file ${pid}-R or ${pid}-W, *then* check if there's another lock file present that conflicts (*-W if you're setting a read lock, *-* if you're setting a write lock, ignore your own). If there is, delete yours and fail.

Is there any possible race condition here? I think by creating first and checking for conflicts second you should be fine, but I have not thought about it too hard, nor tested it with contention...
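A minimal sketch of that scheme for the read-lock side — the lock directory, the `take_read_lock` name, and the `12345` writer pid are all made up for illustration:

```shell
# Create-then-check, as proposed: drop our own marker first, then scan
# for conflicting lock files (for a read lock, only *-W conflicts).
lockdir="${TMPDIR:-/tmp}/demo-locks-$$"
mkdir -p "$lockdir"

take_read_lock() {
    : > "$lockdir/$$-R"              # create our lock file first
    for f in "$lockdir"/*-W; do      # then look for conflicting writers
        [ -e "$f" ] || continue      # glob matched nothing
        rm -f "$lockdir/$$-R"        # conflict: delete ours and fail
        return 1
    done
}

take_read_lock && echo "read lock taken"
rm -f "$lockdir/$$-R"                # release

: > "$lockdir/12345-W"               # simulate another process's write lock
take_read_lock || echo "blocked by writer"
```

The write-lock side would be identical except that it treats every `*-R` and `*-W` file other than its own as a conflict.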

@nicolas17 I don't trust bash glob to give you an atomic snapshot of the file system. The usual primitive used to build locks is `mkdir`, since that either atomically fails or atomically succeeds.
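The classic `mkdir`-based mutex, for reference (the lock path here is hypothetical):

```shell
# mkdir either atomically creates the directory (we own the lock) or
# atomically fails (someone else does) -- no check-then-act window.
mutex="${TMPDIR:-/tmp}/demo-mutex-$$.d"

until mkdir "$mutex" 2>/dev/null; do
    sleep 0.5
done
echo "lock acquired"

mkdir "$mutex" 2>/dev/null || echo "already held"   # a second taker fails

rmdir "$mutex"   # release
```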

My current plan is to use a simple mkdir for the write-lock, and then do pid-based read locks, but gate globbing behind another simple lock (which would only be held for a very short amount of time though).

Contention isn't too much of an issue for me, I have at most 10 scripts running at the same time. My main worries are just a previous cronjob not finishing before the next one gets started, and also two cronjobs doing git add/commit/push at the same time...

@siguza Does it matter if the glob/list is not atomic though?

Process A creates lock. Process A lists locks. In the middle of listing, process B creates lock. A's glob happens to miss it because scandir is not atomic enough. But when B lists locks it will definitely see A's lock, right?

@nicolas17 I'm thinking more like:

- Process A creates lock and lists locks, finds none but itself.
- Process B creates lock and lists locks.
- In the middle of B listing locks, process C creates lock and that makes B skip over A's lock.
- A and B are now running concurrently.

@siguza @nicolas17 (this is not what Claude proposed; I wanted to come up with my own solution).

- writers: creating a file
- readers: owning a subdirectory inside a directory

let's say the path for your lock is in $lck_path then:

wait() # note: this shadows the shell's `wait` builtin
{
    sleep 0.5 # or something smarter like inotify
}

write_lock()
{
    touch "$lck_path.$$"
    # -T (GNU mv) treats the destination as the final name, so a
    # reader-held lock *directory* can't swallow the temp file
    while ! mv -nT "$lck_path.$$" "$lck_path" 2>/dev/null; do
        wait
    done
}

write_unlock()
{
    rm -f "$lck_path"
}

read_lock()
{
    while ! mkdir -p "$lck_path/$$" 2>/dev/null; do
        wait
    done
}

read_unlock()
{
    rmdir "$lck_path/$$"
    rmdir "$lck_path" 2>/dev/null # only the last reader succeeds here
}

@siguza @nicolas17 you probably also want to add some `trap` based auto-cleanup in case the shell script dies, and I literally wrote that in Mastodon, so I'm rather sure it doesn't work due to stupid syntax errors, but the idea is that I use the filesystem semantics to make it work:

the writer uses `mv -n`, which refuses the move if the destination exists, whatever the destination is; that obviously gives you the x-lock (there might be a better way to get open(O_EXCL|O_CREAT) semantics, which is what I'm after here).
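One such way from plain shell: `set -C` (noclobber) makes the `>` redirection fail if the target already exists, which is effectively an open(O_CREAT|O_EXCL) — and unlike `mv -n`, the open also fails when the existing path is a directory. A sketch with a made-up path:

```shell
lck_path="${TMPDIR:-/tmp}/demo-wlock-$$"

# noclobber turns '>' into an O_CREAT|O_EXCL open; run it in a subshell
# so the option doesn't leak into the rest of the script
( set -C; : > "$lck_path" ) 2>/dev/null && echo "write lock taken"
( set -C; : > "$lck_path" ) 2>/dev/null || echo "second taker failed"

rm -f "$lck_path"   # unlock
```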

for readers I basically make them create a sub-directory -- which is exclusive with writers, because writers make a file -- and the reader lock is cleaned up by every read-unlocker trying to delete the parent directory, which is only ever allowed when it's empty.
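The file-vs-directory exclusion can be checked directly: a writer's lock file makes a reader's `mkdir -p` fail (ENOTDIR), and a reader's lock directory makes the writer's exclusive create fail (EISDIR). Paths are hypothetical; the writer side uses a noclobber `>` open rather than `mv` to keep the demo short:

```shell
lck_path="${TMPDIR:-/tmp}/demo-rwlck-$$"

: > "$lck_path"                                  # writer's lock: a plain file
mkdir -p "$lck_path/$$" 2>/dev/null \
    || echo "reader blocked by writer"           # mkdir fails: ENOTDIR
rm -f "$lck_path"                                # write unlock

mkdir -p "$lck_path/$$"                          # reader's lock: a directory
( set -C; : > "$lck_path" ) 2>/dev/null \
    || echo "writer blocked by reader"           # exclusive create fails: EISDIR
rmdir "$lck_path/$$" "$lck_path"                 # read unlock, last reader cleans up
```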

that gives you a reader biased rwlock.
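The `trap`-based auto-cleanup mentioned above could be sketched like this for the reader side (path hypothetical):

```shell
lck_path="${TMPDIR:-/tmp}/demo-rwlck-$$"

# drop our read lock on any exit path, so a crashed reader can't
# block writers forever
cleanup() {
    rmdir "$lck_path/$$" 2>/dev/null
    rmdir "$lck_path" 2>/dev/null    # only succeeds for the last reader
}
trap cleanup EXIT HUP INT TERM

mkdir -p "$lck_path/$$" && echo "read lock taken"
# ... critical section; cleanup runs automatically when the script exits
```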

you're welcome

@madcoder @nicolas17 using files vs directories is a very elegant solution, and on Linux this might actually work... but on BSD, mv -n is not atomic, so two writers could race each other. I know the renameatx_np syscall supports RENAME_EXCL, but mv doesn't use that. I'm not entirely sure that GNU mv -n is atomic, but at least there is a RENAME_NOREPLACE in some code it calls into, so it's at least plausible.
@siguza @nicolas17 thank you. I was proud of myself, I shall say
@siguza @nicolas17 and what you want is not really rename, it's exclusively creating the file. touch(1) didn't seem to give you any way to do that. I didn't look at other ways to do it.