Current status: implementing read-write locks in bash.
Regretting my life choices right now.
@siguza hmmm
Create lock file ${pid}-R or ${pid}-W, *then* check if there's another lock file present that conflicts (*-W if you're setting a read lock, *-* if you're setting a write lock, ignore your own). If there is, delete yours and fail.
Is there any possible race condition here? I think by creating first and checking for conflicts second you should be fine, but I have not thought about it too hard, nor tested it with contention...
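The scheme described above might look roughly like this in shell (an untested sketch; the lock directory path and the `acquire`/`release` names are made up for illustration):

```shell
#!/bin/sh
# Hypothetical lock directory; unique per run here just for demonstration.
lockdir=/tmp/locks.$$
mkdir -p "$lockdir"

# acquire R|W: create our ${pid}-R or ${pid}-W file first,
# *then* scan for conflicting lock files; on conflict, delete ours and fail.
acquire() {
    touch "$lockdir/$$-$1"
    for f in "$lockdir"/*; do
        [ -e "$f" ] || continue
        base=${f##*/}
        pid=${base%-*}
        type=${base#*-}
        [ "$pid" = "$$" ] && continue   # ignore our own lock file
        # *-W conflicts with everything; if we want W, *-* conflicts too.
        if [ "$type" = "W" ] || [ "$1" = "W" ]; then
            rm -f "$lockdir/$$-$1"
            return 1
        fi
    done
    return 0
}

release() { rm -f "$lockdir/$$-$1"; }
```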
@nicolas17 I don't trust bash glob to give you an atomic snapshot of the file system. The usual primitive used to build locks is `mkdir`, since that either atomically fails or atomically succeeds.
My current plan is to use a simple mkdir for the write-lock, and then do pid-based read locks, but gate globbing behind another simple lock (which would only be held for a very short amount of time though).
Contention isn't too much of an issue for me; I have at most 10 scripts running at the same time. My main worries are just a previous cronjob not finishing before the next one gets started, and also two cronjobs doing git add/commit/push at the same time...
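That plan could be sketched like so (untested; all paths and function names are hypothetical — the point is that `mkdir` is the atomic primitive, and the glob over per-pid read locks only ever happens while holding the short-lived meta-lock):

```shell
#!/bin/sh
# Hypothetical base directory for all lock state.
base=/tmp/rwlock.$$
mkdir -p "$base"

# Short-lived meta-lock guarding the glob; mkdir atomically fails or succeeds.
meta_lock()   { until mkdir "$base/meta" 2>/dev/null; do sleep 0.1; done; }
meta_unlock() { rmdir "$base/meta"; }

# Write lock: a single mkdir-based lock, refused while any readers exist.
write_lock() {
    meta_lock
    if [ -e "$base/wlock" ] || ls "$base"/*.rlock >/dev/null 2>&1; then
        meta_unlock
        return 1
    fi
    mkdir "$base/wlock"
    meta_unlock
}
write_unlock() { rmdir "$base/wlock"; }

# Read lock: a per-pid marker file, refused while the write lock is held.
read_lock() {
    meta_lock
    if [ -e "$base/wlock" ]; then
        meta_unlock
        return 1
    fi
    touch "$base/$$.rlock"
    meta_unlock
}
read_unlock() { rm -f "$base/$$.rlock"; }
```

Since both the glob and the wlock check happen under the meta-lock, no two processes can be mid-scan at the same time, which is exactly what closes the race discussed below.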
@siguza Does it matter if the glob/list is not atomic though?
Process A creates lock. Process A lists locks. In the middle of listing, process B creates lock. A's glob happens to miss it because scandir is not atomic enough. But when B lists locks it will definitely see A's lock, right?
@nicolas17 I'm thinking more like:
- Process A creates lock and lists locks, finds none but itself.
- Process B creates lock and lists locks.
- In the middle of B listing locks, process C creates lock and that makes B skip over A's lock.
- A and B are now running concurrently.