How to avoid a race condition when using a lock file to prevent two instances of a script from running simultaneously?
A typical approach to avoid two instances of the same script running simultaneously looks like this:
[ -f ".lock" ] && exit 1 touch .lock # do something rm .lock
Is there a better way to lock on files from a shell script, avoiding the race condition? Must directories be used instead?
Yes, there is indeed a race condition in the sample script: another instance can sneak in between the -f test and the touch. You can use bash's noclobber option to make lock creation atomic, so that the redirection fails if the file already exists.
The following sample code snippet (inspired by this article) illustrates the mechanism:
    lockfile=/var/tmp/mylock   # example path, adjust to taste

    if (set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
        # This will cause the lock file to be deleted in case of a
        # premature exit.
        trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

        # Critical section: here you'd place the code/commands you want
        # to be protected (i.e., not run in multiple processes at once).

        rm -f "$lockfile"
        trap - INT TERM EXIT
    else
        echo "Failed to acquire lock file: $lockfile."
        echo "Held by process $(cat "$lockfile")."
    fi
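One caveat with this approach: if the script dies from a signal that cannot be trapped (e.g. SIGKILL), the EXIT trap never runs and the lock file is left behind. Since the file stores the owner's PID, a stale lock can be detected and cleaned up. A rough sketch of such a recovery step for the else branch (the kill -0 check is my addition, not part of the original snippet, and note that deleting a stale file reopens a small race window between multiple waiters):

    # On failure to acquire, check whether the recorded owner still exists.
    otherpid=$(cat "$lockfile" 2> /dev/null)
    # kill -0 sends no signal; it merely tests whether the PID is alive.
    if [ -n "$otherpid" ] && ! kill -0 "$otherpid" 2> /dev/null; then
        echo "Removing stale lock held by dead process $otherpid"
        rm -f "$lockfile"
        # Caution: another waiter may do the same removal concurrently,
        # so a small race remains; retry the noclobber acquire afterwards.
    fi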
Try the flock command:
exec 200>"$LOCK_FILE" flock -e -n 200 || exit 1
The script exits if the lock file is already locked (-n makes the call non-blocking). The lock is acquired atomically, and it works over recent versions of NFS.
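A minimal self-contained sketch of that pattern (the lock path here is illustrative; pick one appropriate for your system):

    #!/bin/bash
    LOCK_FILE=/var/tmp/myscript.lock   # illustrative path

    # Open the lock file on file descriptor 200, then try to take an
    # exclusive (-e) lock without blocking (-n); bail out if another
    # instance already holds it.
    exec 200>"$LOCK_FILE"
    flock -e -n 200 || { echo "Another instance is running." >&2; exit 1; }

    # ... critical section ...

    # The lock is released automatically when the script exits and
    # file descriptor 200 is closed.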
I did a test: I created a counter file containing 0, then executed the following in a loop, 500 times on each of two servers simultaneously:
    #!/bin/bash
    exec 200>/nfs/mount/testlock
    flock -e 200

    NO=$(cat /nfs/mount/counter)
    echo "$NO"
    let NO=NO+1
    echo "$NO" > /nfs/mount/counter
One node was fighting with the other for the lock. When both runs finished, the counter file contained 1000. I have tried this multiple times and it always works!
Note: the NFS client is RHEL 5.2 and the server is a NetApp filer.
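To reproduce the test, a driver loop along these lines would do (increment.sh is a placeholder name I've chosen for the snippet above):

    #!/bin/bash
    # Run the counter-increment script 500 times; start this loop
    # on both servers at roughly the same time.
    for i in $(seq 1 500); do
        ./increment.sh
    done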
Lock your script (against parallel runs)
Seems like I've found an easier solution: man lockfile.
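For example, a minimal sketch using lockfile, which ships with procmail (the lock path is illustrative):

    #!/bin/bash
    # -r 0 means give up immediately instead of retrying
    # if the lock is already held.
    lockfile -r 0 /tmp/myscript.lock || exit 1

    # ... critical section ...

    # lockfile creates the file read-only, so use rm -f to remove it.
    rm -f /tmp/myscript.lock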