Have you tried fcntl?

    import fcntl

    logfile = open(LOGFILE, 'a')    # 'a' so existing log entries are kept

    # fileno() returns the file descriptor of the opened file.
    # LOCK_EX requests an exclusive lock: if the file is already locked,
    # the other process waits for it to be unlocked before proceeding.
    fcntl.flock(logfile.fileno(), fcntl.LOCK_EX)

    # ... code for writing to the file ...

    # unlock the file
    fcntl.flock(logfile.fileno(), fcntl.LOCK_UN)
    logfile.close()

You don't have to lock LOGFILE itself; you can lock a separate file
instead. As long as every process takes that lock before writing, none
of them will write to LOGFILE concurrently (a sketch of this
separate-lockfile approach follows at the end of this post).

Martin Kaufmann wrote:
> Could somebody help me? I have a script that writes its output in a
> logfile. This logfile is needed to check whether a task has already
> been done. Several workstations are now using the same script (via
> NFS). So I implemented the following file locking mechanism:
>
>     logfile = posixfile.open(LOGFILE, 'a')
>     logfile.lock('|w')
>     [code snipped...]
>     log_string = '%s %s%s %d %s %s\n' % (mytime, host, url,
>                                          error_code, message, hostname)
>     logfile.write(log_string)
>     logfile.lock('u')
>     logfile.close()
>
> But I still get problems with two processes trying to write at the
> same time. What is wrong with my implementation?
>
> Thanks for your help.
>
> Regards,
>
> Martin
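For completeness, here is a minimal sketch of that separate-lockfile
idea, wrapped in a helper function. The names LOGFILE, LOCKFILE and
append_log_line are made up for illustration; adjust them to your
setup. One caveat, since your script runs over NFS: flock() semantics
on NFS mounts vary between systems, so fcntl.lockf() (POSIX record
locks) may be a safer choice there.

    import fcntl
    import os

    LOGFILE = '/shared/tasks.log'     # hypothetical paths,
    LOCKFILE = LOGFILE + '.lock'      # adjust to your setup

    def append_log_line(line):
        # Take an exclusive lock on a separate lockfile. This only
        # serializes processes that agree to call this function first;
        # it does not stop anyone who opens LOGFILE directly.
        lock = open(LOCKFILE, 'w')
        try:
            fcntl.flock(lock.fileno(), fcntl.LOCK_EX)   # blocks until free
            logfile = open(LOGFILE, 'a')                # append, never truncate
            try:
                logfile.write(line)
                logfile.flush()                         # flush buffers and
                os.fsync(logfile.fileno())              # hit the disk before
            finally:                                    # the lock is released
                logfile.close()
        finally:
            fcntl.flock(lock.fileno(), fcntl.LOCK_UN)
            lock.close()

    # usage (example data):
    append_log_line('host1 /index.html 200 OK\n')

Opening LOGFILE only after the lock is held guarantees that appends
from different processes never interleave, as long as every writer
goes through the same helper.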