On 05/11/2010 21:33, Dimitar Zhekov wrote:
On Fri, 05 Nov 2010 19:50:59 +0100 Colomban Wendling lists.ban@herbesfolles.org wrote:
On 05/11/2010 20:08, Dimitar Zhekov wrote:
a. Create filename-foo, write data to it, abort and unlink on failure. [...]
Hey, that's quite clever :) We would then avoid having to read the original file.
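For illustration, here is a minimal Python sketch of the create-then-replace approach quoted above. The quoted message is truncated ("[...]"), so the final atomic-replace step and all names here are my assumption, not part of the original proposal:

```python
import os

def save_via_temp(filename, data):
    """Sketch of the create-temp-then-replace approach: write the new
    contents to filename-foo, unlink it and abort on any failure, and
    only replace the original once the write has fully succeeded."""
    tmp = filename + "-foo"  # temp name taken from the quoted message
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure data hits the disk
    except BaseException:
        os.unlink(tmp)  # abort: remove the partial temp file
        raise
    # Assumed final step (elided in the quote): atomically replace
    # the original, which is never left in a half-written state.
    os.replace(tmp, filename)
```

The downside the thread is discussing: the original file's metadata (ownership, hard links, etc.) may not survive the replacement, and twice the disk space is needed briefly.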
It's a well-known algorithm, but there is a better one, assuming that the underlying I/O system supports open for read-write and truncate. I checked GIO, and it does: g_file_open_readwrite, g_seekable_truncate. :D
1. Open the file for reading and writing.
2. If the new data is longer, append the extra part only (thus claiming the required disk space). If that fails, truncate to the original size and abort.
3. Write the data (without the extra part, if any).
4. If the new data is shorter, truncate.
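The four steps above can be sketched as follows. This uses Python's os-level calls as a stand-in for GIO's g_file_open_readwrite() and g_seekable_truncate(); the function name and structure are mine, purely illustrative:

```python
import os

def save_in_place(filename, data):
    """Sketch of the in-place algorithm: claim any extra space first,
    then overwrite the existing bytes, then shrink if needed."""
    fd = os.open(filename, os.O_RDWR)  # step 1: open for read-write
    try:
        old_size = os.fstat(fd).st_size
        if len(data) > old_size:
            # step 2: append only the extra part, claiming disk space
            os.lseek(fd, 0, os.SEEK_END)
            try:
                os.write(fd, data[old_size:])
            except OSError:
                os.ftruncate(fd, old_size)  # abort: restore old size
                raise
        # step 3: write the data (without the extra part, if any)
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, data[:old_size] if len(data) > old_size else data)
        if len(data) < old_size:
            os.ftruncate(fd, len(data))  # step 4: shrink to new size
    finally:
        os.close(fd)
```

Note that the file descriptor stays the same inode throughout, so permissions, ownership, and hard links are all preserved, which is exactly the property the rename-based approach loses.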
That's almost 100% safe (there is no apparent reason for truncating to a smaller size to fail), it preserves everything, and it uses no extra disk space, not even extra I/O.
Even though it seems really interesting, I'm not sure it is really reliable. What if the program terminates (power loss, whatever) during the final write? What if a network connection is broken before we truncate? Since the whole operation is not atomic, it's not 100% safe, and I still believe the backup is needed.
That said, it could be a good implementation for non-safe file saving, which would be safer than nothing. But that's probably a lot of work for little gain, IMHO.
Regards, Colomban