On Fri, 05 Nov 2010 23:08:57 +0100 Colomban Wendling <lists.ban@herbesfolles.org> wrote:
On 05/11/2010 21:33, Dimitar Zhekov wrote:
On Fri, 05 Nov 2010 19:50:59 +0100
1. Open the file for reading and writing.
2. If the new data is longer, append the extra part only (thus claiming the required disk space). If that fails, truncate to the original size and abort.
3. Write the data (without the extra part, if any).
4. If the new data is shorter, truncate.
That's almost 100% safe (there is no apparent reason for a truncate to a smaller size to fail), preserves everything, uses no extra disk space, and adds not even extra I/O.
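In rough C, it would look something like this (just a sketch: the function name and error handling are mine, not actual Geany code, and short writes are treated as plain failures for brevity):

	#include <fcntl.h>
	#include <sys/stat.h>
	#include <unistd.h>

	static int save_in_place(const char *path, const char *data, size_t len)
	{
		struct stat st;
		size_t overlap;
		int fd = open(path, O_RDWR);          /* 1. open for reading and writing */

		if (fd == -1)
			return -1;
		if (fstat(fd, &st) == -1)
			goto fail;

		if (len > (size_t) st.st_size)        /* 2. claim the extra disk space */
		{
			size_t extra = len - (size_t) st.st_size;

			if (pwrite(fd, data + st.st_size, extra, st.st_size) != (ssize_t) extra)
			{
				ftruncate(fd, st.st_size);    /* restore the original size */
				goto fail;
			}
		}

		/* 3. overwrite the part that overlaps the old contents */
		overlap = len < (size_t) st.st_size ? len : (size_t) st.st_size;
		if (pwrite(fd, data, overlap, 0) != (ssize_t) overlap)
			goto fail;

		if (len < (size_t) st.st_size)        /* 4. shrink if the new data is shorter */
			if (ftruncate(fd, len) == -1)
				goto fail;

		return close(fd);

	fail:
		close(fd);
		return -1;
	}

Until step 4, the old data is never shortened, so any failure leaves the old contents intact, at worst followed by some extra bytes.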
Even though it seems really interesting, I'm not sure it is truly reliable. What if the program terminates (power loss, whatever) during the final write? What if a network connection is broken before we truncate?
At any given moment, the file is either OK, or it contains the old data followed by garbage. The latter case should be reported to the user.
Since the whole operation is not atomic, it's not 100% safe, and I still believe that the backup is needed. That said, it could be a good implementation for non-safe file saving, one that would be safer than the current behavior.
Of course, I propose this only for use_safe_file_saving = FALSE.
But that's probably a lot of work for little gain IMHO.
I'll write a non-GIO variant first, as a proof of concept. The current non-GIO code is buggy anyway. First:
if (G_UNLIKELY(len != bytes_written)) err = errno;
but fwrite() is not guaranteed to set errno, only write() is.
Second, and more importantly, the result of fclose() is not checked for a buffered file stream. When the disk runs out of space, on my system fwrite() happily returns written == len, but fclose() fails. YMMV.
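For illustration, the buffered variant would at least have to be shaped like this (a hypothetical helper, not the actual Geany code):

	#include <errno.h>
	#include <stdio.h>

	/* Returns 0 on success, or an errno value otherwise. */
	static int buffered_write_all(FILE *fp, const char *data, size_t len)
	{
		int err = 0;

		errno = 0;
		if (fwrite(data, 1, len, fp) != len)
			err = errno ? errno : EIO;  /* fwrite() need not set errno */

		/* fclose() flushes the buffer, so ENOSPC may only show up here */
		if (fclose(fp) != 0 && err == 0)
			err = errno;

		return err;
	}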
If nothing else, we should use non-buffered I/O with fsync(), and check the result of close() anyway.
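Something along these lines (again only a sketch, the name is mine):

	#include <errno.h>
	#include <fcntl.h>
	#include <unistd.h>

	/* Non-buffered variant: write() (which does set errno), fsync()
	 * to push the data to disk, and a checked close().
	 * Returns 0 on success, or an errno value otherwise. */
	static int unbuffered_write_all(const char *path, const char *data, size_t len)
	{
		int err = 0;
		int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);

		if (fd == -1)
			return errno;

		while (len > 0)
		{
			ssize_t n = write(fd, data, len);

			if (n == -1)
			{
				err = errno;
				break;
			}
			data += n;
			len -= (size_t) n;
		}

		if (err == 0 && fsync(fd) == -1)  /* force the data out of kernel buffers */
			err = errno;
		if (close(fd) == -1 && err == 0)
			err = errno;

		return err;
	}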