Data that our code writes to a file may not be accessible immediately after writing.
One reason is that some output stream implementations use buffers
and prefer to write out a whole buffer in one go.
In these situations, if we want to be sure the data can be read from the file right away, we flush the buffer.
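As a minimal sketch of this flushing pattern (using only standard JDK classes; the file name is just a temp file for illustration):

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushExample {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("flush-demo", ".txt");
        try (OutputStream out = new BufferedOutputStream(Files.newOutputStream(file))) {
            out.write("hello".getBytes(StandardCharsets.UTF_8));
            // Without flush(), the bytes may still sit in the stream's internal
            // buffer and not yet be visible to readers of the file.
            out.flush();
            // After flush() the bytes have been handed to the operating system,
            // so another reader opening the file now can see them.
            System.out.println(Files.readString(file));
        }
    }
}
```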
Another option in Java is to open the file for writing with the SYNC option, which
requires that every update to the file's content or metadata be written synchronously to the underlying storage device.
Note that this sounds like it could lead to performance issues, as opposed to a single sync
when finished writing for the current task.
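Opening a file this way looks like the following (a sketch with standard JDK classes only; DSYNC is the cheaper sibling option that synchronizes content but not metadata):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SyncOpenExample {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("sync-demo", ".txt");
        // SYNC asks that every update reach the storage device
        // before the write call returns.
        try (OutputStream out = Files.newOutputStream(file,
                StandardOpenOption.WRITE, StandardOpenOption.SYNC)) {
            out.write("synced".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println(Files.readString(file));
    }
}
```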
While investigating a flaky end-to-end test at work, I found that the data may not be available after writing even when we flush the buffer, even when the output stream implementation does not buffer at all, and even when we open the file for writing with the SYNC option.
It is true that the SYNC option comes with some cautions, but as far as I can tell we fulfil the requirements needed to get the expected behaviour.
How could the data not be on disk?
Several possible reasons occurred to me. Perhaps the runtime implementation does not honor the SYNC instruction. That seems unlikely, because the error occurred on an Oracle JVM implementation, and I don't expect a bug in its file-writing code.
Neither do I expect a bug in the operating system's disk cache. Do check out the diagram on that page! It is incredible how many components our bytes might flow through.
Perhaps some other buffer is getting between me and my data. SSDs and hard disks have a disk buffer, a small amount of fast memory similar to RAM. The disk buffer is read first, before the drive looks any further; that is the whole idea behind disk buffers: providing fast storage for recent or popular data. This article claims that write operations on SSDs can take as long as 6 seconds, so the hardware write time may still be an interesting angle if there is no disk buffer, or the disk buffer is bypassed for some reason.
A filesystem driver could do additional buffering. I suppose it is possible that the filesystem driver does not honor the SYNC instruction.
Certainly a remote filesystem cannot be expected to honor it; doing so across computers would require a lot more work, and I would expect quite low performance, especially if the protocol is built on a reliable transport such as TCP. The guarantees such reliable protocols give require much more network traffic than transferring the data itself. When very small chunks of bytes have to be transferred and immediately written to disk, that means more overhead per byte, and possibly more delay, because the bytes have to be written out in the correct order.
I see reliability in the face of hardware or software failure as the main benefit of direct data synchronization. The extra network traffic required to make a protocol reliable makes it less likely that this benefit can be achieved. Several network packets may stay buffered on the network adapter because one packet is out of order; the missing packet may never arrive because of a failure in any of the involved systems, and in that case the data inside the buffered packets would never be written to disk. Note that the same is true of disk buffers: most SSDs don't store enough reserve power to write their disk buffer out when power is lost. Drives that do are said to have 'power loss protection'.
Other ways, in Java
Is there another way in Java to ask for the file contents to be written to the device immediately? The FileDescriptor class has a sync method. However, when opening the file with the SYNC option we use the java.nio.file.Files class and receive an output stream, while FileDescriptor lives in the java.io package; these two avenues cannot be combined directly. FileChannel's force method won't help us either, as we need to pass an output stream to an external library.
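Unlike the stream returned by java.nio.file.Files, a FileOutputStream does expose its FileDescriptor, so one possible workaround (a sketch with standard JDK classes, not necessarily applicable to every setup) is to hand the external library a FileOutputStream and call sync on its descriptor afterwards:

```java
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FdSyncExample {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("fd-sync-demo", ".txt");
        try (FileOutputStream out = new FileOutputStream(file.toFile())) {
            out.write("forced".getBytes(StandardCharsets.UTF_8));
            // sync() blocks until the OS reports that all buffered
            // modifications for this file descriptor are on the device.
            out.getFD().sync();
        }
        System.out.println(Files.readString(file));
    }
}
```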
Our situation occurred in an end-to-end test, and it is not clear to me how it can be solved without adding a wait period. Writing data to disk and immediately reading the same data back should not occur in production code: it requires less disk I/O to keep the data in memory after writing it instead of reading it back right away, and since RAM is considerably faster to access than disk, overall performance should be considerably better. In practice this problem should only occur when you can change neither the code that writes to disk nor the code that reads from it. In our production environment our testing problem is therefore not relevant. Still, I wonder why the data could not be read from disk even though we had asked for it to be written out immediately.
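If a wait is unavoidable in the test, a bounded poll is usually kinder than a fixed sleep: it returns as soon as the data appears and only pays the worst-case delay on failure. A minimal sketch (the method name and poll interval are my own invention, not part of any framework):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;

public class AwaitFileContent {
    // Poll until the file contains the expected text, or the timeout expires.
    static boolean awaitContent(Path file, String expected, Duration timeout)
            throws IOException, InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (Files.exists(file)
                    && Files.readString(file, StandardCharsets.UTF_8).contains(expected)) {
                return true;
            }
            Thread.sleep(50); // hypothetical poll interval
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("await-demo", ".txt");
        Files.writeString(file, "ready");
        System.out.println(awaitContent(file, "ready", Duration.ofSeconds(2)));
    }
}
```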