Message-ID: <Pine.LNX.4.64.1304031656420.31007@file.rdu.redhat.com>
Date: Wed, 3 Apr 2013 17:05:38 -0400 (EDT)
From: Mikulas Patocka <mpatocka@...hat.com>
To: Jeff Moyer <jmoyer@...hat.com>
cc: Jens Axboe <axboe@...nel.dk>,
"Alasdair G. Kergon" <agk@...hat.com>, Tejun Heo <tj@...nel.org>,
Mike Snitzer <msnitzer@...hat.com>,
Christoph Hellwig <chellwig@...hat.com>, dm-devel@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Track block device users that created dirty pages
On Mon, 1 Apr 2013, Jeff Moyer wrote:
> Mikulas Patocka <mpatocka@...hat.com> writes:
>
> > The new semantics are: if a process did some buffered writes to the
> > block device (with write or mmap), the cache is flushed when the
> > process closes the block device. Processes that didn't do any buffered
> > writes to the device don't cause a cache flush. This has these
> > advantages:
> > * processes that don't do buffered writes (such as "lvm") don't flush
> >   other processes' data.
> > * if the user runs "dd" on a block device, it is actually guaranteed
> >   that the data is flushed when "dd" exits.
>
> Why don't applications that want data to go to disk just call fsync
> instead of relying on being the last process to have had the device
> open?
>
> Cheers,
> Jeff
Because the user may forget to specify "conv=fsync" on the dd command
line.

Anyway, when using dd to copy partitions, it should either always flush
buffers on exit or never do it. The current behavior, where dd usually
flushes buffers but with very low probability doesn't (if it races with
lvm or udev), is confusing.
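
For comparison, this is what an application has to do to get the
guarantee itself - a minimal, untested sketch of the write + fsync +
close sequence (the device path is just a placeholder):

/* Write to a block device and flush explicitly before close -
 * effectively what "conv=fsync" adds to dd's exit path.
 * /dev/sdb1 is only an example device. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/dev/sdb1", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(buf, 0, sizeof(buf));
	if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
		perror("write");
		return 1;
	}

	/* Without this fsync the data sits in the page cache, and
	 * whether close flushes it depends on who else has the
	 * device open. */
	if (fsync(fd) < 0) {
		perror("fsync");
		return 1;
	}

	close(fd);
	return 0;
}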
If the admin sees dd flush buffers on block devices in all of his
trials, he assumes that dd always flushes buffers on block devices. He
doesn't know that there is a tiny race condition that can make dd skip
the flush.
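
To make the race concrete, here is a hypothetical, untested reproducer
sketch. It assumes the current (pre-patch) behavior that only the last
close of the device flushes the cache; the child stands in for lvm/udev
and /dev/sdb1 is again just a placeholder:

/* The child holds the device open while the parent (playing dd
 * without conv=fsync) does a buffered write and closes. The
 * parent's close is then not the last close, so with the old
 * semantics nothing is flushed when the "dd" side exits; with
 * the patch, the writer's close flushes its own dirty pages. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd;
	pid_t pid = fork();

	if (pid == 0) {
		/* "udev": keep the device open for a while */
		fd = open("/dev/sdb1", O_RDONLY);
		sleep(5);
		if (fd >= 0)
			close(fd); /* old semantics: the flush happens here */
		_exit(0);
	}

	sleep(1); /* let the child open the device first */

	/* "dd": buffered write, then close without fsync */
	fd = open("/dev/sdb1", O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 0, sizeof(buf));
	if (write(fd, buf, sizeof(buf)) != sizeof(buf))
		perror("write");
	close(fd); /* not the last close - old semantics do not flush */

	waitpid(pid, NULL, 0);
	return 0;
}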
Mikulas