Message-ID: <5091B075.7030608@panasas.com>
Date: Wed, 31 Oct 2012 16:12:53 -0700
From: Boaz Harrosh <bharrosh@...asas.com>
To: "Darrick J. Wong" <darrick.wong@...cle.com>
CC: Jan Kara <jack@...e.cz>, NeilBrown <neilb@...e.de>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
"Theodore Ts'o" <tytso@....edu>,
linux-ext4 <linux-ext4@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [RFC PATCH 1/2] bdi: Create a flag to indicate that a backing
device needs stable page writes
On 10/31/2012 12:36 PM, Darrick J. Wong wrote:
> On Wed, Oct 31, 2012 at 12:56:14PM +0100, Jan Kara wrote:
<snip>
>> You are right that we need a mechanism to push the flags from the devices
>> through various storage layers up into the bdi filesystem sees to make
>> things reliable.
>
> md/dm will call blk_integrity_register, so pushing the "stable page writes
> required" flag through the various layers is already taken care of. If devices
> and filesystems can both indicate that they want stable page writes, I'll have
> to keep track of however many users there are.
>
Please note again the iscsi case. Say the admin defined half of an md device's
iscsi components with data_integrity and half without.
For my part I would like an OR: if any underlying device needs "stable pages",
they all get them.
Please also provide an API - or tell me how easy it would be to make one - for
the likes of iscsi that, given a request_queue, turns the BDI's "stable pages"
flag on. Something like:
/* stable pages can only be turned on, never off */
void blk_set_stable_pages(struct request_queue *q);
> It does seem like less work to fix all the filesystems than to dork around with
> another flag.
Sure, if that is possible, that would be perfect; then I would not need to keep
the old "unstable pages" code around at all.
Thanks for working on this.
Boaz
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html