Message-ID: <20101203091445.GK18195@tux1.beaverton.ibm.com>
Date: Fri, 3 Dec 2010 01:14:45 -0800
From: "Darrick J. Wong" <djwong@...ibm.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: "Theodore Ts'o" <tytso@....edu>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH] ext4: Set barrier=0 when block device does not
	advertise flush support

On Fri, Dec 03, 2010 at 02:09:50AM -0500, Christoph Hellwig wrote:
> On Thu, Dec 02, 2010 at 04:16:59PM -0800, Darrick J. Wong wrote:
> > If the user tries to enable write flushes with "barrier=1" and the underlying
> > block device does not support flushes, print a message and set barrier=0.
>
> That doesn't make any sense at all with the new FLUSH+FUA code, which is
> designed to make the cache flushing entirely transparent. Basically
> with the new code the barrier option should become a no-op and always
> enabled.
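
For context, determining whether the underlying device advertises flush
support boils down to something like the sketch below. This is not the actual
patch; it assumes the 2.6.37-era q->flush_flags / REQ_FLUSH interface, and the
helper name is made up for illustration:

#include <linux/blkdev.h>
#include <linux/fs.h>

/*
 * Sketch only: does the device backing this superblock advertise a
 * volatile write cache, i.e. did its driver register flush support?
 */
static bool demo_device_flushes(struct super_block *sb)
{
	struct request_queue *q = bdev_get_queue(sb->s_bdev);

	return q && (q->flush_flags & REQ_FLUSH);
}
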
Are we ready to remove the barrier= mount option entirely at this point? How
many users actually run with barrier=0 to speed up writes, knowing the safety
risk they're taking on?

I've noticed that provisioning goes faster if one mounts the filesystem with
barrier=0; so long as the control software turns barriers on after the deploy
finishes and always restarts the deploy after a failure, a power failure on the
client system won't cause problems. Unfortunately, there doesn't seem to be
any other way to communicate that relaxation to the code.

Personally I'd rather the knob remain in ext4 on the grounds that I know my
workloads and can judge the appropriate level of risk, especially since ext4
picks the safe option by default. However, I'd prefer /proc/mounts not
misrepresent the status of flush support, to the best of ext4's knowledge.
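
To illustrate, something along these lines in the ->show_options path would
keep /proc/mounts honest about what the device can actually do. Again, a rough
sketch and not ext4's real code; the demo_* names and the 2.6.37-era
show_options signature are assumptions:

#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/seq_file.h>

/* Hypothetical: did the user ask for barriers on this superblock? */
static bool demo_barrier_requested(struct super_block *sb)
{
	return true;	/* stand-in for testing the fs-private mount flag */
}

static int demo_show_options(struct seq_file *seq, struct vfsmount *vfs)
{
	struct super_block *sb = vfs->mnt_sb;
	struct request_queue *q = bdev_get_queue(sb->s_bdev);
	bool dev_flushes = q && (q->flush_flags & REQ_FLUSH);

	/* Only claim barrier=1 when the device can actually honor it. */
	seq_puts(seq, demo_barrier_requested(sb) && dev_flushes ?
		      ",barrier=1" : ",barrier=0");
	return 0;
}
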
--D