Date:	Mon, 21 Apr 2008 12:51:58 +0100
From:	Jamie Lokier <jamie@...reable.org>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Theodore Tso <tytso@....edu>, Eric Sandeen <sandeen@...hat.com>,
	Alexey Zaytsev <alexey.zaytsev@...il.com>,
	linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Rik van Riel <riel@...riel.com>
Subject: Re: Mentor for a GSoC application wanted (Online ext2/3 filesystem checker)

Andi Kleen wrote:
> On Mon, Apr 21, 2008 at 12:42:42AM +0100, Jamie Lokier wrote:
> > Andi Kleen wrote:
> > > [LVM] always disables barriers if you don't apply a so far unmerged
> > > patch that enables them in some special circumstances (only single
> > > backing device)
> > 
> > (I continue to be surprised at the un-safety of Linux fsync)
> 
> Note that barrierless does not necessarily mean unsafe fsync;
> it just often means that.
> 
> Also, perhaps surprisingly, a lot more syncs or running with the write cache
> off tend to lower the MTBF of your disk significantly, so "unsafer" fsync
> might actually be safer for your unbacked-up data.

That's really interesting, thanks.  Do you have something to cite
about syncs reducing the MTBF?

(I'm really glad I added barriers, rather than turning the write cache
off, on my 2.4.26-based disk-using devices now ;-))

> > > Not having barriers sometimes makes your workloads faster (and less
> > > safe) and in other cases slower.
> > 
> > I'm curious, how does it make them slower?  Merely not issuing barrier
> > calls seems like it will always be the same speed or faster.
> 
> Some setups detect the no-barrier case and switch to full sync +
> wait (or write cache off), which, depending on whether the disk
> supports NCQ, can be slower.

But full syncs are implemented as barrier calls in the block request
layer, aren't they?  The filesystem isn't given any facility to ask the
block device for full syncs or to disable the write cache directly.

So when a block device doesn't offer barriers to the filesystem, that
means the driver doesn't support full syncs or cache disabling either;
if it did, the request layer would expose them to the fs as barriers.

What am I missing from this picture?  Do you mean that manual setup
(such as by a DBA) tends to disable the write cache?

Thanks,
-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
