Date:	Fri, 31 Aug 2007 09:33:30 -0500
From:	Eric Sandeen <sandeen@...deen.net>
To:	David Chinner <dgc@....com>
CC:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel Mailing List <linux-kernel@...r.kernel.org>,
	xfs-oss <xfs@....sgi.com>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] Increase lockdep MAX_LOCK_DEPTH

David Chinner wrote:
> On Fri, Aug 31, 2007 at 08:39:49AM +0200, Peter Zijlstra wrote:
>> On Thu, 2007-08-30 at 23:43 -0500, Eric Sandeen wrote:
>>> The xfs filesystem can exceed the current lockdep 
>>> MAX_LOCK_DEPTH, because when deleting an entire cluster of inodes,
>>> they all get locked in xfs_ifree_cluster().  The normal cluster
>>> size is 8192 bytes, and with the default (and minimum) inode size 
>>> of 256 bytes, that's up to 32 inodes that get locked.  Throw in a 
>>> few other locks along the way, and 40 seems enough to get me through
>>> all the tests in the xfsqa suite on 4k blocks.  (block sizes
>>> above 8K will still exceed this though, I think)
>> As 40 will still not be enough for people with larger block sizes, this
>> does not seems like a solid solution. Could XFS possibly batch in
>> smaller (fixed sized) chunks, or does that have significant down sides?
> 
> The problem is not filesystem block size; it's the xfs inode cluster buffer
> size and the size of the inodes that determine the lock depth. The common
> case is 8k/256 = 32 inodes in a buffer, and they all get locked during
> inode cluster writeback.

Right, but as I understand it, the cluster size *minimum* is the block
size; that's why I made reference to block size - 16k blocks would have
64 inodes per cluster, minimum, potentially all locked in these paths.
Just saying that today, larger blocks -> larger clusters -> more locks.
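
For illustration, the pattern under discussion looks roughly like this
(a simplified sketch, not the actual xfs_ifree_cluster() code; the type
and helper names here are made up):

        /*
         * Sketch of inode cluster teardown: every inode backed by one
         * cluster buffer gets locked before the buffer is written back,
         * so the held-lock depth scales with cluster_size / inode_size.
         */
        static void ifree_cluster_sketch(struct inode_cluster *ic)
        {
                int ninodes = ic->cluster_size / ic->inode_size;
                int i;                          /* 8192 / 256 = 32 today */

                for (i = 0; i < ninodes; i++)
                        lock_inode(ic->inodes[i]);  /* depth -> ninodes */

                writeback_cluster(ic);

                for (i = 0; i < ninodes; i++)
                        unlock_inode(ic->inodes[i]);
        }

With 16k blocks the same arithmetic gives 16384/256 = 64 locks held at
once, before counting anything else taken along the way.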

Even though MAX_LOCK_DEPTH of 40 may not accommodate these scenarios, at
least it would accommodate the most common case today...

Peter, unless there is some other reason to do so, changing xfs
performance behavior simply to satisfy lockdep limitations* doesn't seem
like the best plan.

I suppose one slightly flaky option would be for xfs to see whether
lockdep is enabled and adjust its cluster size based on MAX_LOCK_DEPTH, on
the argument that lockdep is likely used in debugging kernels where sheer
performance is less important... but that sounds pretty flaky to me.
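
If someone wanted to try it anyway, the shape would be something like the
following (just a sketch: CONFIG_LOCKDEP and MAX_LOCK_DEPTH are real
symbols, everything else here is hypothetical):

        #ifdef CONFIG_LOCKDEP
                /*
                 * Hypothetical: halve the cluster size until the number
                 * of inodes per cluster leaves some headroom under the
                 * lockdep held-lock limit, without going below one
                 * filesystem block.
                 */
                while (cluster_size > block_size &&
                       cluster_size / inode_size > MAX_LOCK_DEPTH - 8)
                        cluster_size >>= 1;
        #endif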

-Eric

*and I don't mean that in a pejorative sense; just the fact that some
max depth must be chosen - the literal "limitation."