Date:	Wed, 28 Jan 2009 18:09:26 -0800
From:	Mike Waychison <mikew@...gle.com>
To:	Mike Waychison <mikew@...gle.com>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v1 0/8] Deferred dput() and iput() -- reducing lock contention

Dave Chinner wrote:
> On Fri, Jan 16, 2009 at 06:29:36PM -0800, Mike Waychison wrote:
>> We've noticed that at times it can become very easy to have a system begin to
>> livelock on dcache_lock/inode_lock (specifically in atomic_dec_and_lock()) when
>> a lot of dentries are getting finalized at the same time (massive delete and
>> large fdtable destructions are two paths I've seen cause problems).
>>
>> This patchset is an attempt to try and reduce the locking overheads associated
>> with final dput() and final iput().  This is done by batching dentries and
>> inodes into per-process queues and processing them in 'parallel' to consolidate
>> some of the locking.
> 
> Hmmmm. This deferring of dput/iput will have the same class of
> effects on filesystems as the recently reverted changes to make
> generic_delete_inode() an asynchronous process. That is, it
> temporally separates the transaction for namespace deletion (i.e.
> unlink) from the transaction that completes the inode deletion that
> occurs, typically, during ->clear_inode. See the recent thread
> titled:
> 
> [PATCH] async: Don't call async_synchronize_full_special() while holding sb_lock
> 
> for more details.
> 
> I suspect that change is likely to cause worse problems than the
> async changes in that it doesn't have a cap on the number of
> deferred operations.....
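
For anyone joining the thread late: the deferral under discussion boils
down to something like the userspace analogue below.  Illustrative
sketch only -- not the patch code (the real series queues dentries and
inodes on per-process lists and works against dcache_lock/inode_lock);
all names here are made up.  A cap like BATCH_MAX is also exactly the
bound Dave notes the posted series is missing:

/*
 * Userspace analogue of the batching idea (illustrative only).
 * Instead of taking the big lock once per object -- which is what
 * atomic_dec_and_lock() on dcache_lock does for every final dput() --
 * final puts are queued per-thread and the lock is taken once per
 * batch.
 */
#include <pthread.h>
#include <stddef.h>

#define BATCH_MAX 64            /* cap on deferred operations */

struct obj {
	struct obj *next;
	int refcount;           /* simplified: protected by big_lock */
};

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Per-thread deferral queue, standing in for the per-process queues. */
static __thread struct obj *deferred;
static __thread int ndeferred;

static void finalize(struct obj *o)
{
	/* stand-in for the real teardown (d_free(), clear_inode(), ...) */
	(void)o;
}

/* Drain the whole batch under a single lock acquisition. */
static void drain_deferred(void)
{
	struct obj *o, *next;

	pthread_mutex_lock(&big_lock);
	for (o = deferred; o; o = next) {
		next = o->next;
		if (--o->refcount == 0)
			finalize(o);
	}
	pthread_mutex_unlock(&big_lock);
	deferred = NULL;
	ndeferred = 0;
}

/* "Final put": queue the object instead of taking big_lock right away. */
static void put_obj(struct obj *o)
{
	o->next = deferred;
	deferred = o;
	if (++ndeferred >= BATCH_MAX)
		drain_deferred();
}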

*sigh*.  So I did some testing with XFS today and you're right, 
separating the namespace operation from the actual file delete really 
slows things down.

I'll follow up with a v2 patchset with more precise details, but 
basically: I create 100K empty files per CPU on a single filesystem 
(8-core machine), then time how long it takes for one process per CPU 
to delete its own 100K-file hierarchy in parallel (rough sketch below).
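
Roughly, the driver looks like this (simplified sketch only -- the real
harness differs in details, and the counts/paths here are illustrative):

/*
 * One process per CPU creates NFILES empty files in its own directory,
 * then all processes delete their own hierarchy in parallel while the
 * parent times the delete phase.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCPUS  8
#define NFILES 100000

static void worker(int cpu, int delete_phase)
{
	char path[256];
	int i;

	for (i = 0; i < NFILES; i++) {
		snprintf(path, sizeof(path), "dir%d/f%d", cpu, i);
		if (delete_phase) {
			unlink(path);
		} else {
			int fd = open(path, O_CREAT | O_WRONLY, 0644);
			if (fd >= 0)
				close(fd);
		}
	}
}

static void run_phase(int delete_phase)
{
	int cpu;

	for (cpu = 0; cpu < NCPUS; cpu++) {
		if (fork() == 0) {
			worker(cpu, delete_phase);
			_exit(0);
		}
	}
	for (cpu = 0; cpu < NCPUS; cpu++)
		wait(NULL);
}

int main(void)
{
	struct timeval t0, t1;
	char dir[64];
	int cpu;

	for (cpu = 0; cpu < NCPUS; cpu++) {
		snprintf(dir, sizeof(dir), "dir%d", cpu);
		mkdir(dir, 0755);
	}

	run_phase(0);			/* create */

	gettimeofday(&t0, NULL);
	run_phase(1);			/* parallel delete -- the timed part */
	gettimeofday(&t1, NULL);

	printf("delete: %.1f sec\n",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
	return 0;
}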

XFS performs horribly on this test, taking approx 34 minutes to 
complete.  With my changes, it gets worse: approx 57 minutes.  For 
comparison, ext2 takes ~4.5 seconds, ext4 ~30 seconds, and ext4 without 
a journal ~4.1 seconds.

Seems to me like there is a design issue causing XFS to suck at mass 
deletes, and I can understand why you wouldn't want to separate the 
namespace+inode drop.

What say you to letting the fs itself specify whether its inodes should 
be handled serially (as they are today) or in deferred batches?  Patch 4 
of this series introduced an I_SYNCIPUT flag to deal with umount issues; 
perhaps it would make sense to have XFS tag its inodes with that flag, 
along the lines of the sketch below.
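
Something like this in the deferred-put path -- hypothetical sketch,
not code from the series; only I_SYNCIPUT itself comes from patch 4,
and the queue/field names below are made up:

/*
 * Hypothetical: reuse I_SYNCIPUT to let a filesystem opt its inodes
 * out of deferral entirely.  i_deferred and deferred_iputs are
 * made-up names for illustration.
 */
static void queue_final_iput(struct inode *inode)
{
	if (inode->i_state & I_SYNCIPUT) {
		/* fs wants today's synchronous behaviour */
		iput(inode);
		return;
	}
	/* otherwise defer to the per-process batch */
	list_add_tail(&inode->i_deferred, &current->deferred_iputs);
}

XFS would then tag inodes when it sets them up, e.g. somewhere in its
->alloc_inode path (again, placement is made up):

	inode->i_state |= I_SYNCIPUT;	/* i_state changes need inode_lock */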

Mike Waychison
