Date:	Fri, 09 Jan 2009 10:09:29 -0500
From:	Chris Mason <chris.mason@...cle.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	Arjan van de Ven <arjan@...ux.intel.com>,
	Dave Kleikamp <shaggy@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Grissiom <chaos.proton@...il.com>, linux-kernel@...r.kernel.org,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH] async: Don't call async_synchronize_full_special()
 while holding sb_lock

On Fri, 2009-01-09 at 19:22 +1100, Dave Chinner wrote:
> >
> > the async code throttles at 32k outstanding.
> > Yes 32K is arbitrary, but if you delete a million files fast, all
> > but the first few thousand are synchronous.
> 
> No, after the first 32k you get a mix of synchronous and async as
> the async queue gets processed in parallel. This means you are
> screwing up the temporal locality of the inode unlinks, i.e. some
> unlinks are being done immediately, others are being delayed until
> another 32k inodes have been processed off the unlink list.
>
>
> If we've been careful about allocation locality during untar so
> that inodes that are allocated sequentially by untar will be
> deleted sequentially by rm -rf, then this new pattern will defeat
> those optimisations, because the unlink locality is broken by
> the some-sync/some-async behaviour that this queuing mechanism
> creates.
> 
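
To put rough numbers on the mix being described, here is a toy
user-space model of the current throttle.  The 32k cap is the figure
mentioned above; the drain rate and everything else are invented for
illustration, this is not the real async code.

/* Toy model: work is queued until CAP items are outstanding, and
 * anything submitted past the cap is done synchronously in the
 * caller.  Numbers are illustrative only. */
#include <stdio.h>

#define CAP        32768	/* "throttles at 32k outstanding" */
#define TOTAL      1000000	/* "delete a million files fast" */
#define DRAIN_RATE 4		/* assumed: one queued item retires per 4 submissions */

int main(void)
{
	long outstanding = 0, async_done = 0, sync_done = 0;

	for (long i = 0; i < TOTAL; i++) {
		if (outstanding < CAP) {
			outstanding++;		/* deferred to the async queue */
			async_done++;
		} else {
			sync_done++;		/* over the cap: caller does it inline */
		}
		if (i % DRAIN_RATE == 0 && outstanding > 0)
			outstanding--;		/* a background worker retires one entry */
	}

	printf("async: %ld  sync: %ld\n", async_done, sync_done);
	return 0;
}

In this toy model, once the cap is hit the submissions alternate
between the queue and the inline fallback, which is the interleaving
that splits up the unlink stream.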

You won't notice on ext3/4, but xfs, btrfs and probably jfs should go
slower with this.  Rather than doing the current item synchronously,
the throttle should wait for one of the entries on the queue to finish
and then put this inode on the queue in order; doing the current one
synchronously will generate random IO.
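
Concretely, something like the sketch below is what I have in mind.
It is only an illustration, user-space pthreads with invented names,
not a patch against the async code: when the queue is full the
submitter blocks until an entry retires instead of unlinking inline,
so every unlink still goes through the queue in the order it arrived.

#include <pthread.h>

#define CAP 32768

struct ordered_throttle {
	pthread_mutex_t lock;
	pthread_cond_t  space;		/* signalled when a queued entry finishes */
	long            outstanding;
};

/* Submitter side (e.g. the unlink path): if the queue is full, wait
 * for an entry to finish instead of doing this one inline, so the
 * work keeps flowing through the queue in submission order. */
void throttle_submit(struct ordered_throttle *t)
{
	pthread_mutex_lock(&t->lock);
	while (t->outstanding >= CAP)
		pthread_cond_wait(&t->space, &t->lock);
	t->outstanding++;
	pthread_mutex_unlock(&t->lock);
	/* ...hand the inode off to the async queue here... */
}

/* Worker side: called when a queued entry completes. */
void throttle_complete(struct ordered_throttle *t)
{
	pthread_mutex_lock(&t->lock);
	t->outstanding--;
	pthread_cond_signal(&t->space);
	pthread_mutex_unlock(&t->lock);
}

With rm -rf being a single submitter, that keeps the unlinks in queue
order instead of splitting them into sync and async groups.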

On XFS: create 256,000 4k files, drop caches, and then time
'find . | xargs rm'.

> Hmmmm - I suspect that rm -rf will also trigger lots of kernel
> threads to be created. We do not want 256 kernel threads to
> be spawned to bang on the serialised regions of filesystem
> journals (generating lock contention) when one thread would
> be sufficient....

This could be good and bad: more workers mean more parallelism in the
parts of the unlink that don't touch the journal, but they all end up
piling onto the journal's serialised regions anyway.
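
If it turns out to be mostly bad, the "one thread would be
sufficient" option could look roughly like this.  Again a user-space
sketch with invented names, not kernel code: one worker per
filesystem drains a FIFO of deferred unlinks, so the journal only
ever sees a single async submitter no matter how many unlinks are
queued.

#include <pthread.h>
#include <stdlib.h>

struct unlink_work {
	struct unlink_work *next;
	/* ...a reference to the inode would go here... */
};

struct fs_unlink_queue {
	pthread_mutex_t     lock;
	pthread_cond_t      more;
	struct unlink_work *head, *tail;
};

/* Producer: the unlink path queues work and returns immediately. */
void fs_unlink_queue_add(struct fs_unlink_queue *q, struct unlink_work *w)
{
	w->next = NULL;
	pthread_mutex_lock(&q->lock);
	if (q->tail)
		q->tail->next = w;
	else
		q->head = w;
	q->tail = w;
	pthread_cond_signal(&q->more);
	pthread_mutex_unlock(&q->lock);
}

/* The single worker thread for one filesystem: drains the FIFO so the
 * journal sees one async submitter, in the order the unlinks arrived. */
void *fs_unlink_worker(void *arg)
{
	struct fs_unlink_queue *q = arg;

	for (;;) {
		struct unlink_work *w;

		pthread_mutex_lock(&q->lock);
		while (!q->head)
			pthread_cond_wait(&q->more, &q->lock);
		w = q->head;
		q->head = w->next;
		if (!q->head)
			q->tail = NULL;
		pthread_mutex_unlock(&q->lock);

		/* ...do the journalled unlink for w's inode... */
		free(w);
	}
	return NULL;
}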

-chris


