Date:	Thu, 27 Jun 2013 09:11:43 +1000
From:	Dave Chinner <david@...morbit.com>
To:	OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Cc:	Al Viro <viro@...iv.linux.org.uk>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, tux3@...3.org
Subject: Re: [PATCH] Optimize wait_sb_inodes()

On Wed, Jun 26, 2013 at 05:45:23PM +0900, OGAWA Hirofumi wrote:
> Hi,
> 
> Under the following stress workload, sync(2) became the top CPU load.
> 
> 	fsstress -s 1 -n 50000 -p 3 -d `pwd`
> 
> Profiling with perf showed the CPU load was lock contention on
> inode_sb_list_lock.
> 
> sync(2) is a data integrity operation, so it has to make sure all
> dirty data is written out up to the point of the sync(2) call. The
> bdi flusher flushes the currently dirty data and waits for it. But
> if there is I/O already in flight before sync_inodes_sb() runs,
> sync_inodes_sb() can complete before that prior in-flight I/O does.
> 
> So wait_sb_inodes() walks all the inodes on the sb to make sure the
> in-flight I/O has completed too. While it walks the inodes,
> inode_sb_list_lock contends with other operations like create(2).
> This is the cause of the CPU load.

Sure, but avoiding wait_sb_inodes() doesn't address the fast-path
lock contention on inode_sb_list_lock in the create and evict paths.
The problems caused by wait_sb_inodes() are background noise compared
to the contention multiple CPUs can put on this lock just by walking
large directory structures in parallel. Hence hacking around
wait_sb_inodes() doesn't address the underlying lock contention
problem.
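
For reference, the walk in question has roughly this shape today
(a condensed paraphrase of fs/fs-writeback.c, not verbatim), and the
spinlock it takes is the same global one the create/evict paths need:

static void wait_sb_inodes_shape(struct super_block *sb)
{
	struct inode *inode, *old_inode = NULL;

	spin_lock(&inode_sb_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		struct address_space *mapping = inode->i_mapping;

		spin_lock(&inode->i_lock);
		if ((inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) ||
		    mapping->nrpages == 0) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		spin_unlock(&inode_sb_list_lock);

		/*
		 * The reference pins the inode on s_inodes while the list
		 * lock is dropped; the previous inode is iput() here
		 * because it must not be dropped under the list lock.
		 */
		iput(old_inode);
		old_inode = inode;

		/* Wait for I/O already in flight against this inode. */
		filemap_fdatawait(mapping);

		cond_resched();
		spin_lock(&inode_sb_list_lock);
	}
	spin_unlock(&inode_sb_list_lock);
	iput(old_inode);
}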

> From another point of view, wait_sb_inodes() would (arguably) be
> necessary for legacy FSes. But, for example, with data=journal on
> ext*, wait_sb_inodes() would be worse than useless, because ext* can
> do this via its own transaction list (and in a more efficient way).
> 
> Likewise, on tux3 the situation is the same as with data=journal.
> 
> Also, even with data=ordered, ext* might be able to check for
> in-flight I/O via its ordered data list (though that would need some
> new additional check; I'm not sure).

Why would you bother solving this problem differently in every
single filesystem? It's solvable at the VFS level by tracking inodes
that are no longer dirty but still under writeback on the BDI. Then
converting wait_sb_inodes() to walk all the dirty and writeback
inodes would be sufficient for data integrity purposes, and it would
be done under the bdi writeback lock, not the inode_sb_list_lock....
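
To sketch the tracking side of that (b_writeback and i_wb_done_list
are made-up names for illustration, not an actual patch; the hooks
would be called from the clean-inode and writeback-completion paths):

/* Inode cleaned, but pages still under writeback: park it per-bdi. */
static void sb_mark_inode_writeback_sketch(struct inode *inode)
{
	struct bdi_writeback *wb = &inode->i_sb->s_bdi->wb;

	spin_lock(&wb->list_lock);
	/* i_wb_done_list assumed initialised in inode_init_always(). */
	if (list_empty(&inode->i_wb_done_list))
		list_add_tail(&inode->i_wb_done_list, &wb->b_writeback);
	spin_unlock(&wb->list_lock);
}

/*
 * Last page writeback on the inode completed. A real implementation
 * would need to care about the locking context this is called from;
 * glossed over here.
 */
static void sb_clear_inode_writeback_sketch(struct inode *inode)
{
	struct bdi_writeback *wb = &inode->i_sb->s_bdi->wb;

	spin_lock(&wb->list_lock);
	list_del_init(&inode->i_wb_done_list);
	spin_unlock(&wb->list_lock);
}

wait_sb_inodes() would then do the same take-a-reference, drop the
lock, filemap_fdatawait() dance as above, but over wb->b_writeback
under wb->list_lock, so it only ever touches inodes that actually
have I/O outstanding.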

Alternatively, splitting up the inode sb list and lock (say via the
per-node list_lru structures in -mm and -next that are being added
for exactly this purpose) would also significantly reduce lock
contention on both the create/evict fast paths and the
wait_sb_inodes() walk that is currently done....
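
As a rough illustration only (s_inodes_llru is a made-up field
standing in for sb->s_inodes converted to a list_lru), the fast-path
helpers in fs/inode.c would end up something like:

#include <linux/list_lru.h>

void inode_sb_list_add(struct inode *inode)
{
	/* list_lru_add() hashes to the node the inode lives on. */
	list_lru_add(&inode->i_sb->s_inodes_llru, &inode->i_sb_list);
}

static void inode_sb_list_del(struct inode *inode)
{
	list_lru_del(&inode->i_sb->s_inodes_llru, &inode->i_sb_list);
}

wait_sb_inodes() would then walk the per-node lists via
list_lru_walk() instead of spinning on one global lock, and the
create/evict paths would only ever contend on their local node's
lock.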

So I think that you should address the problem properly at the VFS
level so everyone benefits, not push interfaces that allow
filesystem-specific hacks to work around VFS-level deficiencies...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com