Date:	Mon, 02 Jun 2014 13:07:26 -0700
From:	Daniel Phillips <daniel@...nq.net>
To:	Dave Chinner <david@...morbit.com>
CC:	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Subject: Re: [RFC][PATCH 2/2] tux3: Use writeback hook to remove duplicated
 core code


On 06/01/2014 08:30 PM, Dave Chinner wrote:
> I get very worried whenever I see locks inside inode->i_lock. In
> general, i_lock is supposed to be the innermost lock that is taken,
> and there are very few exceptions to that - the inode LRU list is
> one of the few.

I generally trust Hirofumi to keep our locking sane, but please point out
any specific issue you see. We are well aware of the need to get out of
critical sections quickly, as tux3_clear_dirty_inode_nolock shows; hogging
i_lock would mainly hurt our own benchmarks.
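To make that concrete, here is a minimal sketch of the pattern I mean
(simplified, not the actual tux3 source; the structures, fields and helper
names below are stand-ins for illustration): with the outer locks already
held by the caller, only the list unhook and the flag clear happen inside
the critical section.

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Stand-in structures; the real tux3 definitions differ. */
struct tux3_inode {
        spinlock_t lock;                /* protects tux3 per-inode state */
        struct list_head dirty_list;    /* link on the per-sb dirty list */
};

struct tux3_sb_info {
        spinlock_t dirty_inodes_lock;   /* protects the dirty_inodes list */
        struct list_head dirty_inodes;  /* all dirty tux3 inodes */
};

static void example_clear_dirty_inode_nolock(struct inode *inode,
                                             struct tux3_inode *tuxnode,
                                             struct tux3_sb_info *sbi)
{
        /* Caller already holds inode->i_lock and tuxnode->lock. */
        spin_lock(&sbi->dirty_inodes_lock);     /* global lock, held briefly */
        list_del_init(&tuxnode->dirty_list);    /* unhook from the dirty list */
        spin_unlock(&sbi->dirty_inodes_lock);

        /* Clear VFS dirty bits; nothing expensive runs under the locks. */
        inode->i_state &= ~(I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES);
}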

For what it is worth, the proposed writeback API improves our SMP situation
relative to other filesystems by moving the tux3_clear_dirty_inode_nolock
call outside the wb list lock.
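Very roughly, the ordering that buys us is shown below; the hook name and
its signature are simplified stand-ins, not the exact API in the patch:

#include <linux/fs.h>
#include <linux/writeback.h>
#include <linux/backing-dev.h>

/* Stand-in for the filesystem-side hook; the real prototype differs. */
extern long tux3_writeback(struct super_block *sb,
                           struct writeback_control *wbc);

static long writeback_sb_via_hook(struct super_block *sb,
                                  struct bdi_writeback *wb,
                                  struct writeback_control *wbc)
{
        long written;

        spin_lock(&wb->list_lock);
        /* ... core writeback bookkeeping on the wb lists happens here ... */
        spin_unlock(&wb->list_lock);

        /*
         * The filesystem writes inodes back and clears their dirty state
         * here, with wb->list_lock no longer held.
         */
        written = tux3_writeback(sb, wbc);
        return written;
}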

> I don't know what the tuxnode->lock is, but I found this:
>
>  *     inode->i_lock
>  *         tuxnode->lock (to protect tuxnode data)
>  *             tuxnode->dirty_inodes_lock (for i_ddc->dirty_inodes,
>  *                                         Note: timestamp can be updated
>  *                                         outside inode->i_mutex)
>
> and this:
>
>  *     inode->i_lock
>  *         tuxnode->lock
>  *         sb->dirty_inodes_lock
>
> Which indicates that you take a filesystem global lock a couple of
> layers underneath the VFS per-inode i_lock. I'd suggest you want to
> separate the use of the vfs inode i_lock from the locking hierarchy
> of the tux3 inode....
>

Our nested locks keep VFS state and Tux3 state synchronized, which is not
optional. The alternative would be to rely on i_lock alone for both, which
would increase contention.

The sb->dirty_inodes_lock is held only briefly, as you can see in
tux3_dirty_inode and tux3_clear_dirty_inode_nolock. If it ever shows up in
a profile, we can break it up.
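For reference, the nesting quoted above, with the global lock held only for
the list insertion, looks roughly like this (again a simplified sketch using
the stand-in structures from the earlier example, not the real tux3
functions):

static void example_dirty_inode(struct inode *inode,
                                struct tux3_inode *tuxnode,
                                struct tux3_sb_info *sbi)
{
        spin_lock(&inode->i_lock);      /* VFS per-inode lock, outermost */
        spin_lock(&tuxnode->lock);      /* tux3 per-inode state */

        if (list_empty(&tuxnode->dirty_list)) {
                spin_lock(&sbi->dirty_inodes_lock);     /* global, held briefly */
                list_add_tail(&tuxnode->dirty_list, &sbi->dirty_inodes);
                spin_unlock(&sbi->dirty_inodes_lock);
        }
        inode->i_state |= I_DIRTY_SYNC; /* mark the VFS inode dirty */

        spin_unlock(&tuxnode->lock);
        spin_unlock(&inode->i_lock);
}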

Regards,

Daniel