Date:	Wed, 12 Feb 2014 19:01:17 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@...com>
Cc:	Andi Kleen <andi@...stfloor.org>, T Makphaibulchoke <tmac@...com>,
	"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
	"tytso@....edu" <tytso@....edu>,
	"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"aswin@...com" <aswin@...com>
Subject: Re: [PATCH v4 0/3] ext4: increase mbcache scalability


On Feb 11, 2014, at 12:58 PM, Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@...com> wrote:
> On 01/24/2014 11:09 PM, Andreas Dilger wrote:
>> I think the ext4 block groups are locked with the blockgroup_lock that has about the same number of locks as the number of cores, with a max of 128, IIRC.  See blockgroup_lock.h. 
>> 
>> While there is some chance of contention, it is also unlikely that all of the cores are locking this area at the same time.  
>> 
>> Cheers, Andreas
>> 
> 
> Andreas, it looks like your assumption is correct.  On all 3 systems (80, 60, and 20 cores), I got almost identical aim7 results using either a smaller dedicated lock array or the block group lock.  I'm inclined to go with the block group lock, since it does not require any extra space.
> 
> One problem is that, in the current implementation, mbcache has no knowledge of the filesystem's super block, including its block group lock.  In my implementation I have to change the first argument of mb_cache_create() from char * to struct super_block * to be able to access the super block's block group lock.

Note that you don't have to use the ext4_sb_info->s_blockgroup_lock.
You can allocate and use a separate struct blockgroup_lock for mbcache
instead of allocating a spinlock array (and essentially reimplementing
the bgl_lock_*() code).  While it isn't a huge amount of duplication,
that code is already tuned for different SMP core configurations and
there is no reason NOT to use struct blockgroup_lock.
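
Something like the following would do it (completely untested sketch; the
field and function names c_bg_locks, mb_cache_init_locks() and
mb_cache_hash_lock() are just made up for illustration):

#include <linux/blockgroup_lock.h>
#include <linux/slab.h>

struct mb_cache {
	/* ... existing fields ... */
	struct blockgroup_lock *c_bg_locks;	/* hypothetical field */
};

static int mb_cache_init_locks(struct mb_cache *cache)
{
	cache->c_bg_locks = kmalloc(sizeof(*cache->c_bg_locks), GFP_KERNEL);
	if (!cache->c_bg_locks)
		return -ENOMEM;
	/* initializes all NR_BG_LOCKS spinlocks in the array */
	bgl_lock_init(cache->c_bg_locks);
	return 0;
}

static spinlock_t *mb_cache_hash_lock(struct mb_cache *cache,
				      unsigned int hash)
{
	/* bgl_lock_ptr() masks the hash down to one of the NR_BG_LOCKS locks */
	return bgl_lock_ptr(cache->c_bg_locks, hash);
}

That way mbcache gets the cacheline-aligned spinlock layout and the SMP
sizing for free, without knowing anything about the superblock.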

> This works with my proposed change to allocate an mb_cache for each mounted ext4 filesystem.  It would also require the same change (allocating an mb_cache for each mounted filesystem) in both ext2 and ext3, which would increase the scope of the patch.  The other alternative, allocating a new smaller spinlock array, would not require any change to either ext2 or ext3.
> 
> I'm working on resubmitting my patches using the block group locks and extending the changes to also cover ext2 and ext3.  With this approach, not only is no additional space required for a dedicated new spinlock array, but the e_bdev member of struct mb_cache_entry could also be removed, reducing the size of each mb_cache_entry.
> 
> Please let me know if you have any concern or suggestion.

I'm not against re-using the s_blockgroup_lock in the superblock, since
the chance of contention on this lock between threads is very small, as
there are normally 4x as many spinlocks as cores.
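
As far as I remember, the sizing in blockgroup_lock.h is roughly the
following (paraphrased from memory, so double-check the header):

#ifdef CONFIG_SMP
#if NR_CPUS >= 32
#define NR_BG_LOCKS	128
#elif NR_CPUS >= 16
#define NR_BG_LOCKS	64
#elif NR_CPUS >= 8
#define NR_BG_LOCKS	32
#elif NR_CPUS >= 4
#define NR_BG_LOCKS	16
#elif NR_CPUS >= 2
#define NR_BG_LOCKS	8
#else
#define NR_BG_LOCKS	4
#endif
#else /* !CONFIG_SMP */
#define NR_BG_LOCKS	1
#endif

so small configurations get 4x as many locks as CPUs, capped at 128 for
anything with 32 or more CPUs.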

You might consider starting with a dedicated struct blockgroup_lock in
the mbcache code, then move to use the in-superblock struct in a later
patch.  That would allow you to push and land the mbcache and ext4 patches
independently of the ext2 and ext3 patches (if they are big).  If the
ext2 and ext3 patches are relatively small then this extra complexity
in the patches may not be needed.
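
If it helps with that staging, the later switch could be as small as
letting the caller hand in an existing struct blockgroup_lock and falling
back to a private one when NULL is passed.  Very rough sketch (the extra
mb_cache_create() argument, the s_mb_cache field name, and the argument
values are only for illustration, not the current API):

/* mbcache: use the caller's locks if provided, else allocate our own */
struct mb_cache *mb_cache_create(const char *name, int bucket_bits,
				 struct blockgroup_lock *bgl);

/* ext4_fill_super(): reuse the locks already hanging off the superblock */
sbi->s_mb_cache = mb_cache_create("ext4_xattr", 6,
				  sbi->s_blockgroup_lock);

ext2 and ext3 could then keep passing NULL until their per-sb mb_cache
patches land.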

Cheers, Andreas