Date:   Mon, 9 Oct 2017 12:14:25 -0400
From:   Waiman Long <longman@...hat.com>
To:     Jan Kara <jack@...e.cz>
Cc:     Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.com>,
        Jeff Layton <jlayton@...chiereds.net>,
        "J. Bruce Fields" <bfields@...ldses.org>,
        Tejun Heo <tj@...nel.org>,
        Christoph Lameter <cl@...ux-foundation.org>,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andi Kleen <andi@...stfloor.org>,
        Dave Chinner <dchinner@...hat.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v7 4/6] lib/dlock-list: Make sibling CPUs share the same
 linked list

On 10/09/2017 11:40 AM, Jan Kara wrote:
> On Thu 05-10-17 14:43:26, Waiman Long wrote:
>> The dlock list needs one list for each of the available CPUs. However,
>> sibling CPUs share the L2 and probably the L1 caches too. As a result,
>> there is not much to gain in terms of avoiding cacheline contention,
>> while the cacheline footprint of the L1/L2 caches grows because the
>> separate per-CPU lists may all need to be cached.
>>
>> This patch makes all the sibling CPUs share the same list, thus
>> reducing the number of lists that need to be maintained in each
>> dlock list without having any noticeable impact on performance. It
>> also improves dlock list iteration performance as fewer lists need
>> to be iterated.
>>
>> Signed-off-by: Waiman Long <longman@...hat.com>
> ...
>
>> @@ -118,7 +156,7 @@ bool dlock_lists_empty(struct dlock_list_heads *dlist)
>>  {
>>  	int idx;
>>  
>> -	for (idx = 0; idx < nr_cpu_ids; idx++)
>> +	for (idx = 0; idx < nr_dlock_lists; idx++)
>>  		if (!list_empty(&dlist->heads[idx].list))
>>  			return false;
>>  	return true;
>> @@ -207,7 +245,7 @@ struct dlock_list_node *__dlock_list_next_list(struct dlock_list_iter *iter)
>>  	/*
>>  	 * Try next list
>>  	 */
>> -	if (++iter->index >= nr_cpu_ids)
>> +	if (++iter->index >= nr_dlock_lists)
>>  		return NULL;	/* All the entries iterated */
>>  
>>  	if (list_empty(&iter->head[iter->index].list))
> Why don't these two need the same treatment as alloc_dlist_heads()?
>
> 								Honza
>
I am aware that there is one vfs superblock allocation before
nr_dlock_lists is initialized. However, I don't believe a list emptiness
test or an iteration will happen that early in the boot sequence. Maybe
I should put a BUG_ON(!nr_dlock_lists) test in those functions just to
be sure.
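
Something along these lines, for example (just a sketch of the
emptiness-test case; the iterator in __dlock_list_next_list() would get
the same check):

	/* in lib/dlock-list.c */
	bool dlock_lists_empty(struct dlock_list_heads *dlist)
	{
		int idx;

		/* Catch any caller running before nr_dlock_lists is set up */
		BUG_ON(!nr_dlock_lists);

		for (idx = 0; idx < nr_dlock_lists; idx++)
			if (!list_empty(&dlist->heads[idx].list))
				return false;
		return true;
	}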

Cheers,
Longman
