Date:   Thu, 22 Jun 2017 22:01:39 +0530
From:   Sahitya Tummala <stummala@...eaurora.org>
To:     Vladimir Davydov <vdavydov.dev@...il.com>
Cc:     Alexander Polakov <apolyakov@...et.ru>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, viro@...iv.linux.org.uk,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v2] fs/dcache.c: fix spin lockup issue on nlru->lock



On 6/21/2017 10:01 PM, Vladimir Davydov wrote:
>
>> index cddf397..c8ca150 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
>>   		LIST_HEAD(dispose);
>>   
>>   		freed = list_lru_walk(&sb->s_dentry_lru,
>> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
>> +			dentry_lru_isolate_shrink, &dispose, 1024);
>>   
>>   		this_cpu_sub(nr_dentry_unused, freed);
>>   		shrink_dentry_list(&dispose);
>> +		cond_resched();
>>   	} while (freed > 0);
> In an extreme case, a single invocation of list_lru_walk() can skip all
> 1024 dentries, in which case 'freed' will be 0 forcing us to break the
> loop prematurely. I think we should loop until there's at least one
> dentry left on the LRU, i.e.
>
> 	while (list_lru_count(&sb->s_dentry_lru) > 0)
>
> However, even that wouldn't be quite correct, because list_lru_count()
> iterates over all memory cgroups to sum list_lru_one->nr_items, which
> can race with memcg offlining code migrating dentries off a dead cgroup
> (see memcg_drain_all_list_lrus()). So it looks like to make this check
> race-free, we need to account the number of entries on the LRU not only
> per memcg, but also per node, i.e. add list_lru_node->nr_items.
> Fortunately, list_lru entries can't be migrated between NUMA nodes.
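
Just to confirm I have the first part right: the loop in shrink_dcache_sb()
would then terminate on the LRU count instead of on 'freed', i.e. something
like the below (untested):

        do {
                LIST_HEAD(dispose);

                freed = list_lru_walk(&sb->s_dentry_lru,
                        dentry_lru_isolate_shrink, &dispose, 1024);

                this_cpu_sub(nr_dentry_unused, freed);
                shrink_dentry_list(&dispose);
                cond_resched();
        } while (list_lru_count(&sb->s_dentry_lru) > 0);

On the per-node accounting part, though, I have a question.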
It looks like list_lru_count() iterates per node before iterating over all
the memory cgroups, as below:

unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
         long count = 0;
         int memcg_idx;

         count += __list_lru_count_one(lru, nid, -1);
         if (list_lru_memcg_aware(lru)) {
                 for_each_memcg_cache_index(memcg_idx)
                         count += __list_lru_count_one(lru, nid, memcg_idx);
         }
         return count;
}

The first call to __list_lru_count_one() counts all the items on that node,
i.e. nlru->lru.nr_items.
Is my understanding correct? If not, could you please clarify how to get the
LRU items per node?
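
Or do you mean adding a counter directly to struct list_lru_node, updated in
list_lru_add()/list_lru_del() under nlru->lock? An untested sketch of what I
have in mind (the nr_items field here is just illustrative):

struct list_lru_node {
        /* protects all lists on the node, including per cgroup */
        spinlock_t              lock;
        /* global list, used for the root cgroup in cgroup aware lrus */
        struct list_lru_one     lru;
#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
        /* for cgroup aware lrus points to per cgroup lists, otherwise NULL */
        struct list_lru_memcg   *memcg_lrus;
#endif
        long                    nr_items;       /* total entries on this node */
} ____cacheline_aligned_in_smp;

unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
        struct list_lru_node *nlru = &lru->node[nid];

        /* no memcg iteration, so no race with memcg offlining */
        return nlru->nr_items;
}

where nr_items would be incremented in list_lru_add() and decremented in
list_lru_del() while nlru->lock is held.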

-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
