Date:   Thu, 22 Jun 2017 20:49:29 +0300
From:   Vladimir Davydov <vdavydov.dev@...il.com>
To:     Sahitya Tummala <stummala@...eaurora.org>
Cc:     Alexander Polakov <apolyakov@...et.ru>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, viro@...iv.linux.org.uk,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v2] fs/dcache.c: fix spin lockup issue on nlru->lock

On Thu, Jun 22, 2017 at 10:01:39PM +0530, Sahitya Tummala wrote:
> 
> 
> On 6/21/2017 10:01 PM, Vladimir Davydov wrote:
> >
> >>index cddf397..c8ca150 100644
> >>--- a/fs/dcache.c
> >>+++ b/fs/dcache.c
> >>@@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
> >>  		LIST_HEAD(dispose);
> >>  		freed = list_lru_walk(&sb->s_dentry_lru,
> >>-			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> >>+			dentry_lru_isolate_shrink, &dispose, 1024);
> >>  		this_cpu_sub(nr_dentry_unused, freed);
> >>  		shrink_dentry_list(&dispose);
> >>+		cond_resched();
> >>  	} while (freed > 0);
> >In an extreme case, a single invocation of list_lru_walk() can skip all
> >1024 dentries, in which case 'freed' will be 0, forcing us to break the
> >loop prematurely. I think we should loop until there's at least one
> >dentry left on the LRU, i.e.
> >
> >	while (list_lru_count(&sb->s_dentry_lru) > 0)
> >
> >However, even that wouldn't be quite correct, because list_lru_count()
> >iterates over all memory cgroups to sum list_lru_one->nr_items, which
> >can race with memcg offlining code migrating dentries off a dead cgroup
> >(see memcg_drain_all_list_lrus()). So it looks like to make this check
> >race-free, we need to account the number of entries on the LRU not only
> >per memcg, but also per node, i.e. add list_lru_node->nr_items.
> >Fortunately, list_lru entries can't be migrated between NUMA nodes.
> It looks like list_lru_count() iterates per node before iterating over
> all memory cgroups, as below:
> 
> unsigned long list_lru_count_node(struct list_lru *lru, int nid)
> {
>         long count = 0;
>         int memcg_idx;
> 
>         count += __list_lru_count_one(lru, nid, -1);
>         if (list_lru_memcg_aware(lru)) {
>                 for_each_memcg_cache_index(memcg_idx)
>                         count += __list_lru_count_one(lru, nid, memcg_idx);
>         }
>         return count;
> }
> 
> The first call to __list_lru_count_one() iterates over all the items per
> node, i.e. nlru->lru->nr_items.

lru->node[nid].lru.nr_items returned by __list_lru_count_one(lru, nid, -1)
only counts items accounted to the root cgroup, not the total number of
entries on the node.
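
For reference, the memcg_idx -> lru mapping is roughly this (from memory,
so it may not match your tree exactly):

static inline struct list_lru_one *
list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
{
	/* a non-negative index selects one per-memcg lru */
	if (nlru->memcg_lrus && idx >= 0)
		return nlru->memcg_lrus->lru[idx];
	/* -1 falls back to the root (global) lru of the node */
	return &nlru->lru;
}

IOW the per-node total exists only as the sum over all these lists;
there is no single counter we could read atomically.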

> Is my understanding correct? If not, could you please clarify on how to get
> the lru items per node?

What I mean is that iterating over list_lru_node->memcg_lrus to count the
number of entries on the node is racy. For example, suppose you have
three cgroups with the following values of list_lru_one->nr_items:

  0   0   10

While list_lru_count_node() is at #1, cgroup #2 is offlined and its
list_lru_one is drained, i.e. its entries are migrated to the parent
cgroup, which happens to be #0, so we see the following picture:

 10   0   0

     ^^^
  memcg_idx points here in list_lru_count_node()

Then the count returned by list_lru_count_node() will be 0, although
there are still 10 entries on the list.
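
The drain is done with nlru->lock held, roughly like this (again from
memory, so the details may differ):

static void memcg_drain_list_lru_node(struct list_lru_node *nlru,
				      int src_idx, int dst_idx)
{
	struct list_lru_one *src, *dst;

	spin_lock(&nlru->lock);
	src = list_lru_from_memcg_idx(nlru, src_idx);
	dst = list_lru_from_memcg_idx(nlru, dst_idx);

	/* all entries of the dead memcg move to its parent in one go */
	list_splice_init(&src->list, &dst->list);
	dst->nr_items += src->nr_items;
	src->nr_items = 0;
	spin_unlock(&nlru->lock);
}

The problem is that list_lru_count_node() takes and drops nlru->lock for
each memcg index, so the whole splice can slip in between two of its
reads: it samples dst before the splice and src after it, and the 10
entries are counted by neither.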

To avoid this race, we could keep list_lru_node->lock locked while
walking over list_lru_node->memcg_lrus, but that's too heavy. I'd prefer
adding list_lru_node->nr_items, which would be equal to the total number
of list_lru entries on the node, i.e. the sum of list_lru_node->lru.nr_items
and list_lru_node->memcg_lrus->lru[]->nr_items.
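
IOW, something like this (completely untested, names are tentative):

struct list_lru_node {
	/* protects all lists on the node, including per cgroup */
	spinlock_t		lock;
	/* global list, used when the memcg-aware version is disabled */
	struct list_lru_one	lru;
#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
	struct list_lru_memcg	*memcg_lrus;
#endif
	long			nr_items;	/* total entries on this node */
} ____cacheline_aligned_in_smp;

with the new counter updated next to the per-lru one, always under
nlru->lock, e.g. in list_lru_add():

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		l = list_lru_from_kmem(nlru, item);
		list_add_tail(item, &l->list);
		l->nr_items++;
		nlru->nr_items++;	/* new: per-node total */
		spin_unlock(&nlru->lock);
		return true;
	}

and decremented in the same fashion on every removal path.
memcg_drain_list_lru_node() wouldn't need to touch it at all, since
draining only moves entries within one node. list_lru_count_node()
would then simply return nlru->nr_items, and the loop in
shrink_dcache_sb() could safely become

	do {
		...
		shrink_dentry_list(&dispose);
		cond_resched();
	} while (list_lru_count(&sb->s_dentry_lru) > 0);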
