Message-ID: <20201130230911.GB1375014@carbon.DHCP.thefacebook.com>
Date: Mon, 30 Nov 2020 15:09:11 -0800
From: Roman Gushchin <guro@...com>
To: Yang Shi <shy828301@...il.com>
CC: Vladimir Davydov <vdavydov.dev@...il.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: list_lru: hold nlru lock to avoid reading transient
negative nr_items
On Mon, Nov 30, 2020 at 02:57:23PM -0800, Yang Shi wrote:
> On Mon, Nov 30, 2020 at 2:53 PM Roman Gushchin <guro@...com> wrote:
> >
> > On Mon, Nov 30, 2020 at 12:57:47PM -0800, Yang Shi wrote:
> > > On Mon, Nov 30, 2020 at 12:09 PM Roman Gushchin <guro@...com> wrote:
> > > >
> > > > On Mon, Nov 30, 2020 at 10:45:14AM -0800, Yang Shi wrote:
> > > > > When investigating a slab cache bloat problem, a significant amount of
> > > > > negative dentry cache was seen, but confusingly it was neither shrunk
> > > > > by the reclaimer (the host was under very tight memory pressure) nor
> > > > > by dropping caches. The vmcore shows there are over 14M negative dentry
> > > > > objects on the lru, but tracing shows they were not scanned at all.
> > > > > Further investigation shows the memcg's vfs shrinker_map bit is not
> > > > > set, so the reclaimer and cache dropping simply skip calling the vfs
> > > > > shrinker. We had to reboot the hosts to get the memory back.
> > > > >
> > > > > I didn't manage to come up with a reproducer in a test environment,
> > > > > and the problem can't be reproduced after rebooting. But code
> > > > > inspection suggests there is a race between clearing the shrinker map
> > > > > bit and reparenting. The hypothesis is elaborated below.
> > > > >
> > > > > The memcg hierarchy in our production environment looks like:
> > > > >
> > > > >        root
> > > > >       /    \
> > > > >  system    user
> > > > >
> > > > > The main workloads run under the user slice's children, and memcgs
> > > > > are created and removed there frequently. So reparenting happens very
> > > > > often under the user slice, but no task is under the user slice
> > > > > directly.
> > > > >
> > > > > So with the frequent reparenting and tight memory pressure, the
> > > > > following hypothetical race condition may happen:
> > > > >
> > > > > CPU A                           CPU B                   CPU C
> > > > > reparent
> > > > >   dst->nr_items == 0
> > > > >                                 shrinker:
> > > > >                                   total_objects == 0
> > > > >   add src->nr_items to dst
> > > > >   set_bit
> > > > >                                   return SHRINK_EMPTY
> > > > >                                   clear_bit
> > > > >                                                         list_lru_del()
> > > > > reparent again
> > > > >   dst->nr_items may go
> > > > >   negative due to the
> > > > >   concurrent list_lru_del()
> > > > >   on CPU C
> > > > >                                 second shrinker run:
> > > > >                                   read nr_items without any
> > > > >                                   synchronization, so it may
> > > > >                                   see an intermediate negative
> > > > >                                   nr_items, and total_objects
> > > > >                                   may coincidentally return 0
> > > > >
> > > > >                                   keep the bit cleared
> > > > >   dst->nr_items != 0
> > > > >   skip set_bit
> > > > >   add src->nr_items to dst
> >
> > Btw, I think I have a simpler explanation:
> >
> > A (0 objects)
> > |
> > B (N objects)
> >
> > Let's say the reparenting races with the deletion of a single slab object.
> > list_lru_del() can see the parent's lru list and subtract 1 from
> > nr_items == 0, setting A's nr_items to -1 (the object is actually still
> > on B's list).
> >
> > memcg_drain_list_lru_node() will then evaluate
> > !dst->nr_items && src->nr_items as !(-1) && N => 0 and not set the bit.
> > But now we have (N-1) objects on A's list and the shrinker bit not set.
>
> Yes, this is the exact race I elaborated in the commit log.
Yes, it's the same problem for sure. I just think that a model which doesn't
require actually stepping through the shrinker code to reproduce it mentally
is a bit easier to follow.
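
For reference, the racy check sits in memcg_drain_list_lru_node(). Roughly
(a simplified sketch from memory, not the exact upstream code; names as in
mm/list_lru.c):

	spin_lock_irq(&nlru->lock);

	src = list_lru_from_memcg_idx(nlru, src_idx);
	dst = list_lru_from_memcg_idx(nlru, dst_idx);

	list_splice_init(&src->list, &dst->list);

	/*
	 * If a concurrent list_lru_del() has already driven dst->nr_items
	 * to -1 (the object physically still on src's list), !dst->nr_items
	 * is false here, so the shrinker bit is never set even though dst
	 * ends up with N-1 objects after the splice.
	 */
	set = (!dst->nr_items && src->nr_items);
	dst->nr_items += src->nr_items;
	if (set)
		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
	src->nr_items = 0;

	spin_unlock_irq(&nlru->lock);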
>
> >
> > My proposed fix should resolve it. Alternatively, maybe we can check
> > whether dst->nr_items <= 0 and only then set the bit, but that seems to
> > be an unnecessary optimization.
>
> Yes, I think "src->nr_items != 0" is good enough.
I agree.
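
I.e. the drain path would set the bit whenever the child had objects,
regardless of dst->nr_items. A minimal, untested sketch:

	spin_lock_irq(&nlru->lock);

	src = list_lru_from_memcg_idx(nlru, src_idx);
	dst = list_lru_from_memcg_idx(nlru, dst_idx);

	list_splice_init(&src->list, &dst->list);

	if (src->nr_items) {
		dst->nr_items += src->nr_items;
		/*
		 * Set the bit unconditionally on a non-empty src, so a
		 * transient negative dst->nr_items can't leave the bit
		 * cleared while objects are on the list.
		 */
		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
		src->nr_items = 0;
	}

	spin_unlock_irq(&nlru->lock);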