Message-ID: <20201130230709.GA1375014@carbon.DHCP.thefacebook.com>
Date: Mon, 30 Nov 2020 15:07:09 -0800
From: Roman Gushchin <guro@...com>
To: Yang Shi <shy828301@...il.com>
CC: Vladimir Davydov <vdavydov.dev@...il.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: list_lru: hold nlru lock to avoid reading transient
negative nr_items
On Mon, Nov 30, 2020 at 02:54:02PM -0800, Yang Shi wrote:
> On Mon, Nov 30, 2020 at 2:33 PM Roman Gushchin <guro@...com> wrote:
> >
> > On Mon, Nov 30, 2020 at 12:57:47PM -0800, Yang Shi wrote:
> > > On Mon, Nov 30, 2020 at 12:09 PM Roman Gushchin <guro@...com> wrote:
> > > >
> > > > On Mon, Nov 30, 2020 at 10:45:14AM -0800, Yang Shi wrote:
> > > > > When investigating a slab cache bloat problem, a significant amount
> > > > > of negative dentry cache was seen, but confusingly it was neither
> > > > > shrunk by the reclaimer (the host was under very tight memory
> > > > > pressure) nor by dropping caches. The vmcore shows there are over 14M
> > > > > negative dentry objects on the lru, but tracing shows they were not
> > > > > scanned at all. Further investigation shows the memcg's vfs
> > > > > shrinker_map bit is not set, so the reclaimer and cache dropping
> > > > > simply skip calling the vfs shrinker. So we had to reboot the hosts
> > > > > to get the memory back.
> > > > >
> > > > > I didn't manage to come up with a reproducer in a test environment,
> > > > > and the problem can't be reproduced after rebooting. But code
> > > > > inspection suggests there is a race between clearing the shrinker map
> > > > > bit and reparenting. The hypothesis is elaborated below.
> > > > >
> > > > > The memcg hierarchy on our production environment looks like:
> > > > >
> > > > >            root
> > > > >           /    \
> > > > >      system    user
> > > > >
> > > > > The main workloads run under the user slice's children, and they
> > > > > create and remove memcgs frequently. So reparenting happens very
> > > > > often under the user slice, but no task runs under the user slice
> > > > > directly.
> > > > >
> > > > > So with the frequent reparenting and tight memory pressure, the
> > > > > hypothetical race condition below may happen:
> > > > >
> > > > > CPU A                  CPU B                  CPU C
> > > > > reparent
> > > > >   dst->nr_items == 0
> > > > >                        shrinker:
> > > > >                          total_objects == 0
> > > > >   add src->nr_items to dst
> > > > >   set_bit
> > > > >                          return SHRINK_EMPTY
> > > > >                          clear_bit
> > > > >                                               list_lru_del()
> > > > > reparent again
> > > > >   dst->nr_items may go negative
> > > > >   due to the concurrent
> > > > >   list_lru_del() on CPU C
> > > > >                        the second run of shrinker:
> > > > >                          read nr_items without any
> > > > >                          synchronization, so it may
> > > > >                          see the intermediate negative
> > > > >                          nr_items, then total_objects
> > > > >                          may coincidentally return 0
> > > > >
> > > > >                          keep the bit cleared
> > > > >   dst->nr_items != 0
> > > > >   skip set_bit
> > > > >   add src->nr_items to dst
> > > > >
> > > > > After this point dst->nr_items may never reach zero again, so
> > > > > reparenting will never set the shrinker_map bit anymore. And since no
> > > > > task runs under the user slice directly, no new object will be added
> > > > > to its lru to set the shrinker map bit either. The bit stays cleared
> > > > > forever.
> > > > >
> > > > > How does list_lru_del() race with reparenting? Reparenting replaces
> > > > > the child's kmemcg_id with the parent's without holding nlru->lock,
> > > > > so list_lru_del() may see the parent's kmemcg_id while actually
> > > > > deleting items from the child's lru, and thus decrement the parent's
> > > > > nr_items. The parent's nr_items may therefore go negative, as noted
> > > > > by commit 2788cf0c401c268b4819c5407493a8769b7007aa ("memcg: reparent
> > > > > list_lrus and free kmemcg_id on css offline").
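> > > > >
> > > > > For reference, the two racing paths look roughly like this (a sketch
> > > > > paraphrased from a 5.9-era tree and trimmed to the relevant lines,
> > > > > not the exact upstream source):
> > > > >
> > > > >         /* mm/list_lru.c: reparenting path */
> > > > >         static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
> > > > >                                               int src_idx, struct mem_cgroup *dst_memcg)
> > > > >         {
> > > > >                 struct list_lru_node *nlru = &lru->node[nid];
> > > > >                 struct list_lru_one *src, *dst;
> > > > >                 bool set;
> > > > >
> > > > >                 spin_lock_irq(&nlru->lock);
> > > > >                 src = list_lru_from_memcg_idx(nlru, src_idx);
> > > > >                 dst = list_lru_from_memcg_idx(nlru, dst_memcg->kmemcg_id);
> > > > >
> > > > >                 list_splice_init(&src->list, &dst->list);
> > > > >                 /* the bit is only set when dst was empty before the splice */
> > > > >                 set = (!dst->nr_items && src->nr_items);
> > > > >                 dst->nr_items += src->nr_items;
> > > > >                 if (set)
> > > > >                         memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
> > > > >                 src->nr_items = 0;
> > > > >                 spin_unlock_irq(&nlru->lock);
> > > > >         }
> > > > >
> > > > >         /* mm/list_lru.c: deletion; may resolve the lru via a stale kmemcg_id */
> > > > >         bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > > > >         {
> > > > >                 int nid = page_to_nid(virt_to_page(item));
> > > > >                 struct list_lru_node *nlru = &lru->node[nid];
> > > > >                 struct list_lru_one *l;
> > > > >
> > > > >                 spin_lock(&nlru->lock);
> > > > >                 if (!list_empty(item)) {
> > > > >                         /* can return the parent's lru after reparenting */
> > > > >                         l = list_lru_from_kmem(nlru, item, NULL);
> > > > >                         list_del_init(item);
> > > > >                         l->nr_items--;
> > > > >                         nlru->nr_items--;
> > > > >                         spin_unlock(&nlru->lock);
> > > > >                         return true;
> > > > >                 }
> > > > >                 spin_unlock(&nlru->lock);
> > > > >                 return false;
> > > > >         }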
> >
> > Also note that since the introduction of slab reparenting, list_lru_from_kmem()
> > can return the parent lru.
>
> Do you mean slab charge reparenting or lru reparenting? I think
> list_lru_from_kmem() has been able to return the parent lru since lru
> reparenting was introduced.
objcg reparenting, to be precise. It's actually kinda weird now, because
there are two slightly different reparenting mechanisms. We might want to
merge them in the future.
>
> >
> > > > >
> > > > > Can we move the kmemcg_id replacement to after reparenting? No,
> > > > > because the race with list_lru_del() may then result in a negative
> > > > > src->nr_items that will never be fixed up. The shrinker may then
> > > > > never return SHRINK_EMPTY, keeping the shrinker map bit set forever,
> > > > > and the shrinker would always be called for nothing.
> > > > >
> > > > > Can we synchronize list_lru_del() and reparenting? Yes, it could be
> > > > > done, but we would need to introduce a new lock or use nlru->lock.
> > > > > It sounds complicated to move the kmemcg_id replacement code under
> > > > > nlru->lock, and list_lru_del() is called quite often on some hot
> > > > > paths, e.g. dentry kill, which would be exacerbated.
> > > > >
> > > > > So, given the simplicity, it sounds acceptable to synchronize
> > > > > reading nr_items to avoid seeing a transient negative value;
> > > > > nr_items is typically just read by shrinkers when counting the
> > > > > freeable objects.
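> > > > >
> > > > > The change amounts to something like the sketch below (paraphrased,
> > > > > not the literal diff): take nlru->lock in list_lru_count_one() so the
> > > > > counter is never read in its transient negative state:
> > > > >
> > > > >         unsigned long list_lru_count_one(struct list_lru *lru, int nid,
> > > > >                                          struct mem_cgroup *memcg)
> > > > >         {
> > > > >                 struct list_lru_node *nlru = &lru->node[nid];
> > > > >                 struct list_lru_one *l;
> > > > >                 unsigned long count;
> > > > >
> > > > >                 /* serialize against list_lru_del() and reparenting */
> > > > >                 spin_lock(&nlru->lock);
> > > > >                 l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
> > > > >                 count = l->nr_items;
> > > > >                 spin_unlock(&nlru->lock);
> > > > >
> > > > >                 return count;
> > > > >         }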
> > > > >
> > > > > The patch is tested with some shrinker-intensive workloads; no
> > > > > noticeable regression is spotted.
> > > >
> > > > Hi Yang!
> > > >
> > > > It's really tricky, thank you for digging in! It's a perfect analysis!
> > > >
> > > > I wonder, though, if it's better to just always set the shrinker bit on
> > > > reparenting if we do reparent some items? Then we'd avoid adding new
> > > > synchronization to the hot path. What do you think?
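> > > >
> > > > Something along these lines in memcg_drain_list_lru_node(), just a
> > > > rough sketch of the idea (untested):
> > > >
> > > >         /* set the bit whenever we actually moved some items over */
> > > >         if (src->nr_items) {
> > > >                 dst->nr_items += src->nr_items;
> > > >                 memcg_set_shrinker_bit(dst_memcg, nid,
> > > >                                        lru_shrinker_id(lru));
> > > >                 src->nr_items = 0;
> > > >         }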
> > >
> > > Thanks a lot for the suggestion. I was thinking about the same approach
> > > too, but I thought src->nr_items might go to zero due to a concurrent
> > > list_lru_del() in the first place. But having rethought the whole
> > > thing, it seems impossible for dst->nr_items to go negative and
> > > src->nr_items to go to zero at the same time.
> >
> > Even if it were possible, it seems less scary: the next reparenting
> > would likely set the bit, so we wouldn't get into the permanently bad state.
>
> Unfortunately, no. Once the race happens, reparenting won't set the
> bit anymore, since dst->nr_items won't reach zero because the shrinker
> will not be called.
I mean if we don't check dst->nr_items. Anyway, since it's an impossible
case, there's no need to discuss it :)