Message-ID: <Ye8mi80ObVZvLdS1@google.com>
Date: Mon, 24 Jan 2022 14:22:03 -0800
From: Minchan Kim <minchan@...nel.org>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
John Dias <joaodias@...gle.com>
Subject: Re: [RESEND][PATCH v2] mm: don't call lru draining in the nested
lru_cache_disable
On Mon, Jan 24, 2022 at 10:57:36AM +0100, Michal Hocko wrote:
> On Fri 21-01-22 13:56:31, Minchan Kim wrote:
> > On Fri, Jan 21, 2022 at 10:59:32AM +0100, Michal Hocko wrote:
> > > On Thu 20-01-22 13:07:55, Minchan Kim wrote:
> > > > On Thu, Jan 20, 2022 at 09:24:22AM +0100, Michal Hocko wrote:
> > > > > On Wed 19-01-22 20:25:54, Minchan Kim wrote:
> > > > > > On Wed, Jan 19, 2022 at 10:20:22AM +0100, Michal Hocko wrote:
> > > > > [...]
> > > > > > > What prevents you from calling lru_cache_{disable,enable} this way
> > > > > > > with the existing implementation? AFAICS calls can be nested just fine.
> > > > > > > Or am I missing something?
> > > > > >
> > > > > > It just issues more IPI calls, since we drain the lru cache in
> > > > > > both the upper layer and the lower layer. That's what I'd like
> > > > > > to avoid in this patch: just disable the lru cache once for the
> > > > > > entire allocation path.
> > > > >
> > > > > I do not follow. Once you call lru_cache_disable at the higher level
> > > > > then no new pages are going to be added to the pcp caches. At the same
> > > > > time existing caches are flushed so the inner lru_cache_disable will not
> > > > > trigger any new IPIs.
> > > >
> > > > lru_cache_disable calls __lru_add_drain_all with force_all_cpus
> > > > unconditionally, so it keeps issuing the IPI.
> > >
> > > OK, this is something I have missed. Why can't we drop the force_all
> > > mode for lru_disable_count > 0 when no pcp caches are populated?
> >
> > We can't guarantee that the IPI has finished using only the atomic counter.
> >
> > CPU 0                                  CPU 1
> >
> > lru_cache_disable                      lru_cache_disable
> >   ret = atomic_inc_return
> >
> >                                          ret = atomic_inc_return
> >   lru_add_drain_all(ret == 1);           lru_add_drain_all(ret == 1)
> >   IPI ongoing                            skip IPI
> >                                          alloc_contig_range
> >                                          fail
> >   ..
> >   ..
> >
> >   IPI done
>
> But __lru_add_drain_all uses a local mutex while the IPI flushing is
> done, so the racing lru_cache_disable would block until
> flush_work(&per_cpu(lru_add_drain_work, cpu)) completes and all IPIs are
> handled. Or am I missing something?
CPU 0                                    CPU 1

lru_cache_disable                        lru_cache_disable
  ret = atomic_inc_return; (ret = 1)
                                           ret = atomic_inc_return; (ret = 2)
  lru_add_drain_all(true);
                                           lru_add_drain_all(false)
                                           mutex_lock() is holding
  mutex_lock() is waiting
                                           IPI with !force_all_cpus
                                           ...
                                           ...
                                           IPI done but it skipped some CPUs
  ..
  ..
Thus, lru_cache_disable on CPU 1 doesn't drain every CPU, which introduces
a race on lru_disable_count: cores that didn't run the IPI could still
accept upcoming pages into their per-cpu caches.