Message-ID: <Yn1mJEjP3LH8rl3t@google.com>
Date: Thu, 12 May 2022 12:55:16 -0700
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>,
John Dias <joaodias@...gle.com>,
Tim Murray <timmurray@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Martin Liu <liumartin@...gle.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v4] mm: don't be stuck to rmap lock on reclaim path
On Wed, May 11, 2022 at 07:05:23PM -0700, Andrew Morton wrote:
> On Wed, 11 May 2022 15:57:09 -0700 Minchan Kim <minchan@...nel.org> wrote:
>
> > >
> > > Could we burn much CPU time pointlessly churning though the LRU? Could
> > > it mess up aging decisions enough to be performance-affecting in any
> > > workload?
> >
> > Yes, correct. However, we already churn the LRUs in several ways.
> > For example, pages are isolated from and put back onto the LRU list
> > for page migration from several sources (compaction being the typical
> > one), and in shrink_page_list a failed trylock_page or an sc->gfp_mask
> > that does not allow the page to be reclaimed sends it back as well.
>
> Well. "we're already doing a risky thing so it's OK to do more of that
> thing"?
I meant the aging is not rocket science.
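
For reference, the page-granularity churn I mentioned looks roughly like
this in shrink_page_list() (a heavily simplified sketch, not the exact
upstream code; references, writeback and most other bailouts are omitted):

	LIST_HEAD(ret_pages);

	while (!list_empty(page_list)) {
		struct page *page = lru_to_page(page_list);

		list_del(&page->lru);

		if (!trylock_page(page))
			goto keep;		/* lock contended: just rotate it */

		if (PageDirty(page) && !(sc->gfp_mask & __GFP_FS))
			goto keep_locked;	/* pageout would need FS: rotate */

		/* ... the actual reclaim work happens here ... */
		continue;
keep_locked:
		unlock_page(page);
keep:
		list_add(&page->lru, &ret_pages);	/* goes back onto the LRU */
	}
	list_splice(&ret_pages, page_list);
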
>
> > >
> > > Something else?
> >
> > One thing I am worried about is the granularity of the churning.
> > The examples above churn at page granularity, so they might be
> > excusable, but this one churns a whole address space, especially for
> > the file LRU (i_mmap_rwsem), which might cause too much rotating and
> > end in a livelock (pages keep rotating on a small LRU under heavy
> > memory pressure).
> >
> > If that turns out to be a problem, maybe we could use sc->priority to
> > stop the skipping once memory pressure reaches a certain level.
> >
> > Any thought? Do we really need it?
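
If it turns out we do, the gate could be something as simple as the sketch
below ("may_skip" is a made-up placeholder for whatever mechanism the patch
uses to avoid blocking on the rmap lock, and the threshold is arbitrary):

	/*
	 * sc->priority starts at DEF_PRIORITY and counts down as memory
	 * pressure grows, so only allow the skip during the early,
	 * low-pressure rounds and fall back to the blocking behaviour
	 * once reclaim is really struggling.
	 */
	bool may_skip = sc->priority > DEF_PRIORITY - 4;
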
>
> Are we able to think of a test which might demonstrate any worst case?
> Whip that up and see what the numbers say?
Yeah, let me create a worst-case test and see how it goes:
one thread keeps reading a file-backed vma of a file twice the size of RAM
while other threads keep changing other vmas mapped to the same file, so
the aging path sees heavy i_mmap_rwsem contention.
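
Something along these lines (untested sketch; the file path, file size and
thread count are placeholders to be tuned to the machine, with the file
created up front at roughly twice the size of RAM):

	#include <fcntl.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define FILE_PATH	"/mnt/testfile"		/* hypothetical, pre-created */
	#define FILE_SIZE	(32UL << 30)		/* placeholder: ~2x RAM */
	#define MAP_CHUNK	(128UL << 20)
	#define NR_MAPPERS	8

	static int fd;

	/* Reader: keep touching every page of one huge file-backed vma. */
	static void *reader(void *arg)
	{
		char *p = mmap(NULL, FILE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
		volatile char sum = 0;
		unsigned long off;

		(void)arg;
		if (p == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		for (;;)
			for (off = 0; off < FILE_SIZE; off += 4096)
				sum += p[off];
		return NULL;
	}

	/* Mappers: keep adding/removing other vmas of the same file. */
	static void *mapper(void *arg)
	{
		(void)arg;
		for (;;) {
			char *p = mmap(NULL, MAP_CHUNK, PROT_READ,
				       MAP_SHARED, fd, 0);

			if (p == MAP_FAILED)
				continue;
			*(volatile char *)p;	/* fault in one page */
			munmap(p, MAP_CHUNK);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t[NR_MAPPERS + 1];
		int i;

		fd = open(FILE_PATH, O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		pthread_create(&t[0], NULL, reader, NULL);
		for (i = 0; i < NR_MAPPERS; i++)
			pthread_create(&t[i + 1], NULL, mapper, NULL);
		pthread_join(t[0], NULL);
		return 0;
	}

The mmap()/munmap() calls in the mapper threads take i_mmap_rwsem for write
on the file's mapping, so the rmap walk over that file in the aging path
should see constant contention.
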
>
> It's a bit of a drag, but if we don't do it, our users surely will ;)