Message-ID: <20190827083215.lrgaonueazq7etl5@box>
Date: Tue, 27 Aug 2019 11:32:15 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Michal Hocko <mhocko@...nel.org>
Cc: Yang Shi <yang.shi@...ux.alibaba.com>,
kirill.shutemov@...ux.intel.com, hannes@...xchg.org,
vbabka@...e.cz, rientjes@...gle.com, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH -mm] mm: account deferred split THPs into MemAvailable
On Tue, Aug 27, 2019 at 07:59:41AM +0200, Michal Hocko wrote:
> > > > > IIUC deferred splitting is mostly a workaround for nasty locking issues
> > > > > during splitting, right? This is not really an optimization to cache
> > > > > THPs for reuse or something like that. What is the reason this is not
> > > > > done from a worker context? At least THPs which would be freed
> > > > > completely sound like a good candidate for kworker tear down, no?
> > > > Yes, deferred split THP was introduced to avoid locking issues according to
> > > > the document. Memcg awareness would help to trigger the shrinker more often.
> > > >
> > > > I think it could be done in a worker context, but when to trigger the
> > > > worker is a subtle problem.
> > > Why? What is the problem with triggering it after unmapping a batch
> > > worth of THPs?
> >
> > This leads to another question: how many THPs are "a batch worth"?
>
> Some arbitrary reasonable number. A few dozen THPs waiting for a split
> are no big deal. Going into GBs, as you pointed out above, is definitely
> a problem.
This will not work if these GBs worth of THPs are pinned (like with
RDMA).
We could kick the deferred split every N calls of deferred_split_huge_page()
if more than M pages are queued, or something like that.
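
Something like this, perhaps (purely illustrative: the thresholds, queue_len,
maybe_kick_deferred_split() and the worker below are made up, not existing
symbols; the real queue bookkeeping lives in mm/huge_memory.c):

#include <linux/atomic.h>
#include <linux/workqueue.h>

/* Hypothetical thresholds: check every "N" calls, kick when "M" pages queued. */
#define DEFERRED_SPLIT_KICK_CALLS	64
#define DEFERRED_SPLIT_KICK_PAGES	128

static atomic_t deferred_split_calls;

static void deferred_split_work_fn(struct work_struct *work)
{
	/*
	 * Walk the deferred split queue and try split_huge_page() on each
	 * entry, much like the existing shrinker scan path does.
	 */
}
static DECLARE_WORK(deferred_split_work, deferred_split_work_fn);

/* Hypothetical helper, called at the end of deferred_split_huge_page(). */
static void maybe_kick_deferred_split(unsigned long queue_len)
{
	/* Only look at the queue every N calls to keep the common path cheap. */
	if (atomic_inc_return(&deferred_split_calls) % DEFERRED_SPLIT_KICK_CALLS)
		return;

	/* Enough pages have piled up: let a kworker try to split them. */
	if (queue_len >= DEFERRED_SPLIT_KICK_PAGES)
		schedule_work(&deferred_split_work);
}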
Do we want to kick it again after some time if a split from the deferred
queue has failed?
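
If we do, a self-re-arming delayed work item would be the obvious shape
(again illustrative; drain_deferred_split_queue() is a made-up helper that
would return true once the queue is empty):

static void deferred_split_retry_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(deferred_split_retry, deferred_split_retry_fn);

static void deferred_split_retry_fn(struct work_struct *work)
{
	/*
	 * Re-scan the queue; if some pages still cannot be split (still
	 * pinned, for instance), back off and try again later.
	 */
	if (!drain_deferred_split_queue())
		schedule_delayed_work(&deferred_split_retry, 10 * HZ);
}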
Checking whether a page is splittable is not exactly free, so everything
has trade-offs.
--
Kirill A. Shutemov