Message-ID: <YWQDqtnA5FXk7xan@dhcp22.suse.cz>
Date: Mon, 11 Oct 2021 11:28:10 +0200
From: Michal Hocko <mhocko@...e.com>
To: David Hildenbrand <david@...hat.com>
Cc: ultrachin@....com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, brookxu.cn@...il.com,
chen xiaoguang <xiaoggchen@...cent.com>,
zeng jingxiang <linuszeng@...cent.com>,
lu yihui <yihuilu@...cent.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>
Subject: Re: [PATCH] mm: Free per cpu pages async to shorten program exit time
On Fri 08-10-21 10:17:50, David Hildenbrand wrote:
> On 08.10.21 08:39, ultrachin@....com wrote:
> > From: chen xiaoguang <xiaoggchen@...cent.com>
> >
> > The exit time is long when a program has allocated a lot of memory,
> > and the most time-consuming part is freeing that memory, which takes
> > 99.9% of the total exit time. By freeing asynchronously we can save
> > 25% of the exit time.
> >
> > Signed-off-by: chen xiaoguang <xiaoggchen@...cent.com>
> > Signed-off-by: zeng jingxiang <linuszeng@...cent.com>
> > Signed-off-by: lu yihui <yihuilu@...cent.com>
>
> I recently discussed with Claudio whether it would be possible to tear
> down the process MM in a deferred way, because for some use cases
> (secure/encrypted virtualization, very large mmaps) tearing down the
> page tables is already the much more expensive operation.
>
> There is mmdrop_async(), and I wondered if one could reuse that concept when
> tearing down a process -- I didn't look into feasibility, however, so it's
> just some very rough idea.
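
For reference, mmdrop_async() is just a small workqueue deferral in
kernel/fork.c; from memory it looks roughly like this (a sketch, so
treat the details with care):

	/*
	 * Roughly what kernel/fork.c does today: the final mmdrop is
	 * pushed into a workqueue so the last reference can be dropped
	 * from contexts that must not block on the teardown.
	 */
	static void mmdrop_async_fn(struct work_struct *work)
	{
		struct mm_struct *mm;

		mm = container_of(work, struct mm_struct, async_put_work);
		__mmdrop(mm);
	}

	static void mmdrop_async(struct mm_struct *mm)
	{
		if (unlikely(atomic_dec_and_test(&mm->mm_count))) {
			INIT_WORK(&mm->async_put_work, mmdrop_async_fn);
			schedule_work(&mm->async_put_work);
		}
	}

Note that this only defers the final mm_count release of the mm_struct,
not exit_mmap() where the bulk of the exit time is spent, so the concept
would have to be extended considerably.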
This is not a new problem. A large process tear down can take ages. The
primary road block has been accounting: all of this work has to be
accounted to the proper domain (e.g. the cpu cgroup). A deferred and
properly accounted execution context is still lacking AFAIK. I have a
vague recollection that we have the padata framework, but I am not sure
anybody has explored using it for the address space shutdown. IIRC
Daniel Jordan was active in that area.
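
E.g. something along these lines could hand the unmapping work to
padata's multithreaded helper (a completely untested sketch;
exit_mmap_chunk() is made up here, and none of this addresses the
accounting problem above):

	#include <linux/padata.h>

	/*
	 * Hypothetical helper: tear down the mappings covering
	 * [start, end) of the mm passed in via arg.
	 */
	static void exit_mmap_chunk(unsigned long start, unsigned long end,
				    void *arg)
	{
		/* ... unmap and free pages in [start, end) ... */
	}

	static void exit_mmap_mt(struct mm_struct *mm)
	{
		/*
		 * padata_do_multithreaded() splits [start, start + size)
		 * into chunks of at least min_chunk and runs thread_fn on
		 * each, using up to max_threads helper threads.
		 */
		struct padata_mt_job job = {
			.thread_fn	= exit_mmap_chunk,
			.fn_arg		= mm,
			.start		= 0,
			.size		= TASK_SIZE,
			.align		= PMD_SIZE,
			.min_chunk	= PMD_SIZE,
			.max_threads	= 4,	/* arbitrary for this sketch */
		};

		padata_do_multithreaded(&job);
	}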
--
Michal Hocko
SUSE Labs