Message-ID: <CAJuCfpGC1Kv2rC7oq-TT2dX1soy5J_R+y6DU8xEzVuJgOqHKAw@mail.gmail.com>
Date: Wed, 18 Nov 2020 11:22:21 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>, Rik van Riel <riel@...riel.com>,
Christian Brauner <christian@...uner.io>,
Oleg Nesterov <oleg@...hat.com>,
Tim Murray <timmurray@...gle.com>, linux-api@...r.kernel.org,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...roid.com>,
Minchan Kim <minchan@...nel.org>
Subject: Re: [PATCH 1/1] RFC: add pidfd_send_signal flag to reclaim mm while
killing a process

On Wed, Nov 18, 2020 at 11:10 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Fri 13-11-20 18:16:32, Andrew Morton wrote:
> [...]
> > It's all sounding a bit painful (but not *too* painful). But to
> > reiterate, I do think that adding the ability for a process to shoot
> > down a large amount of another process's memory is a lot more generally
> > useful than tying it to SIGKILL, agree?
>
> I am not sure TBH. Is there any reasonable usecase where an
> uncoordinated memory teardown is OK while the target process is still
> able to see the unmapped memory?

I think uncoordinated memory teardown is a special case which makes
sense only when the target process is being killed (and we can enforce
that by allowing MADV_DONTNEED to be used only if the target process
has a pending SIGKILL). However, the ability to apply other flavors of
process_madvise() to large memory areas spanning multiple VMAs can be
useful in more cases. For example, in Android we will use
process_madvise(MADV_PAGEOUT) to "shrink" an inactive background
process. Today we have to read /proc/[pid]/maps and construct the
vector of VMAs even when applying this advice to the entire process.
With such a special mode we could achieve this more efficiently and
with less hassle.
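
For reference, the userspace flow we use today looks roughly like the
sketch below: parse /proc/<pid>/maps, build one iovec per VMA, then
issue a single process_madvise(MADV_PAGEOUT) over the whole vector.
It is untested and only illustrative; the raw syscall numbers are used
because there is no libc wrapper yet, and the MAX_VMAS cap plus the
minimal error handling are just placeholders for this example.

/* Sketch: page out an entire target process by advising every VMA. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open		434
#endif
#ifndef __NR_process_madvise
#define __NR_process_madvise	440
#endif
#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT		21
#endif

#define MAX_VMAS 1024	/* arbitrary cap, enough for this sketch */

int main(int argc, char **argv)
{
	struct iovec vec[MAX_VMAS];
	size_t nr_vmas = 0;
	char path[64], line[256];
	FILE *maps;
	pid_t pid;
	int pidfd;
	long ret;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	pid = (pid_t)atoi(argv[1]);

	/* process_madvise() identifies the target via a pidfd. */
	pidfd = (int)syscall(__NR_pidfd_open, pid, 0);
	if (pidfd < 0) {
		perror("pidfd_open");
		return 1;
	}

	/* Walk /proc/<pid>/maps and record one iovec per VMA. */
	snprintf(path, sizeof(path), "/proc/%d/maps", pid);
	maps = fopen(path, "r");
	if (!maps) {
		perror("fopen");
		return 1;
	}
	while (nr_vmas < MAX_VMAS && fgets(line, sizeof(line), maps)) {
		unsigned long start, end;

		if (sscanf(line, "%lx-%lx", &start, &end) != 2)
			continue;
		vec[nr_vmas].iov_base = (void *)start;
		vec[nr_vmas].iov_len = end - start;
		nr_vmas++;
	}
	fclose(maps);

	/* Apply MADV_PAGEOUT to every recorded VMA in one call. */
	ret = syscall(__NR_process_madvise, pidfd, vec, nr_vmas,
		      MADV_PAGEOUT, 0);
	if (ret < 0)
		perror("process_madvise");
	else
		printf("advised %ld bytes\n", ret);

	close(pidfd);
	return 0;
}

With a "whole process" mode we could skip the maps parsing and the
iovec construction entirely and let the kernel iterate the VMAs.
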
> --
> Michal Hocko
> SUSE Labs