Message-ID: <CAKOZuessAYS9Vq8GKf2ykx7T-JhRBmUOtFfs_08OAE3FvP0BWQ@mail.gmail.com>
Date:   Fri, 12 Apr 2019 07:20:21 -0700
From:   Daniel Colascione <dancol@...gle.com>
To:     Suren Baghdasaryan <surenb@...gle.com>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        yuzhoujian@...ichuxing.com,
        Souptick Joarder <jrdr.linux@...il.com>,
        Roman Gushchin <guro@...com>,
        Johannes Weiner <hannes@...xchg.org>,
        Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Christian Brauner <christian@...uner.io>,
        Minchan Kim <minchan@...nel.org>,
        Tim Murray <timmurray@...gle.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Jann Horn <jannh@...gle.com>, linux-mm <linux-mm@...ck.org>,
        lsf-pc@...ts.linux-foundation.org,
        LKML <linux-kernel@...r.kernel.org>,
        kernel-team <kernel-team@...roid.com>
Subject: Re: [RFC 2/2] signal: extend pidfd_send_signal() to allow expedited
 process killing

On Fri, Apr 12, 2019 at 7:15 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Thu, Apr 11, 2019 at 11:49 PM Michal Hocko <mhocko@...nel.org> wrote:
> >
> > On Thu 11-04-19 10:47:50, Daniel Colascione wrote:
> > > On Thu, Apr 11, 2019 at 10:36 AM Matthew Wilcox <willy@...radead.org> wrote:
> > > >
> > > > On Thu, Apr 11, 2019 at 10:33:32AM -0700, Daniel Colascione wrote:
> > > > > On Thu, Apr 11, 2019 at 10:09 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > > > > > On Thu, Apr 11, 2019 at 8:33 AM Matthew Wilcox <willy@...radead.org> wrote:
> > > > > > >
> > > > > > > On Wed, Apr 10, 2019 at 06:43:53PM -0700, Suren Baghdasaryan wrote:
> > > > > > > > Add a new SS_EXPEDITE flag to be used when sending SIGKILL via the
> > > > > > > > pidfd_send_signal() syscall to allow expedited memory reclaim of the
> > > > > > > > victim process. The use of this flag is currently limited to the
> > > > > > > > SIGKILL signal and only to privileged users.
> > > > > > >
> > > > > > > What is the downside of doing expedited memory reclaim?  ie why not do it
> > > > > > > every time a process is going to die?
> > > > > >
> > > > > > I think with an implementation that does not use/abuse the oom-reaper
> > > > > > thread this could be done for any kill. As I mentioned, the oom-reaper
> > > > > > is a limited resource which has access to memory reserves and should
> > > > > > not be abused in the way I do in this reference implementation.
> > > > > > While there might be downsides that I don't know of, I'm not sure it's
> > > > > > necessary to hurry every kill's memory reclaim. I think there are cases
> > > > > > when resource deallocation is critical, for example when we kill to
> > > > > > relieve a resource shortage, and there are kills for which reclaim
> > > > > > speed is not essential. It would be great if we could identify urgent
> > > > > > cases without userspace hints, so I'm open to suggestions that do not
> > > > > > involve additional flags.
> > > > >
> > > > > I was imagining a PI-ish approach where we'd reap in case an RT
> > > > > process was waiting on the death of some other process. I'd still
> > > > > prefer the API I proposed in the other message because it gets the
> > > > > kernel out of the business of deciding what the right signal is. I'm a
> > > > > huge believer in "mechanism, not policy".
> > > >
> > > > It's not a question of the kernel deciding what the right signal is.
> > > > The kernel knows whether a signal is fatal to a particular process or not.
> > > > The question is whether the killing process should do the work of reaping
> > > > the dying process's resources sometimes, always or never.  Currently,
> > > > that is never (the process reaps its own resources); Suren is suggesting
> > > > sometimes, and I'm asking "Why not always?"
> > >
> > > FWIW, Suren's initial proposal is that the oom_reaper kthread do the
> > > reaping, not the process sending the kill. Are you suggesting that
> > > sending SIGKILL should spend a while in signal delivery reaping pages
> > > before returning? I thought about just doing it this way, but I didn't
> > > like the idea: it'd slow down mass-killing programs like killall(1).
> > > Programs expect sending SIGKILL to be a fast operation that returns
> > > immediately.
> >
> > I was thinking about this as well. And SYNC_SIGKILL would work around the

SYNC_SIGKILL (which, I presume, blocks in kill(2)) was proposed on
many occasions while we discussed pidfd waits over the past six months
or so. We've decided to just make pidfds pollable instead. The kernel
already has several ways to express the idea that a task should wait
for another task to die, and I don't think we need another. If you
want a process that's waiting for a task to exit to help reap that
task, great --- that's an option we've talked about --- but we don't
need a new interface to do it, since the kernel already has all the
information it needs.
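
To illustrate, here's a rough sketch of the flow I mean (assuming a
kernel where pidfd_open(2) is available and a pidfd polls as readable
once the task has exited; the syscall numbers below are the x86_64
ones and are illustrative only):

/*
 * Sketch: kill a task through its pidfd, then wait for it to die by
 * polling the same fd.  Assumes pidfd_open(2) and pollable pidfds.
 */
#include <poll.h>
#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434             /* x86_64; illustrative */
#endif
#ifndef __NR_pidfd_send_signal
#define __NR_pidfd_send_signal 424      /* x86_64; illustrative */
#endif

static int kill_and_wait(pid_t pid)
{
        int pidfd = syscall(__NR_pidfd_open, pid, 0);
        struct pollfd pfd;

        if (pidfd < 0)
                return -1;

        /* Deliver SIGKILL through the pidfd, as today. */
        if (syscall(__NR_pidfd_send_signal, pidfd, SIGKILL, NULL, 0) < 0) {
                close(pidfd);
                return -1;
        }

        /* The pidfd becomes readable once the task has exited. */
        pfd.fd = pidfd;
        pfd.events = POLLIN;
        poll(&pfd, 1, -1);

        close(pidfd);
        return 0;
}

The waiting-and-helping case is then just "whoever is doing that poll
also calls whatever reap primitive we end up with"; no new wait
interface is needed.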

> > current expectations of how quick the current implementation is. The
> > harder part would be what the actual semantics are. Does the kill wait
> > until the target task is TASK_DEAD, or is there an intermediate step at
> > which we could call it a day and still have reasonable semantics
> > (e.g. the original pid is really not alive anymore)?
>
> I think Daniel's proposal was trying to address that. With an input of
> how many pages the user wants reclaimed asynchronously and a return
> value of how many were actually reclaimed, it contains both the
> condition for when to stop and a report of how much we accomplished.
> Since it returns the number of pages reclaimed, I assume the call does
> not return until it has reaped enough pages.

Right. I want to punt as much "policy" as possible to userspace. Just
using a user thread to do the reaping not only solves the policy
problem (since it's userspace that controls priority, affinity,
retries, and so on), but also simplifies the implementation on the
kernel side. I can imagine situations where, depending on device
energy state, or even charger or screen state, we might want to reap
more or less aggressively, or not at all. I wouldn't want to burden
the kernel with having to get that right when userspace could make the
decision.
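
To make the shape of that concrete, here's a sketch of the userspace
side. The process_reap_mem() call below is entirely hypothetical, a
stand-in for whatever primitive we end up with (a syscall, an ioctl on
the pidfd, or something else); the point is only that the loop lives
in userspace, where priority, affinity, and retry policy already
belong:

/*
 * Hypothetical userspace reaping helper.  process_reap_mem() is a
 * made-up primitive that tries to reclaim up to nr_pages from the
 * dying task behind pidfd and returns how many pages it got back;
 * nothing here is an existing kernel interface.
 */
#include <pthread.h>

extern long process_reap_mem(int pidfd, unsigned long nr_pages);

struct reap_job {
        int pidfd;              /* pidfd of the task we just killed */
        unsigned long want;     /* pages we'd like reclaimed */
};

static void *reaper_thread(void *arg)
{
        struct reap_job *job = arg;
        unsigned long got = 0;

        /*
         * Userspace decides how hard to push: it can renice this
         * thread, pin it to a little core, or bail out early when the
         * screen turns off or the charger is plugged in.
         */
        while (got < job->want) {
                long n = process_reap_mem(job->pidfd, job->want - got);

                if (n <= 0)
                        break;  /* nothing left, or the task is fully dead */
                got += n;
        }
        return NULL;
}

static int start_reaper(struct reap_job *job)
{
        pthread_t tid;

        return pthread_create(&tid, NULL, reaper_thread, job);
}

Whoever is doing the killing can then start one of these threads only
for the kills where reclaim speed actually matters, and skip it for
the rest.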
