Message-Id: <20180314152203.c06fce436d221d34d3e4cf4a@linux-foundation.org>
Date:   Wed, 14 Mar 2018 15:22:03 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     Kirill Tkhai <ktkhai@...tuozzo.com>, cl@...ux.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] percpu: Allow to kill tasks doing pcpu_alloc() and
 waiting for pcpu_balance_workfn()

On Wed, 14 Mar 2018 15:09:09 -0700 Tejun Heo <tj@...nel.org> wrote:

> Hello, Andrew.
> 
> On Wed, Mar 14, 2018 at 01:56:31PM -0700, Andrew Morton wrote:
> > It would benefit from a comment explaining why we're doing this (it's
> > for the oom-killer).
> 
> Will add.
> 
> > My memory is weak and our documentation is awful.  What does
> > mutex_lock_killable() actually do and how does it differ from
> > mutex_lock_interruptible()?  Userspace tasks can run pcpu_alloc() and I
> 
> IIRC, killable listens only to SIGKILL.
> 
> > wonder if there's any way in which a userspace-delivered signal can
> > disrupt another userspace task's memory allocation attempt?
> 
> Hmm... maybe.  Just honoring SIGKILL *should* be fine, but the alloc
> failure paths might be broken, so there are some risks.  Given that
> the cases where userspace tasks end up allocating percpu memory are
> pretty limited and/or privileged (like mount, bpf), I don't think the
> risks are high, though.
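
For reference, a minimal sketch of the distinction being discussed, assuming
the pcpu_alloc_mutex from mm/percpu.c (re-declared here so the fragment is
self-contained); the helper name and error handling are illustrative, not the
actual patch:

#include <linux/mutex.h>
#include <linux/errno.h>

static DEFINE_MUTEX(pcpu_alloc_mutex);	/* stand-in for the one in mm/percpu.c */

/*
 * mutex_lock_interruptible() backs off if *any* signal is pending;
 * mutex_lock_killable() backs off only for fatal signals (SIGKILL),
 * which is the behaviour the oom-killer relies on here.
 */
static int pcpu_alloc_lock_sketch(void)
{
	if (mutex_lock_killable(&pcpu_alloc_mutex))
		return -EINTR;	/* task was killed; caller fails the allocation */

	/* ... allocation work under the mutex ... */

	mutex_unlock(&pcpu_alloc_mutex);
	return 0;
}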

hm.  I suppose so.  Maybe.  Are there other ways?  I assume the time is
being spent in pcpu_create_chunk()?  We could drop the mutex while
running that stuff and do the appropriate did-we-race-with-someone
check after retaking it.  Or similar.
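
Something like the following is one way to read that suggestion; a rough
sketch only, with the pcpu_create_chunk()/pcpu_destroy_chunk() calling
conventions simplified and pcpu_have_suitable_chunk() a hypothetical
stand-in for the did-we-race check:

#include <linux/mutex.h>

static struct pcpu_chunk *pcpu_create_chunk_unlocked(void)
{
	struct pcpu_chunk *chunk;

	/* Run the expensive chunk creation without pcpu_alloc_mutex held. */
	mutex_unlock(&pcpu_alloc_mutex);
	chunk = pcpu_create_chunk();
	mutex_lock(&pcpu_alloc_mutex);

	/*
	 * After retaking the lock, check whether another allocator already
	 * produced a chunk that can satisfy the request while we slept.
	 * If so, drop ours and let the caller retry the existing chunks.
	 */
	if (chunk && pcpu_have_suitable_chunk()) {
		pcpu_destroy_chunk(chunk);
		chunk = NULL;
	}
	return chunk;
}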
