Message-ID: <20170612161433.GB19206@htj.duckdns.org>
Date:   Mon, 12 Jun 2017 12:14:33 -0400
From:   Tejun Heo <tj@...nel.org>
To:     Michael Bringmann <mwb@...ux.vnet.ibm.com>
Cc:     Lai Jiangshan <jiangshanlai@...il.com>,
        linux-kernel@...r.kernel.org,
        Nathan Fontenot <nfont@...ux.vnet.ibm.com>
Subject: Re: [PATCH] workqueue: Ensure that cpumask set for pools created
 after boot

Hello,

On Mon, Jun 12, 2017 at 09:47:31AM -0500, Michael Bringmann wrote:
> > I'm not sure because it doesn't make any logical sense and it's not
> > right in terms of correctness.  The above would be able to enable CPUs
> > which are explicitly excluded from a workqueue.  The only fallback
> > which makes sense is falling back to the default pwq.
> 
> What would that look like?  Are you sure that would always be valid?
> In a system that is hot-adding and hot-removing CPUs?

The reason we're ending up with empty masks is that
wq_calc_node_cpumask() assumes the possible node cpumask is always a
superset of the online one (as it should be).  We can trigger a fat
warning there if that isn't the case and just return false from that
function.
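A minimal userspace sketch of that fallback, with plain uint64_t
bitmasks standing in for struct cpumask (the names mirror the kernel's
but this is an illustration of the check being proposed, not the
actual workqueue code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Model of one NUMA node's masks as workqueue sees them. */
struct node_masks {
	uint64_t possible;	/* wq_numa_possible_cpumask[node] */
	uint64_t online;	/* this node's CPUs that are online */
};

/*
 * Compute the effective cpumask for a node.  The online CPUs must be
 * a subset of the recorded possible mask; if they aren't, a CPU was
 * bound to this node after boot.  Warn loudly and return false so the
 * caller can fall back to the default pwq instead of ending up with
 * an empty mask.
 */
static bool calc_node_cpumask(const struct node_masks *nm, uint64_t *maskp)
{
	if (nm->online & ~nm->possible) {
		fprintf(stderr,
			"WARN: online cpus outside possible node mask\n");
		return false;
	}
	*maskp = nm->possible & nm->online;
	return *maskp != 0;
}
```

With possible = 0x0f and online = 0x05 this yields 0x05; with
possible = 0x03 and online = 0x05 it warns and returns false, which is
the "fall back to the default pwq" case.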

> > The only way offlining can lead to this failure is when wq numa
> > possible cpu mask is a proper subset of the matching online mask.  Can
> > you please print out the numa online cpu and wq_numa_possible_cpumask
> > masks and verify that online stays within the possible for each node?
> > If not, the ppc arch init code needs to be updated so that cpu <->
> > node binding is established for all possible cpus on boot.  Note that
> > this isn't a requirement coming solely from wq.  All node affine (thus
> > percpu) allocations depend on that.
> 
> The ppc arch init code already records all nodes used by the CPUs visible in
> the device-tree at boot time into the possible and online node bindings.  The
> problem here occurs when we hot-add new CPUs to the powerpc system -- they may
> require nodes that are mentioned by the VPHN hcall, but which were not used
> at boot time.

We need all possible CPU -> node mappings (including those for CPUs
which aren't online yet) to be established at boot.  This isn't just
a requirement from workqueue: we don't have any synchronization around
the cpu <-> numa mapping in the memory allocation paths either.
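The invariant being asked for can be checked mechanically at boot; a
hypothetical userspace sketch (cpu_to_node_map, NR_CPUS here are
illustrative stand-ins, not the kernel's actual structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS		8
#define NUMA_NO_NODE	(-1)

/*
 * Hypothetical boot-time table: arch init code is expected to fill in
 * cpu -> node for every possible CPU, not just the ones online at
 * boot.  Here CPUs 6 and 7 were never bound to a node.
 */
static int cpu_to_node_map[NR_CPUS] = {
	0, 0, 1, 1, 1, 1, NUMA_NO_NODE, NUMA_NO_NODE,
};

/*
 * Returns true iff every possible CPU already has a node binding,
 * which is what workqueue (and node-affine percpu allocations) rely
 * on after boot.
 */
static bool all_possible_cpus_mapped(int nr_possible)
{
	for (int cpu = 0; cpu < nr_possible; cpu++) {
		if (cpu_to_node_map[cpu] == NUMA_NO_NODE) {
			fprintf(stderr,
				"cpu %d has no node binding at boot\n", cpu);
			return false;
		}
	}
	return true;
}
```

In this model a hot-added CPU that only gets its node via a later
VPHN-style lookup would show up as NUMA_NO_NODE, which is exactly the
situation the code base doesn't expect.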

> I will run a test that dumps these masks later this week to try to provide
> the information that you are interested in.
> 
> Right now we are having a discussion on another thread as to how to properly
> set the possible node mask at boot given that there is no mechanism to hot-add
> nodes to the system.  The latest idea appears to be adding another property
> or two to define the maximum number of nodes that should be added to the
> possible / online node masks to allow for dynamic growth after boot.

I have no idea about the specifics of ppc, but at least the code base
we currently have expects all possible cpus and nodes, and their
mappings, to be established on boot.

Thanks.

-- 
tejun
