Message-ID: <20190917100854.GC1872@dhcp22.suse.cz>
Date:   Tue, 17 Sep 2019 12:08:54 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Yunsheng Lin <linyunsheng@...wei.com>
Cc:     Michael Ellerman <mpe@...erman.id.au>, catalin.marinas@....com,
        will@...nel.org, mingo@...hat.com, bp@...en8.de, rth@...ddle.net,
        ink@...assic.park.msu.ru, mattst88@...il.com,
        benh@...nel.crashing.org, paulus@...ba.org,
        heiko.carstens@...ibm.com, gor@...ux.ibm.com,
        borntraeger@...ibm.com, ysato@...rs.sourceforge.jp,
        dalias@...c.org, davem@...emloft.net, ralf@...ux-mips.org,
        paul.burton@...s.com, jhogan@...nel.org, jiaxun.yang@...goat.com,
        chenhc@...ote.com, akpm@...ux-foundation.org, rppt@...ux.ibm.com,
        anshuman.khandual@....com, tglx@...utronix.de, cai@....pw,
        robin.murphy@....com, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, hpa@...or.com, x86@...nel.org,
        dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
        len.brown@...el.com, axboe@...nel.dk, dledford@...hat.com,
        jeffrey.t.kirsher@...el.com, linux-alpha@...r.kernel.org,
        naveen.n.rao@...ux.vnet.ibm.com, mwb@...ux.vnet.ibm.com,
        linuxppc-dev@...ts.ozlabs.org, linux-s390@...r.kernel.org,
        linux-sh@...r.kernel.org, sparclinux@...r.kernel.org,
        tbogendoerfer@...e.de, linux-mips@...r.kernel.org,
        rafael@...nel.org, gregkh@...uxfoundation.org
Subject: Re: [PATCH v5] numa: make node_to_cpumask_map() NUMA_NO_NODE aware

On Tue 17-09-19 17:53:57, Yunsheng Lin wrote:
> On 2019/9/17 17:36, Michal Hocko wrote:
> > On Tue 17-09-19 14:20:11, Yunsheng Lin wrote:
> >> On 2019/9/17 13:28, Michael Ellerman wrote:
> >>> Yunsheng Lin <linyunsheng@...wei.com> writes:
> > [...]
> >>>> But we cannot really copy the page allocator logic, simply because the
> >>>> page allocator doesn't enforce near-node affinity: it just picks the
> >>>> node up as a preferred node but is then free to fall back to any other
> >>>> NUMA node. That is not the case here; node_to_cpumask_map will restrict
> >>>> to the particular node's CPUs only, which would give really
> >>>> non-deterministic behavior depending on where the code is executed. So
> >>>> in fact we really want to return cpu_online_mask for NUMA_NO_NODE.
> >>>>
> >>>> Some arches were already NUMA_NO_NODE aware, but they return cpu_all_mask,
> >>>> which should be identical to cpu_online_mask when those arches do not
> >>>> support CPU hotplug. This patch also changes them to return cpu_online_mask
> >>>> in order to be consistent, and uses NUMA_NO_NODE instead of "-1".
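For reference, the check being discussed looks roughly like the sketch
below. This is a hedged illustration, not the actual per-arch code:
cpumask_of_node() is real, but the node_to_cpumask_map[] access stands in
for whatever per-node mask table a given arch actually keeps:

	#include <linux/cpumask.h>
	#include <linux/numa.h>

	/*
	 * Sketch only -- the real per-arch implementations differ.
	 * The point is the NUMA_NO_NODE check: with no node hint,
	 * fall back to the online CPUs instead of restricting the
	 * caller to one node's CPUs.
	 */
	static inline const struct cpumask *cpumask_of_node(int node)
	{
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;
		return node_to_cpumask_map[node];	/* arch's per-node table */
	}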
> >>>
> >>> Except some of those arches *do* support CPU hotplug, powerpc and sparc
> >>> at least. So switching from cpu_all_mask to cpu_online_mask is a
> >>> meaningful change.
> >>
> >> Yes, thanks for pointing out.
> >>
> >>>
> >>> That doesn't mean it's wrong, but you need to explain why it's the right
> >>> change.
> >>
> >> How about adding the below to the commit log:
> >> Even if some of the arches do support CPU hotplug, it does not make sense
> >> to include a CPU that has been hot-unplugged in the returned mask.
> >>
> >> Any suggestion?
> > 
> > Again, for the third time, I believe. Make it a separate patch please.
> > There is absolutely no reason to conflate those two things.
> 
> Ok, thanks.
> Will make the cpu_all_mask -> cpu_online_mask change a separate patch.

Thanks. This really needs the arch maintainers to check closely.
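
To make the hotplug point concrete: cpu_all_mask has every bit set, so it
covers CPUs that are currently offline (or not even present), while
cpu_online_mask tracks only the CPUs that are online right now. A hedged
sketch of a typical consumer (pick_cpu_for_node() is made up for
illustration):

	#include <linux/cpumask.h>
	#include <linux/topology.h>

	/*
	 * Illustration only: a driver picking a CPU from the node
	 * mask, e.g. for IRQ affinity.  If the NUMA_NO_NODE fallback
	 * were cpu_all_mask, this could hand back a CPU that has been
	 * hot-unplugged; with cpu_online_mask it cannot.
	 */
	static unsigned int pick_cpu_for_node(int node)
	{
		return cpumask_first(cpumask_of_node(node));
	}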

> Also, do you think it is better to resend this as individual patches for
> each arch, or to have all these changes in a single patch? I am not sure
> which is the common practice for a multi-arch change like this.

It really depends on the arch maintainers. Both approaches have some pros
and cons. A single patch is more compact, and its parts are not going to
get lost in the noise, but it might generate some conflicts with parallel
changes. I suspect the conflict risk is quite low in this code considering
the recent activity. A follow-up arch-specific patch would have to be
routed via Andrew as well.

If Andrew is ok routing it through his tree and none of the arch
maintainers is opposed, then I would go with a single patch.
-- 
Michal Hocko
SUSE Labs
