Date:	Tue, 26 Jul 2016 13:53:42 +0200
From:	"Rafael J. Wysocki" <rjw@...ysocki.net>
To:	Dou Liyang <douly.fnst@...fujitsu.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, cl@...ux.com,
	tj@...nel.org, mika.j.penttila@...il.com, mingo@...hat.com,
	hpa@...or.com, yasu.isimatu@...il.com,
	isimatu.yasuaki@...fujitsu.com, kamezawa.hiroyu@...fujitsu.com,
	izumi.taku@...fujitsu.com, gongzhaogang@...pur.com,
	len.brown@...el.com, lenb@...nel.org, tglx@...utronix.de,
	chen.tang@...ystack.cn, rafael@...nel.org, x86@...nel.org,
	linux-acpi@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent

On Tuesday, July 26, 2016 11:59:38 AM Dou Liyang wrote:
> 
> > On 2016/07/26 07:20, Andrew Morton wrote:
> > On Mon, 25 Jul 2016 16:35:42 +0800 Dou Liyang <douly.fnst@...fujitsu.com> wrote:
> >
> >> [Problem]
> >>
> >> The cpuid <-> nodeid mapping is first established at boot time, and the
> >> workqueue caches that mapping in wq_numa_possible_cpumask in wq_numa_init().
> >>
> >> When a node goes online/offline, the cpuid <-> nodeid mapping is
> >> established/destroyed, which means the mapping changes whenever node
> >> hotplug happens. But the workqueue does not update wq_numa_possible_cpumask.
> >>
> >> So here is the problem:
> >>
> >> Assume we have the following cpuid <-> nodeid mapping in the beginning:
> >>
> >>   Node | CPU
> >> ------------------------
> >> node 0 |  0-14, 60-74
> >> node 1 | 15-29, 75-89
> >> node 2 | 30-44, 90-104
> >> node 3 | 45-59, 105-119
> >>
> >> and then we hot-remove node2 and node3, so it becomes:
> >>
> >>   Node | CPU
> >> ------------------------
> >> node 0 |  0-14, 60-74
> >> node 1 | 15-29, 75-89
> >>
> >> and then we hot-add node4 and node5, so it becomes:
> >>
> >>   Node | CPU
> >> ------------------------
> >> node 0 |  0-14, 60-74
> >> node 1 | 15-29, 75-89
> >> node 4 | 30-59
> >> node 5 | 90-119
> >>
> >> But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and
> >> likewise for the other moved CPUs.
> >>
> >> When a pool workqueue is initialized, if its cpumask belongs to a node,
> >> its pool->node will be set to that node, and the memory used by this
> >> workqueue will also be allocated on that node.
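
To make the failure mode concrete, here is a small userspace toy model of
the stale snapshot described above (an illustration only, not the actual
kernel code; the node layout mirrors the tables in the quoted text):

/*
 * Toy model of the problem: a boot-time snapshot of the cpu -> node
 * mapping is never refreshed, so it disagrees with reality after node
 * hotplug.  Build with e.g. gcc -std=c99.
 */
#include <stdio.h>

#define NR_CPUS 120

static int live_cpu_to_node[NR_CPUS];   /* changes on node hotplug */
static int cached_cpu_to_node[NR_CPUS]; /* snapshot, like wq_numa_init() */

static void boot_time_init(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		live_cpu_to_node[cpu] = (cpu % 60) / 15; /* nodes 0-3, as above */
		cached_cpu_to_node[cpu] = live_cpu_to_node[cpu];
	}
}

static void hot_replace_nodes(void)
{
	/* hot-remove node2/node3, then hot-add node4/node5 on the same cpus */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (live_cpu_to_node[cpu] >= 2)
			live_cpu_to_node[cpu] = (cpu <= 59) ? 4 : 5;
	}
}

int main(void)
{
	boot_time_init();
	hot_replace_nodes();
	/* cpu30 now belongs to node4, but the snapshot still says node2 */
	printf("cpu30: live node %d, cached node %d\n",
	       live_cpu_to_node[30], cached_cpu_to_node[30]);
	return 0;
}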
> >
> > Plan B is to hunt down and fix up all the workqueue structures at
> > hotplug-time.  Has that option been evaluated?
> >
> 
> Yes, the option has been evaluated in this patch:
> http://www.gossamer-threads.com/lists/linux/kernel/2116748
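
In terms of that toy model, the linked "Plan B" amounts to refreshing the
snapshot on every hotplug event; a hedged sketch, not the code from the
linked patch (which does the analogous fixup on wq_numa_possible_cpumask):

/*
 * "Plan B" in the toy model above: re-derive the boot-time snapshot
 * whenever node hotplug changes the live mapping.
 */
static void hotplug_fixup_cache(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		cached_cpu_to_node[cpu] = live_cpu_to_node[cpu];
}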
> 
> >
> > Your fix is x86-only and this bug presumably affects other
> > architectures, yes? I think a "Plan B" would fix all architectures?
> >
> 
> Yes, the bug may affect the few architectures which support both CPU
> hotplug and NUMA.
> 
> We posted the "Plan B" to the community and got a lot of advice and
> ideas. Based on these suggestions, we carefully weighed the two plans
> and chose the first.
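
In the same toy terms, the chosen approach makes the live mapping itself
persistent: establish it once at boot for every possible CPU and never
change it afterwards, so any boot-time snapshot stays valid across node
hotplug. A sketch under stated assumptions; firmware_node_of() is a
hypothetical stand-in for the firmware (ACPI) information the real series
consults:

/*
 * Chosen approach in the toy model: fix the cpu -> node mapping for
 * all possible cpus once at boot, so node hotplug never changes it
 * and snapshots like cached_cpu_to_node[] cannot go stale.
 */
static int firmware_node_of(int cpu)	/* hypothetical helper */
{
	return (cpu % 60) / 15;	/* toy stand-in for the ACPI-derived node */
}

static void persistent_boot_time_init(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		live_cpu_to_node[cpu] = firmware_node_of(cpu);
}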
> 
> >
> > Thirdly, what is the merge path for these patches?  Is an x86
> > or ACPI maintainer working with you on them?
> 
> Yes, we got a lot of guidance and help from RJ, who is an ACPI maintainer.

FWIW, the patches are fine by me from the ACPI perspective.

If you want me to apply them, though, ACKs from the x86 and mm maintainers
will be necessary.

Thanks,
Rafael
