Message-ID: <SJ0PR21MB1872CFBAFEA8152CE3B362BDBFDE9@SJ0PR21MB1872.namprd21.prod.outlook.com>
Date:   Wed, 23 Dec 2020 20:39:53 +0000
From:   Dexuan Cui <decui@...rosoft.com>
To:     'Lai Jiangshan' <jiangshanlai@...il.com>,
        'Dexuan-Linux Cui' <dexuan.linux@...il.com>
CC:     'Linux Kernel Mailing List' <linux-kernel@...r.kernel.org>,
        'Valentin Schneider' <valentin.schneider@....com>,
        'Peter Zijlstra' <peterz@...radead.org>,
        'Qian Cai' <cai@...hat.com>,
        'Vincent Donnefort' <vincent.donnefort@....com>,
        'Lai Jiangshan' <laijs@...ux.alibaba.com>,
        'Hillf Danton' <hdanton@...a.com>, 'Tejun Heo' <tj@...nel.org>
Subject: RE: [PATCH -tip V2 00/10] workqueue: break affinity initiatively

> From: Dexuan Cui
> Sent: Wednesday, December 23, 2020 12:27 PM
> ...
> The warning only repros if there are more than 1 node, and it only prints once
> for the first vCPU of the second node (i.e. node #1).

A correction: if I configure the 32 vCPUs evenly into 4 nodes (8 vCPUs per node), I get the warning
once for each of node #1, #2 and #3.

Thanks,
-- Dexuan

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2376,9 +2376,14 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
                 * For kernel threads that do indeed end up on online &&
                 * !active we want to ensure they are strict per-CPU threads.
                 */
-               WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) &&
+               WARN(cpumask_intersects(new_mask, cpu_online_mask) &&
                        !cpumask_intersects(new_mask, cpu_active_mask) &&
-                       p->nr_cpus_allowed != 1);
+                       p->nr_cpus_allowed != 1, "%*pbl, %*pbl, %*pbl, %d\n",
+                       cpumask_pr_args(new_mask),
+                       cpumask_pr_args(cpu_online_mask),
+                       cpumask_pr_args(cpu_active_mask),
+                       p->nr_cpus_allowed
+                       );
        }
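
The four comma-separated fields the WARN() now prints are new_mask, cpu_online_mask,
cpu_active_mask and p->nr_cpus_allowed, so "8-15, 0-8, 0-7, 8" in the first splat below
means the new kworker is being bound to node #1's CPUs 8-15 while only CPU 8 of them is
online and none of them is active yet.

For reference, here is a small userspace sketch (not kernel code) of that mask arithmetic.
It assumes the even 8-vCPUs-per-node layout above, models "online" as the CPUs brought up
so far and "active" as the CPUs whose hotplug callbacks have already finished, and uses
made-up helpers (cpu_range(), intersects(), a popcount) in place of the kernel's cpumask
API; nr_allowed is modeled after the "8" printed in the log below.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS        32
#define CPUS_PER_NODE   8   /* assumption: 32 vCPUs spread evenly over 4 nodes */

typedef unsigned long long mask_t;  /* one bit per CPU, CPU 0 = bit 0 */

/* Build a mask covering CPUs first..last inclusive. */
static mask_t cpu_range(int first, int last)
{
	mask_t m = 0;

	for (int cpu = first; cpu <= last; cpu++)
		m |= 1ULL << cpu;
	return m;
}

static bool intersects(mask_t a, mask_t b)
{
	return (a & b) != 0;
}

int main(void)
{
	/*
	 * Model CPU bring-up in the order the log shows: CPUs come online
	 * one at a time, and a CPU only becomes "active" after its hotplug
	 * callbacks (workqueue_online_cpu() included) have finished.
	 */
	for (int cpu = 1; cpu < NR_CPUS; cpu++) {
		int node = cpu / CPUS_PER_NODE;

		/* The node's unbound-pool cpumask handed to the new kworker. */
		mask_t new_mask = cpu_range(node * CPUS_PER_NODE,
					    node * CPUS_PER_NODE + CPUS_PER_NODE - 1);
		mask_t online   = cpu_range(0, cpu);     /* this CPU is already online... */
		mask_t active   = cpu_range(0, cpu - 1); /* ...but not yet active */
		int nr_allowed  = __builtin_popcountll(new_mask); /* matches the "8" in the log */

		/* Same condition as the WARN() in the diff above. */
		if (intersects(new_mask, online) &&
		    !intersects(new_mask, active) &&
		    nr_allowed != 1)
			printf("first CPU of node %d (CPU %d) would trigger the WARN\n",
			       node, cpu);
	}
	return 0;
}

Built with gcc, this prints only CPUs 8, 16 and 24, matching the three splats below; in
this model node #0 never trips the condition because CPU 0 is already active by the time
its siblings come online.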

[    1.791611] smp: Bringing up secondary CPUs ...
[    1.795225] x86: Booting SMP configuration:
[    1.798964] .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7
[    1.807068] .... node  #1, CPUs:    #8
[    1.094226] smpboot: CPU 8 Converting physical 0 to logical die 1
[    1.895211] ------------[ cut here ]------------
[    1.899058] 8-15, 0-8, 0-7, 8
[    1.899058] WARNING: CPU: 8 PID: 50 at kernel/sched/core.c:2386 __set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.899058] CPU: 8 PID: 50 Comm: cpuhp/8 Not tainted 5.10.0+ #4
[    1.899058] RIP: 0010:__set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.899058] Call Trace:
[    1.899058]  worker_attach_to_pool+0x53/0xd0
[    1.899058]  create_worker+0xf9/0x190
[    1.899058]  alloc_unbound_pwq+0x3a5/0x3b0
[    1.899058]  wq_update_unbound_numa+0x112/0x1c0
[    1.899058]  workqueue_online_cpu+0x1d0/0x220
[    1.899058]  cpuhp_invoke_callback+0x82/0x4a0
[    1.899058]  cpuhp_thread_fun+0xb8/0x120
[    1.899058]  smpboot_thread_fn+0x198/0x230
[    1.899058]  kthread+0x13d/0x160
[    1.899058]  ret_from_fork+0x22/0x30
[    1.903058]   #9 #10 #11 #12 #13 #14 #15
[    1.907092] .... node  #2, CPUs:   #16
[    1.094226] smpboot: CPU 16 Converting physical 0 to logical die 2
[    1.995205] ------------[ cut here ]------------
[    1.999058] 16-23, 0-16, 0-15, 8
[    1.999058] WARNING: CPU: 16 PID: 91 at kernel/sched/core.c:2386 __set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.999058] CPU: 16 PID: 91 Comm: cpuhp/16 Tainted: G        W         5.10.0+ #4
[    1.999058] RIP: 0010:__set_cpus_allowed_ptr+0x1c7/0x1e0
[    1.999058] Call Trace:
[    1.999058]  worker_attach_to_pool+0x53/0xd0
[    1.999058]  create_worker+0xf9/0x190
[    1.999058]  alloc_unbound_pwq+0x3a5/0x3b0
[    1.999058]  wq_update_unbound_numa+0x112/0x1c0
[    1.999058]  workqueue_online_cpu+0x1d0/0x220
[    1.999058]  cpuhp_invoke_callback+0x82/0x4a0
[    1.999058]  cpuhp_thread_fun+0xb8/0x120
[    1.999058]  smpboot_thread_fn+0x198/0x230
[    1.999058]  kthread+0x13d/0x160
[    1.999058]  ret_from_fork+0x22/0x30
[    2.003058]  #17 #18 #19 #20 #21 #22 #23
[    2.007092] .... node  #3, CPUs:   #24
[    1.094226] smpboot: CPU 24 Converting physical 0 to logical die 3
[    2.095220] ------------[ cut here ]------------
[    2.099058] 24-31, 0-24, 0-23, 8
[    2.099058] WARNING: CPU: 24 PID: 132 at kernel/sched/core.c:2386 __set_cpus_allowed_ptr+0x1c7/0x1e0
[    2.099058] CPU: 24 PID: 132 Comm: cpuhp/24 Tainted: G        W         5.10.0+ #4
[    2.099058] Call Trace:
[    2.099058]  worker_attach_to_pool+0x53/0xd0
[    2.099058]  create_worker+0xf9/0x190
[    2.099058]  alloc_unbound_pwq+0x3a5/0x3b0
[    2.099058]  wq_update_unbound_numa+0x112/0x1c0
[    2.099058]  workqueue_online_cpu+0x1d0/0x220
[    2.099058]  cpuhp_invoke_callback+0x82/0x4a0
[    2.099058]  cpuhp_thread_fun+0xb8/0x120
[    2.099058]  smpboot_thread_fn+0x198/0x230
[    2.099058]  kthread+0x13d/0x160
[    2.099058]  ret_from_fork+0x22/0x30
[    2.103058]  #25 #26 #27 #28 #29 #30 #31
[    2.108091] smp: Brought up 4 nodes, 32 CPUs
[    2.115065] smpboot: Max logical packages: 4
[    2.119067] smpboot: Total of 32 processors activated (146992.31 BogoMIPS)
