Message-ID: <20210120134633.GB11090@xsang-OptiPlex-9020>
Date: Wed, 20 Jan 2021 21:46:33 +0800
From: Oliver Sang <oliver.sang@...el.com>
To: Hillf Danton <hdanton@...a.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Lai Jiangshan <laijs@...ux.alibaba.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...el.com,
lkp@...ts.01.org, zhengjun.xing@...ux.intel.com
Subject: Re: [workqueue] d5bff968ea:
WARNING:at_kernel/workqueue.c:#process_one_work
On Fri, Jan 15, 2021 at 03:24:32PM +0800, Hillf Danton wrote:
> Thu, 14 Jan 2021 15:45:11 +0800
> >
> > FYI, we noticed the following commit (built with gcc-9):
> >
> > commit: d5bff968ea9cc005e632d9369c26cbd8148c93d5 ("workqueue: break affinity initiatively")
> > https://git.kernel.org/cgit/linux/kernel/git/paulmck/linux-rcu.git dev.2021.01.11b
> >
> [...]
> >
> > [ 73.794288] WARNING: CPU: 0 PID: 22 at kernel/workqueue.c:2192 process_one_work
>
> Thanks for your report.
>
> We can also break CPU affinity by checking POOL_DISASSOCIATED at attach
> time, at no extra cost; that way we get the same behavior as at unbind
> time.
>
> What is more, the change that makes kworkers per-CPU is dropped, because
> it helps neither hotplug nor the stop-machine mechanism.
Hi, after applying the patch below, the issue still happens:
[ 4.574467] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 4.575651] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 4.576900] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 4.578648] PCI: CLS 0 bytes, default 64
[ 4.579685] Unpacking initramfs...
[ 8.878031] -----------[ cut here ]-----------
[ 8.879083] WARNING: CPU: 0 PID: 22 at kernel/workqueue.c:2187 process_one_work+0x92/0x9e0
[ 8.880688] Modules linked in:
[ 8.881274] CPU: 0 PID: 22 Comm: kworker/1:0 Not tainted 5.11.0-rc3-gc213503139bb #2
[ 8.882518] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 8.887539] Workqueue: 0x0 (events)
[ 8.887838] EIP: process_one_work+0x92/0x9e0
[ 8.887838] Code: 37 64 a1 58 54 4c 43 39 45 24 74 2c 31 c9 ba 01 00 00 00 c7 04 24 01 00 00 00 b8 08 1d f5 42 e8 74 85 13 00 ff 05 b8 30 04 43 <0f> 0b ba 01 00 00 00 eb 22 8d 74 26 00 90 c7 04 24 01 00 00 00 31
[ 8.887838] EAX: 42f51d08 EBX: 00000000 ECX: 00000000 EDX: 00000001
[ 8.887838] ESI: 43c04720 EDI: 42e45620 EBP: de7f23c0 ESP: 43d7bf08
[ 8.887838] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00010002
[ 8.887838] CR0: 80050033 CR2: 00000000 CR3: 034e3000 CR4: 000406d0
[ 8.887838] Call Trace:
[ 8.887838] ? worker_thread+0x98/0x6a0
[ 8.887838] ? worker_thread+0x2dd/0x6a0
[ 8.887838] ? kthread+0x1ba/0x1e0
[ 8.887838] ? create_worker+0x1e0/0x1e0
[ 8.887838] ? kzalloc+0x20/0x20
[ 8.887838] ? ret_from_fork+0x1c/0x28
[ 8.887838] _warn_unseeded_randomness: 63 callbacks suppressed
[ 8.887838] random: get_random_bytes called from init_oops_id+0x2b/0x60 with crng_init=0
[ 8.887838] --[ end trace ac461b4d54c37cfa ]--
[ 11.287055] Freeing initrd memory: 174228K
[ 11.289225] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
[ 11.290889] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x26d34b60feb, max_idle_ns: 440795225049 ns
[ 11.292884] mce: Machine check injector initialized
[ 11.313019] The force parameter has not been set to 1. The Iris poweroff handler will not be installed.
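For reference, the check that fires at kernel/workqueue.c:2187 here appears
to be the CPU-identity sanity check at the top of process_one_work()
(quoting the v5.11-rc3 source roughly from memory, so the exact line number
is approximate):

    /*
     * Ensure we're on the correct CPU.  DISASSOCIATED test is
     * necessary to avoid spurious warnings from rescuers servicing the
     * unbound or a disassociated pool.
     */
    WARN_ON_ONCE(!(worker->flags & WORKER_UNBOUND) &&
                 !(pool->flags & POOL_DISASSOCIATED) &&
                 raw_smp_processor_id() != pool->cpu);

That matches the trace above: a bound kworker/1:0 (neither WORKER_UNBOUND
nor in a DISASSOCIATED pool) ended up executing a work item on CPU 0.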
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1847,22 +1847,17 @@ static void worker_attach_to_pool(struct
> struct worker_pool *pool)
> {
> mutex_lock(&wq_pool_attach_mutex);
> -
> - /*
> - * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
> - * online CPUs. It'll be re-applied when any of the CPUs come up.
> - */
> - set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
> -
> /*
> * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains
> * stable across this function. See the comments above the flag
> * definition for details.
> */
> - if (pool->flags & POOL_DISASSOCIATED)
> + if (pool->flags & POOL_DISASSOCIATED) {
> worker->flags |= WORKER_UNBOUND;
> - else
> - kthread_set_per_cpu(worker->task, true);
> + set_cpus_allowed_ptr(worker->task, cpu_possible_mask);
> + } else {
> + set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
> + }
>
> list_add_tail(&worker->node, &pool->workers);
> worker->pool = pool;
> @@ -4922,7 +4917,6 @@ static void unbind_workers(int cpu)
> raw_spin_unlock_irq(&pool->lock);
>
> for_each_pool_worker(worker, pool) {
> - kthread_set_per_cpu(worker->task, false);
> WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
> }
>
> @@ -4979,7 +4973,6 @@ static void rebind_workers(struct worker
> for_each_pool_worker(worker, pool) {
> WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> pool->attrs->cpumask) < 0);
> - kthread_set_per_cpu(worker->task, true);
> }
>
> raw_spin_lock_irq(&pool->lock);
> --
[Attachment: dmesg-2.xz (application/x-xz, 39392 bytes)]