Date:   Sun, 13 Mar 2022 20:49:50 +0800
From:   kernel test robot <oliver.sang@...el.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     lkp@...ts.01.org, lkp@...el.com,
        LKML <linux-kernel@...r.kernel.org>
Subject: [EXP rcutorture]  556d8afe4a: BUG:workqueue_lockup-pool


Hi Paul,

We reported this commit as "[EXP rcutorture]  cd7bd64af5: BUG:workqueue_lockup-pool"
last month.

At that time, you asked us to test
25c0b105b7ba ("EXP rcu: Add polled expedited grace-period primitives"),
and we confirmed that the issue was gone on that commit.

However, we have now found that this commit is already in linux-next/master,
but the issue still seems to exist.

We also found that "EXP rcu: Add polled expedited grace-period primitives" is
actually the parent of this commit:

* 556d8afe4a779 EXP rcutorture: Test polled expedited grace-period primitives
* 6227afdc95e49 EXP rcu: Add polled expedited grace-period primitives
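
For reference, this parent relationship can be double-checked in a linux-next
checkout with, for example (a sketch only; the hashes are the ones listed above):

	# show the reported commit and its parent, matching the graph above
	git log --oneline --graph 556d8afe4a779~2..556d8afe4a779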

Since the issue still exists in the latest linux-next/master, we are reporting
it again for your information.


Greetings,

FYI, we noticed the following commit (built with gcc-9):

commit: 556d8afe4a779f41dfc8fa373993a88e43f1c5dc ("EXP rcutorture: Test polled expedited grace-period primitives")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master

in testcase: rcutorture
version: 
with the following parameters:

	runtime: 300s
	test: default
	torture_type: rcu

test-description: rcutorture is a load/unload test of the rcutorture kernel module.
test-url: https://www.kernel.org/doc/Documentation/RCU/torture.txt
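
For reference, the module load/unload cycle this testcase exercises roughly
corresponds to the following (a minimal sketch based on the in-tree rcutorture
module parameters; the exact LKP harness invocation is not shown here and may
differ):

	# load rcutorture with the parameters above, let it run for the
	# configured runtime, then unload it and check the kernel log
	modprobe rcutorture torture_type=rcu
	sleep 300
	rmmod rcutorture
	dmesg | grep -iE 'rcu-torture|workqueue lockup'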


on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):


+--------------------------------------------------+------------+------------+
|                                                  | 6227afdc95 | 556d8afe4a |
+--------------------------------------------------+------------+------------+
| boot_successes                                   | 10         | 4          |
| boot_failures                                    | 0          | 6          |
| BUG:workqueue_lockup-pool                        | 0          | 6          |
| INFO:task_blocked_for_more_than#seconds          | 0          | 6          |
| Kernel_panic-not_syncing:hung_task:blocked_tasks | 0          | 6          |
+--------------------------------------------------+------------+------------+


If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>


[  408.705502][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 338s!
[  408.706448][    C1] Showing busy workqueues and worker pools:
[  408.707057][    C1] workqueue events: flags=0x0
[  408.707553][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  408.707613][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  408.707984][    C1] workqueue events_unbound: flags=0x2
[  408.711681][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  408.711733][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  408.711871][    C1] workqueue events_power_efficient: flags=0x80
[  408.714333][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  408.714384][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  408.716530][    C1] workqueue rcu_gp: flags=0x8
[  408.716915][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  408.716964][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  408.717002][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  408.717069][    C1] workqueue mm_percpu_wq: flags=0x8
[  408.719169][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  408.719220][    C1]     pending: vmstat_update
[  408.719267][    C1] workqueue cgroup_destroy: flags=0x0
[  408.720719][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  408.720775][    C1]     pending: css_killed_work_fn
[  408.720808][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  408.720969][    C1] workqueue ipv6_addrconf: flags=0x40008
[  408.723813][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  408.723858][    C1]     pending: addrconf_verify_work
[  408.723897][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=338s workers=3 idle: 23 36
[  408.723947][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  438.914180][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 368s!
[  438.915000][    C1] Showing busy workqueues and worker pools:
[  438.915495][    C1] workqueue events: flags=0x0
[  438.915869][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  438.915918][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  438.916204][    C1] workqueue events_unbound: flags=0x2
[  438.919292][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  438.919334][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  438.919451][    C1] workqueue events_power_efficient: flags=0x80
[  438.921618][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  438.921676][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  438.921825][    C1] workqueue rcu_gp: flags=0x8
[  438.923936][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  438.923981][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  438.924018][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  438.924077][    C1] workqueue mm_percpu_wq: flags=0x8
[  438.925910][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  438.925947][    C1]     pending: vmstat_update
[  438.925979][    C1] workqueue cgroup_destroy: flags=0x0
[  438.926981][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  438.927014][    C1]     pending: css_killed_work_fn
[  438.927034][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  438.927140][    C1] workqueue ipv6_addrconf: flags=0x40008
[  438.929584][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  438.929631][    C1]     pending: addrconf_verify_work
[  438.929677][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=368s workers=3 idle: 23 36
[  438.929734][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  469.121528][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 399s!
[  469.122375][    C1] Showing busy workqueues and worker pools:
[  469.122940][    C1] workqueue events: flags=0x0
[  469.123375][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  469.123443][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  469.126663][    C1] workqueue events_unbound: flags=0x2
[  469.127113][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  469.127162][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  469.127286][    C1] workqueue events_power_efficient: flags=0x80
[  469.129649][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  469.129693][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  469.129817][    C1] workqueue rcu_gp: flags=0x8
[  469.131968][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  469.132021][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  469.132064][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  469.132132][    C1] workqueue mm_percpu_wq: flags=0x8
[  469.134219][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  469.134275][    C1]     pending: vmstat_update
[  469.134323][    C1] workqueue cgroup_destroy: flags=0x0
[  469.135788][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  469.135843][    C1]     pending: css_killed_work_fn
[  469.135877][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  469.136035][    C1] workqueue ipv6_addrconf: flags=0x40008
[  469.138818][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  469.138876][    C1]     pending: addrconf_verify_work
[  469.138928][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=399s workers=3 idle: 23 36
[  469.138999][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  499.329506][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 429s!
[  499.330360][    C1] Showing busy workqueues and worker pools:
[  499.330933][    C1] workqueue events: flags=0x0
[  499.331287][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  499.331332][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  499.334522][    C1] workqueue events_unbound: flags=0x2
[  499.334966][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  499.334999][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  499.335119][    C1] workqueue events_power_efficient: flags=0x80
[  499.337451][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  499.337512][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  499.339525][    C1] workqueue rcu_gp: flags=0x8
[  499.339915][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  499.339967][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  499.340008][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  499.340073][    C1] workqueue mm_percpu_wq: flags=0x8
[  499.342140][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  499.342197][    C1]     pending: vmstat_update
[  499.342244][    C1] workqueue cgroup_destroy: flags=0x0
[  499.343789][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  499.343843][    C1]     pending: css_killed_work_fn
[  499.343877][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  499.344041][    C1] workqueue ipv6_addrconf: flags=0x40008
[  499.346756][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  499.346813][    C1]     pending: addrconf_verify_work
[  499.346866][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=429s workers=3 idle: 23 36
[  499.346933][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  529.537551][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 459s!
[  529.538415][    C1] Showing busy workqueues and worker pools:
[  529.538982][    C1] workqueue events: flags=0x0
[  529.539387][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  529.539447][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  529.542641][    C1] workqueue events_unbound: flags=0x2
[  529.543117][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=5/512 refcnt=8
[  529.543154][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  529.543254][    C1]     pending: toggle_allocation_gate
[  529.543280][    C1] workqueue events_power_efficient: flags=0x80
[  529.546120][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  529.546177][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  529.546344][    C1] workqueue rcu_gp: flags=0x8
[  529.548606][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  529.548662][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  529.548704][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  529.548767][    C1] workqueue mm_percpu_wq: flags=0x8
[  529.550856][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  529.550912][    C1]     pending: vmstat_update
[  529.550969][    C1] workqueue cgroup_destroy: flags=0x0
[  529.552552][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  529.552608][    C1]     pending: css_killed_work_fn
[  529.552641][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  529.552804][    C1] workqueue ipv6_addrconf: flags=0x40008
[  529.555564][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  529.555620][    C1]     pending: addrconf_verify_work
[  529.555671][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=459s workers=3 idle: 23 36
[  529.555739][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  559.746304][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 489s!
[  559.747235][    C1] Showing busy workqueues and worker pools:
[  559.747797][    C1] workqueue events: flags=0x0
[  559.748212][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  559.748262][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  559.751350][    C1] workqueue events_unbound: flags=0x2
[  559.751813][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  559.751860][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  559.751984][    C1] workqueue events_power_efficient: flags=0x80
[  559.754423][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  559.754481][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  559.756567][    C1] workqueue rcu_gp: flags=0x8
[  559.756967][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  559.757012][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  559.757049][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  559.757108][    C1] workqueue mm_percpu_wq: flags=0x8
[  559.759130][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  559.759182][    C1]     pending: vmstat_update
[  559.759228][    C1] workqueue cgroup_destroy: flags=0x0
[  559.760711][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  559.760768][    C1]     pending: css_killed_work_fn
[  559.760802][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  559.760967][    C1] workqueue ipv6_addrconf: flags=0x40008
[  559.763732][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  559.763789][    C1]     pending: addrconf_verify_work
[  559.763837][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=489s workers=3 idle: 23 36
[  559.763906][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  589.953507][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 519s!
[  589.954311][    C1] Showing busy workqueues and worker pools:
[  589.954843][    C1] workqueue events: flags=0x0
[  589.955222][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  589.955276][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  589.958545][    C1] workqueue events_unbound: flags=0x2
[  589.959008][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  589.959055][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  589.959183][    C1] workqueue events_power_efficient: flags=0x80
[  589.961591][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  589.961640][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  589.961788][    C1] workqueue rcu_gp: flags=0x8
[  589.963963][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  589.964015][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  589.964056][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  589.964123][    C1] workqueue mm_percpu_wq: flags=0x8
[  589.966276][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  589.966332][    C1]     pending: vmstat_update
[  589.966379][    C1] workqueue cgroup_destroy: flags=0x0
[  589.967806][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  589.967856][    C1]     pending: css_killed_work_fn
[  589.967888][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  589.968037][    C1] workqueue ipv6_addrconf: flags=0x40008
[  589.970821][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  589.970879][    C1]     pending: addrconf_verify_work
[  589.970930][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=519s workers=3 idle: 23 36
[  589.970998][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  620.161550][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 550s!
[  620.162374][    C1] Showing busy workqueues and worker pools:
[  620.162952][    C1] workqueue events: flags=0x0
[  620.163356][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  620.163429][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  620.166742][    C1] workqueue events_unbound: flags=0x2
[  620.167203][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  620.167247][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  620.167384][    C1] workqueue events_power_efficient: flags=0x80
[  620.169800][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  620.169855][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  620.170007][    C1] workqueue rcu_gp: flags=0x8
[  620.172416][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  620.172466][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  620.172506][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  620.174272][    C1] workqueue mm_percpu_wq: flags=0x8
[  620.174791][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  620.174850][    C1]     pending: vmstat_update
[  620.174900][    C1] workqueue cgroup_destroy: flags=0x0
[  620.176449][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  620.176511][    C1]     pending: css_killed_work_fn
[  620.176545][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  620.178995][    C1] workqueue ipv6_addrconf: flags=0x40008
[  620.179458][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  620.179508][    C1]     pending: addrconf_verify_work
[  620.180619][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=550s workers=3 idle: 23 36
[  620.180678][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  650.369599][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 580s!
[  650.370205][    C1] Showing busy workqueues and worker pools:
[  650.370772][    C1] workqueue events: flags=0x0
[  650.371192][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  650.371248][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  650.374574][    C1] workqueue events_unbound: flags=0x2
[  650.375046][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  650.375094][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  650.375217][    C1] workqueue events_power_efficient: flags=0x80
[  650.377523][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  650.377581][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  650.377756][    C1] workqueue rcu_gp: flags=0x8
[  650.379967][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  650.380011][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  650.380044][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  650.380097][    C1] workqueue mm_percpu_wq: flags=0x8
[  650.382130][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  650.382185][    C1]     pending: vmstat_update
[  650.382231][    C1] workqueue cgroup_destroy: flags=0x0
[  650.383783][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  650.383834][    C1]     pending: css_killed_work_fn
[  650.383864][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  650.384013][    C1] workqueue ipv6_addrconf: flags=0x40008
[  650.386562][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  650.386618][    C1]     pending: addrconf_verify_work
[  650.386671][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=580s workers=3 idle: 23 36
[  650.386739][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  680.578270][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 610s!
[  680.579223][    C1] Showing busy workqueues and worker pools:
[  680.579757][    C1] workqueue events: flags=0x0
[  680.580171][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  680.580229][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  680.583530][    C1] workqueue events_unbound: flags=0x2
[  680.583987][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=5/512 refcnt=8
[  680.584033][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  680.584152][    C1]     pending: toggle_allocation_gate
[  680.584184][    C1] workqueue events_power_efficient: flags=0x80
[  680.587166][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  680.587219][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  680.587389][    C1] workqueue rcu_gp: flags=0x8
[  680.589740][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  680.589793][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  680.589834][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  680.589900][    C1] workqueue mm_percpu_wq: flags=0x8
[  680.592043][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  680.592096][    C1]     pending: vmstat_update
[  680.592143][    C1] workqueue cgroup_destroy: flags=0x0
[  680.593720][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  680.593780][    C1]     pending: css_killed_work_fn
[  680.593816][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  680.593991][    C1] workqueue ipv6_addrconf: flags=0x40008
[  680.596853][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  680.596933][    C1]     pending: addrconf_verify_work
[  680.596981][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=610s workers=3 idle: 23 36
[  680.597046][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  710.785506][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 640s!
[  710.786354][    C1] Showing busy workqueues and worker pools:
[  710.786896][    C1] workqueue events: flags=0x0
[  710.787277][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  710.787331][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  710.790604][    C1] workqueue events_unbound: flags=0x2
[  710.791083][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  710.791133][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  710.791267][    C1] workqueue events_power_efficient: flags=0x80
[  710.793708][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  710.793765][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  710.793934][    C1] workqueue rcu_gp: flags=0x8
[  710.796240][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  710.796294][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  710.796335][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  710.796403][    C1] workqueue mm_percpu_wq: flags=0x8
[  710.798676][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  710.798728][    C1]     pending: vmstat_update
[  710.798775][    C1] workqueue cgroup_destroy: flags=0x0
[  710.800268][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  710.800324][    C1]     pending: css_killed_work_fn
[  710.800357][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  710.802827][    C1] workqueue ipv6_addrconf: flags=0x40008
[  710.803289][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  710.803345][    C1]     pending: addrconf_verify_work
[  710.803396][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=640s workers=3 idle: 23 36
[  710.803461][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  740.993516][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 670s!
[  740.994427][    C1] Showing busy workqueues and worker pools:
[  740.994975][    C1] workqueue events: flags=0x0
[  740.995387][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  740.995432][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  740.998508][    C1] workqueue events_unbound: flags=0x2
[  740.998989][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  740.999055][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  740.999175][    C1] workqueue events_power_efficient: flags=0x80
[  741.001710][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  741.001765][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  741.001922][    C1] workqueue rcu_gp: flags=0x8
[  741.004173][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  741.004223][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  741.004262][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  741.004326][    C1] workqueue mm_percpu_wq: flags=0x8
[  741.006403][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  741.006456][    C1]     pending: vmstat_update
[  741.007516][    C1] workqueue cgroup_destroy: flags=0x0
[  741.007988][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  741.008042][    C1]     pending: css_killed_work_fn
[  741.008072][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  741.008235][    C1] workqueue ipv6_addrconf: flags=0x40008
[  741.011076][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  741.011131][    C1]     pending: addrconf_verify_work
[  741.011181][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=671s workers=3 idle: 23 36
[  741.011244][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  771.201509][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 701s!
[  771.202368][    C1] Showing busy workqueues and worker pools:
[  771.202945][    C1] workqueue events: flags=0x0
[  771.203355][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  771.203416][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  771.206775][    C1] workqueue events_unbound: flags=0x2
[  771.207260][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  771.207307][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  771.207427][    C1] workqueue events_power_efficient: flags=0x80
[  771.209848][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  771.209909][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  771.210088][    C1] workqueue rcu_gp: flags=0x8
[  771.212125][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  771.212181][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  771.212224][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  771.212292][    C1] workqueue mm_percpu_wq: flags=0x8
[  771.214339][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  771.214392][    C1]     pending: vmstat_update
[  771.214437][    C1] workqueue cgroup_destroy: flags=0x0
[  771.215926][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  771.215987][    C1]     pending: css_killed_work_fn
[  771.216019][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  771.216189][    C1] workqueue ipv6_addrconf: flags=0x40008
[  771.219053][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  771.219323][    C1]     pending: addrconf_verify_work
[  771.219374][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=701s workers=3 idle: 23 36
[  771.219439][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  801.410329][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 731s!
[  801.411198][    C1] Showing busy workqueues and worker pools:
[  801.411729][    C1] workqueue events: flags=0x0
[  801.412126][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  801.412180][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  801.415504][    C1] workqueue events_unbound: flags=0x2
[  801.415971][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  801.416019][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  801.416144][    C1] workqueue events_power_efficient: flags=0x80
[  801.418610][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  801.418667][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  801.418822][    C1] workqueue rcu_gp: flags=0x8
[  801.421028][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  801.421073][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  801.421109][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  801.421163][    C1] workqueue mm_percpu_wq: flags=0x8
[  801.423291][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  801.423347][    C1]     pending: vmstat_update
[  801.423394][    C1] workqueue cgroup_destroy: flags=0x0
[  801.424904][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  801.424956][    C1]     pending: css_killed_work_fn
[  801.424988][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  801.425154][    C1] workqueue ipv6_addrconf: flags=0x40008
[  801.427954][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  801.428011][    C1]     pending: addrconf_verify_work
[  801.428064][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=731s workers=3 idle: 23 36
[  801.428134][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  831.617538][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 761s!
[  831.618345][    C1] Showing busy workqueues and worker pools:
[  831.618921][    C1] workqueue events: flags=0x0
[  831.619320][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  831.619380][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  831.622661][    C1] workqueue events_unbound: flags=0x2
[  831.623135][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  831.623191][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  831.623337][    C1] workqueue events_power_efficient: flags=0x80
[  831.626041][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  831.626122][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  831.626316][    C1] workqueue rcu_gp: flags=0x8
[  831.628626][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  831.628682][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  831.628728][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  831.628798][    C1] workqueue mm_percpu_wq: flags=0x8
[  831.631035][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  831.631098][    C1]     pending: vmstat_update
[  831.631150][    C1] workqueue cgroup_destroy: flags=0x0
[  831.632675][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  831.632731][    C1]     pending: css_killed_work_fn
[  831.632765][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  831.632955][    C1] workqueue ipv6_addrconf: flags=0x40008
[  831.635610][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  831.635653][    C1]     pending: addrconf_verify_work
[  831.635692][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=761s workers=3 idle: 23 36
[  831.635744][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  861.825544][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 791s!
[  861.826418][    C1] Showing busy workqueues and worker pools:
[  861.826963][    C1] workqueue events: flags=0x0
[  861.827374][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  861.827418][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  861.830571][    C1] workqueue events_unbound: flags=0x2
[  861.831021][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  861.831067][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  861.831190][    C1] workqueue events_power_efficient: flags=0x80
[  861.833611][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  861.833668][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  861.833838][    C1] workqueue rcu_gp: flags=0x8
[  861.836085][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  861.836128][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  861.836165][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  861.836223][    C1] workqueue mm_percpu_wq: flags=0x8
[  861.838322][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  861.838376][    C1]     pending: vmstat_update
[  861.838423][    C1] workqueue cgroup_destroy: flags=0x0
[  861.840003][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  861.840062][    C1]     pending: css_killed_work_fn
[  861.840120][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  861.840255][    C1] workqueue ipv6_addrconf: flags=0x40008
[  861.842948][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  861.842999][    C1]     pending: addrconf_verify_work
[  861.843048][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=791s workers=3 idle: 23 36
[  861.843118][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  892.033547][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 822s!
[  892.034444][    C1] Showing busy workqueues and worker pools:
[  892.035014][    C1] workqueue events: flags=0x0
[  892.035425][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  892.035485][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  892.038781][    C1] workqueue events_unbound: flags=0x2
[  892.039243][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=5/512 refcnt=8
[  892.039288][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  892.039406][    C1]     pending: toggle_allocation_gate
[  892.039437][    C1] workqueue events_power_efficient: flags=0x80
[  892.042420][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  892.042482][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  892.044566][    C1] workqueue rcu_gp: flags=0x8
[  892.044969][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  892.045018][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  892.045057][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  892.045123][    C1] workqueue mm_percpu_wq: flags=0x8
[  892.047297][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  892.047355][    C1]     pending: vmstat_update
[  892.047402][    C1] workqueue cgroup_destroy: flags=0x0
[  892.048963][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  892.049035][    C1]     pending: css_killed_work_fn
[  892.049067][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  892.049230][    C1] workqueue ipv6_addrconf: flags=0x40008
[  892.052055][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  892.052118][    C1]     pending: addrconf_verify_work
[  892.052176][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=822s workers=3 idle: 23 36
[  892.052252][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  922.242347][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 852s!
[  922.243300][    C1] Showing busy workqueues and worker pools:
[  922.243812][    C1] workqueue events: flags=0x0
[  922.244216][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  922.244262][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  922.247381][    C1] workqueue events_unbound: flags=0x2
[  922.247877][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  922.247927][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  922.248053][    C1] workqueue events_power_efficient: flags=0x80
[  922.250508][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  922.250553][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  922.250689][    C1] workqueue rcu_gp: flags=0x8
[  922.252888][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  922.252939][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  922.252979][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  922.253044][    C1] workqueue mm_percpu_wq: flags=0x8
[  922.255182][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  922.255232][    C1]     pending: vmstat_update
[  922.255277][    C1] workqueue cgroup_destroy: flags=0x0
[  922.256691][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  922.256749][    C1]     pending: css_killed_work_fn
[  922.256784][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  922.256969][    C1] workqueue ipv6_addrconf: flags=0x40008
[  922.259659][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  922.259713][    C1]     pending: addrconf_verify_work
[  922.259760][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=852s workers=3 idle: 23 36
[  922.259823][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[  952.449517][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 882s!
[  952.450408][    C1] Showing busy workqueues and worker pools:
[  952.450995][    C1] workqueue events: flags=0x0
[  952.451428][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  952.451489][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  952.454524][    C1] workqueue events_unbound: flags=0x2
[  952.455011][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  952.455061][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  952.455195][    C1] workqueue events_power_efficient: flags=0x80
[  952.457674][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  952.457730][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  952.457880][    C1] workqueue rcu_gp: flags=0x8
[  952.460158][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  952.460205][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  952.460240][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  952.460303][    C1] workqueue mm_percpu_wq: flags=0x8
[  952.462396][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  952.462444][    C1]     pending: vmstat_update
[  952.463534][    C1] workqueue cgroup_destroy: flags=0x0
[  952.464021][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  952.464080][    C1]     pending: css_killed_work_fn
[  952.464113][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  952.464280][    C1] workqueue ipv6_addrconf: flags=0x40008
[  952.467161][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  952.467222][    C1]     pending: addrconf_verify_work
[  952.467275][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=882s workers=3 idle: 23 36
[  952.467347][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1871 1873
[  982.657536][    C1] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 912s!
[  982.658425][    C1] Showing busy workqueues and worker pools:
[  982.659004][    C1] workqueue events: flags=0x0
[  982.659421][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=15/256 refcnt=16
[  982.659480][    C1]     pending: do_free_init, e1000_watchdog, vmstat_shepherd, kfree_rcu_monitor, regulator_init_complete_work_function, kernfs_notify_workfn, release_one_tty, key_garbage_collector, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty
[  982.662894][    C1] workqueue events_unbound: flags=0x2
[  982.663394][    C1]   pwq 4: cpus=0-1 flags=0x4 nice=0 active=4/512 refcnt=7
[  982.663445][    C1]     in-flight: 40:fsnotify_mark_destroy_workfn fsnotify_mark_destroy_workfn BAR(1), 9:fsnotify_connector_destroy_workfn fsnotify_connector_destroy_workfn
[  982.665500][    C1] workqueue events_power_efficient: flags=0x80
[  982.665994][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=7/256 refcnt=8
[  982.666044][    C1]     pending: neigh_managed_work, neigh_managed_work, neigh_managed_work, neigh_periodic_work, neigh_periodic_work, neigh_periodic_work, check_lifetime
[  982.666181][    C1] workqueue rcu_gp: flags=0x8
[  982.668550][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=3/256 refcnt=4
[  982.668597][    C1]     in-flight: 179:sync_rcu_do_polled_gp
[  982.668634][    C1]     pending: sync_rcu_do_polled_gp, process_srcu
[  982.668695][    C1] workqueue mm_percpu_wq: flags=0x8
[  982.670910][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[  982.670971][    C1]     pending: vmstat_update
[  982.671020][    C1] workqueue cgroup_destroy: flags=0x0
[  982.672512][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=9
[  982.672559][    C1]     pending: css_killed_work_fn
[  982.672587][    C1]     inactive: css_killed_work_fn, css_killed_work_fn, css_release_work_fn, css_release_work_fn, css_killed_work_fn, css_killed_work_fn, css_killed_work_fn
[  982.672740][    C1] workqueue ipv6_addrconf: flags=0x40008
[  982.675414][    C1]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2
[  982.675466][    C1]     pending: addrconf_verify_work
[  982.676570][    C1] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=912s workers=3 idle: 23 36
[  982.676637][    C1] pool 4: cpus=0-1 flags=0x4 nice=0 hung=0s workers=4 idle: 1873 1871
[ 1004.162318][   T29] INFO: task systemd:1 blocked for more than 491 seconds.
[ 1004.163047][   T29]       Not tainted 5.17.0-rc1-00111-g556d8afe4a77 #1
[ 1004.163690][   T29] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1004.164413][   T29] task:systemd         state:D stack:    0 pid:    1 ppid:     0 flags:0x00004000
[ 1004.165227][   T29] Call Trace:
[ 1004.165552][   T29]  <TASK>
[ 1004.165808][ T29] __schedule (kernel/sched/core.c:4986 kernel/sched/core.c:6296) 
[ 1004.166207][ T29] ? usleep_range_state (kernel/time/timer.c:1843) 
[ 1004.166670][ T29] schedule (arch/x86/include/asm/preempt.h:85 (discriminator 1) kernel/sched/core.c:6370 (discriminator 1)) 
[ 1004.167044][ T29] schedule_timeout (kernel/time/timer.c:1858) 
[ 1004.167498][ T29] ? hlock_class (kernel/locking/lockdep.c:199) 
[ 1004.167918][ T29] ? write_comp_data (kernel/kcov.c:221) 
[ 1004.168370][ T29] ? lockdep_hardirqs_on_prepare (kernel/locking/lockdep.c:438 kernel/locking/lockdep.c:4293 kernel/locking/lockdep.c:4244) 
[ 1004.168962][ T29] ? _raw_spin_unlock_irq (arch/x86/include/asm/irqflags.h:45 arch/x86/include/asm/irqflags.h:80 include/linux/spinlock_api_smp.h:159 kernel/locking/spinlock.c:202) 
[ 1004.169435][ T29] __wait_for_common (kernel/sched/completion.c:86 kernel/sched/completion.c:106) 
[ 1004.169915][ T29] __flush_work (kernel/workqueue.c:3095) 
[ 1004.170331][ T29] ? flush_workqueue_prep_pwqs (kernel/workqueue.c:2660) 
[ 1004.170926][ T29] ? __wait_for_common (kernel/sched/completion.c:74 kernel/sched/completion.c:106) 
[ 1004.171397][ T29] ? inotify_poll (fs/notify/inotify/inotify_user.c:288) 
[ 1004.171855][ T29] fsnotify_destroy_group (fs/notify/group.c:84 (discriminator 1)) 
[ 1004.172348][ T29] ? __sanitizer_cov_trace_pc (kernel/kcov.c:200) 
[ 1004.172852][ T29] ? locks_remove_file (fs/locks.c:2620) 
[ 1004.173300][ T29] ? inotify_poll (fs/notify/inotify/inotify_user.c:288) 
[ 1004.173722][ T29] inotify_release (fs/notify/inotify/inotify_user.c:297) 
[ 1004.174143][ T29] __fput (fs/file_table.c:312) 
[ 1004.174545][ T29] task_work_run (kernel/task_work.c:166 (discriminator 1)) 
[ 1004.175001][ T29] exit_to_user_mode_prepare (include/linux/tracehook.h:197 kernel/entry/common.c:175 kernel/entry/common.c:207) 
[ 1004.175548][ T29] syscall_exit_to_user_mode (kernel/entry/common.c:126 kernel/entry/common.c:302) 
[ 1004.176065][ T29] do_syscall_64 (arch/x86/entry/common.c:87) 
[ 1004.176511][ T29] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:113) 
[ 1004.177062][   T29] RIP: 0033:0x7fa39f4f7b54
[ 1004.177438][   T29] RSP: 002b:00007fffb0b4d7d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
[ 1004.178185][   T29] RAX: 0000000000000000 RBX: 0000000000000017 RCX: 00007fa39f4f7b54
[ 1004.178867][   T29] RDX: 00007fa39f5c8ca0 RSI: 0000000000000000 RDI: 0000000000000017
[ 1004.179606][   T29] RBP: 00007fa39e3468c0 R08: 00000000000000ef R09: 00007fa3a13f2060
[ 1004.180297][   T29] R10: 0000000000000007 R11: 0000000000000246 R12: 0000000000000000
[ 1004.181006][   T29] R13: 00007fffb0b4d890 R14: 0000000000000001 R15: 00007fa39f6e3a8e
[ 1004.181724][   T29]  </TASK>
[ 1004.182001][   T29] INFO: task kworker/u4:1:9 blocked for more than 491 seconds.
[ 1004.182640][   T29]       Not tainted 5.17.0-rc1-00111-g556d8afe4a77 #1
[ 1004.183221][   T29] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1004.183958][   T29] task:kworker/u4:1    state:D stack:    0 pid:    9 ppid:     2 flags:0x00004000
[ 1004.184769][   T29] Workqueue: events_unbound fsnotify_connector_destroy_workfn
[ 1004.185432][   T29] Call Trace:
[ 1004.185753][   T29]  <TASK>
[ 1004.186042][ T29] __schedule (kernel/sched/core.c:4986 kernel/sched/core.c:6296) 
[ 1004.186441][ T29] ? to_kthread (kernel/kthread.c:77 (discriminator 3)) 
[ 1004.186867][ T29] ? usleep_range_state (kernel/time/timer.c:1843) 
[ 1004.187313][ T29] schedule (arch/x86/include/asm/preempt.h:85 (discriminator 1) kernel/sched/core.c:6370 (discriminator 1)) 
[ 1004.187713][ T29] schedule_timeout (kernel/time/timer.c:1858) 
[ 1004.188141][ T29] ? mark_held_locks (kernel/locking/lockdep.c:4206) 
[ 1004.188598][ T29] ? lockdep_hardirqs_on_prepare (kernel/locking/lockdep.c:438 kernel/locking/lockdep.c:4293 kernel/locking/lockdep.c:4244) 
[ 1004.189130][ T29] ? _raw_spin_unlock_irq (arch/x86/include/asm/irqflags.h:45 arch/x86/include/asm/irqflags.h:80 include/linux/spinlock_api_smp.h:159 kernel/locking/spinlock.c:202) 
[ 1004.189611][ T29] __wait_for_common (kernel/sched/completion.c:86 kernel/sched/completion.c:106) 
[ 1004.190073][ T29] __synchronize_srcu (kernel/rcu/srcutree.c:1154) 
[ 1004.190574][ T29] ? rcu_tasks_pregp_step (kernel/rcu/update.c:367) 
[ 1004.191049][ T29] ? __wait_for_common (kernel/sched/completion.c:74 kernel/sched/completion.c:106) 
[ 1004.191549][ T29] fsnotify_connector_destroy_workfn (fs/notify/mark.c:165) 
[ 1004.192112][ T29] process_one_work (arch/x86/include/asm/atomic.h:29 include/linux/atomic/atomic-instrumented.h:28 include/linux/jump_label.h:266 include/linux/jump_label.h:276 include/trace/events/workqueue.h:108 kernel/workqueue.c:2312) 
[ 1004.192605][ T29] worker_thread (include/linux/list.h:292 kernel/workqueue.c:2455) 
[ 1004.193020][ T29] ? rescuer_thread (kernel/workqueue.c:2397) 
[ 1004.193452][ T29] kthread (kernel/kthread.c:377) 
[ 1004.193834][ T29] ? kthread_complete_and_exit (kernel/kthread.c:332) 
[ 1004.194326][ T29] ret_from_fork (arch/x86/entry/entry_64.S:301) 
[ 1004.194768][   T29]  </TASK>
[ 1004.195061][   T29] INFO: task kworker/u4:2:40 blocked for more than 491 seconds.
[ 1004.195713][   T29]       Not tainted 5.17.0-rc1-00111-g556d8afe4a77 #1
[ 1004.196309][   T29] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1004.197064][   T29] task:kworker/u4:2    state:D stack:    0 pid:   40 ppid:     2 flags:0x00004000
[ 1004.197887][   T29] Workqueue: events_unbound fsnotify_mark_destroy_workfn
[ 1004.198529][   T29] Call Trace:
[ 1004.198816][   T29]  <TASK>
[ 1004.199095][ T29] __schedule (kernel/sched/core.c:4986 kernel/sched/core.c:6296) 
[ 1004.199524][ T29] ? to_kthread (kernel/kthread.c:77 (discriminator 3)) 
[ 1004.199911][ T29] ? usleep_range_state (kernel/time/timer.c:1843) 
[ 1004.200369][ T29] schedule (arch/x86/include/asm/preempt.h:85 (discriminator 1) kernel/sched/core.c:6370 (discriminator 1)) 
[ 1004.200769][ T29] schedule_timeout (kernel/time/timer.c:1858) 
[ 1004.201213][ T29] ? mark_held_locks (kernel/locking/lockdep.c:4206) 
[ 1004.201689][ T29] ? lockdep_hardirqs_on_prepare (kernel/locking/lockdep.c:438 kernel/locking/lockdep.c:4293 kernel/locking/lockdep.c:4244) 
[ 1004.202245][ T29] ? _raw_spin_unlock_irq (arch/x86/include/asm/irqflags.h:45 arch/x86/include/asm/irqflags.h:80 include/linux/spinlock_api_smp.h:159 kernel/locking/spinlock.c:202) 
[ 1004.202716][ T29] __wait_for_common (kernel/sched/completion.c:86 kernel/sched/completion.c:106) 
[ 1004.203170][ T29] __synchronize_srcu (kernel/rcu/srcutree.c:1154) 
[ 1004.203638][ T29] ? rcu_tasks_pregp_step (kernel/rcu/update.c:367) 
[ 1004.204121][ T29] ? __wait_for_common (kernel/sched/completion.c:74 kernel/sched/completion.c:106) 
[ 1004.204625][ T29] fsnotify_mark_destroy_workfn (fs/notify/mark.c:866) 
[ 1004.205153][ T29] process_one_work (arch/x86/include/asm/atomic.h:29 include/linux/atomic/atomic-instrumented.h:28 include/linux/jump_label.h:266 include/linux/jump_label.h:276 include/trace/events/workqueue.h:108 kernel/workqueue.c:2312) 
[ 1004.205643][ T29] worker_thread (include/linux/list.h:292 kernel/workqueue.c:2455) 
[ 1004.206080][ T29] ? rescuer_thread (kernel/workqueue.c:2397) 
[ 1004.206559][ T29] kthread (kernel/kthread.c:377) 
[ 1004.206931][ T29] ? kthread_complete_and_exit (kernel/kthread.c:332) 
[ 1004.207426][ T29] ret_from_fork (arch/x86/entry/entry_64.S:301) 
[ 1004.207878][   T29]  </TASK>
[ 1004.208180][   T29] INFO: task rmmod:1812 blocked for more than 491 seconds.
[ 1004.208797][   T29]       Not tainted 5.17.0-rc1-00111-g556d8afe4a77 #1
[ 1004.209395][   T29] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1004.210155][   T29] task:rmmod           state:D stack:    0 pid: 1812 ppid:   508 flags:0x00004000
[ 1004.210953][   T29] Call Trace:
[ 1004.211243][   T29]  <TASK>
[ 1004.211532][ T29] __schedule (kernel/sched/core.c:4986 kernel/sched/core.c:6296) 
[ 1004.211961][ T29] ? usleep_range_state (kernel/time/timer.c:1843) 
[ 1004.212427][ T29] schedule (arch/x86/include/asm/preempt.h:85 (discriminator 1) kernel/sched/core.c:6370 (discriminator 1)) 
[ 1004.212846][ T29] schedule_timeout (kernel/time/timer.c:1858) 
[ 1004.213296][ T29] ? hlock_class (kernel/locking/lockdep.c:199) 
[ 1004.213723][ T29] ? write_comp_data (kernel/kcov.c:221) 
[ 1004.214161][ T29] ? lockdep_hardirqs_on_prepare (kernel/locking/lockdep.c:438 kernel/locking/lockdep.c:4293 kernel/locking/lockdep.c:4244) 
[ 1004.214723][ T29] ? _raw_spin_unlock_irq (arch/x86/include/asm/irqflags.h:45 arch/x86/include/asm/irqflags.h:80 include/linux/spinlock_api_smp.h:159 kernel/locking/spinlock.c:202) 
[ 1004.215223][ T29] __wait_for_common (kernel/sched/completion.c:86 kernel/sched/completion.c:106) 
[ 1004.215702][ T29] kthread_stop (kernel/kthread.c:710) 
[ 1004.216129][ T29] _torture_stop_kthread (kernel/torture.c:956 (discriminator 3)) torture
[ 1004.216618][ T29] rcu_torture_cleanup (kernel/rcu/rcutorture.c:2995) rcutorture
[ 1004.217140][ T29] ? prepare_to_wait_exclusive (kernel/sched/wait.c:415) 
[ 1004.217667][ T29] __x64_sys_delete_module (kernel/module.c:969 kernel/module.c:912 kernel/module.c:912) 
[ 1004.218150][ T29] ? lockdep_hardirqs_on_prepare (kernel/locking/lockdep.c:438 kernel/locking/lockdep.c:4293 kernel/locking/lockdep.c:4244) 
[ 1004.218695][ T29] do_syscall_64 (arch/x86/entry/common.c:67 arch/x86/entry/common.c:80) 
[ 1004.219103][ T29] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:113) 
[ 1004.219635][   T29] RIP: 0033:0x7f93a158add7
[ 1004.220037][   T29] RSP: 002b:00007ffd6aef9048 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
[ 1004.220804][   T29] RAX: ffffffffffffffda RBX: 00007f93a3179900 RCX: 00007f93a158add7
[ 1004.221533][   T29] RDX: 000000000000000a RSI: 0000000000000800 RDI: 00007f93a3179968
[ 1004.222218][   T29] RBP: 0000000000000000 R08: 00007ffd6aef7fc1 R09: 0000000000000000
[ 1004.222940][   T29] R10: 00007f93a15fcae0 R11: 0000000000000206 R12: 00007ffd6aef9270
[ 1004.223624][   T29] R13: 00007ffd6aefadfc R14: 00007f93a3179260 R15: 00007f93a3179900
[ 1004.224302][   T29]  </TASK>
[ 1004.224593][   T29]
[ 1004.224593][   T29] Showing all locks held in the system:
[ 1004.225166][   T29] 2 locks held by kworker/u4:1/9:
[ 1004.225587][ T29] #0: ffff9b4480051548 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2278) 
[ 1004.226554][ T29] #1: ffff9b448016fe78 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2278) 
[ 1004.227509][   T29] 1 lock held by khungtaskd/29:
[ 1004.227914][ T29] #0: ffffffffbaddd460 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire (include/linux/rcupdate.h:267) 
[ 1004.228845][   T29] 2 locks held by kworker/u4:2/40:
[ 1004.229296][ T29] #0: ffff9b4480051548 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2278) 
[ 1004.230227][ T29] #1: ffff9b4480c93e78 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2278) 
[ 1004.231129][   T29] 2 locks held by kworker/1:2/179:
[ 1004.231578][   T29] 1 lock held by in:imklog/431:
[ 1004.231981][   T29] 1 lock held by dmesg/440:
[ 1004.232337][   T29]
[ 1004.232570][   T29] =============================================
[ 1004.232570][   T29]
[ 1004.233244][   T29] Kernel panic - not syncing: hung_task: blocked tasks
[ 1004.233796][   T29] CPU: 1 PID: 29 Comm: khungtaskd Not tainted 5.17.0-rc1-00111-g556d8afe4a77 #1
[ 1004.234584][   T29] Call Trace:
[ 1004.234898][   T29]  <TASK>
[ 1004.235182][ T29] dump_stack_lvl (lib/dump_stack.c:107 (discriminator 4)) 
[ 1004.235600][ T29] panic (kernel/panic.c:251) 
[ 1004.235952][ T29] ? _printk (kernel/printk/printk.c:2270) 
[ 1004.236313][ T29] ? watchdog (kernel/hung_task.c:216 kernel/hung_task.c:369) 
[ 1004.236698][ T29] watchdog (kernel/hung_task.c:370) 
[ 1004.237098][ T29] ? rcu_read_unlock (init/main.c:1291) 
[ 1004.237530][ T29] kthread (kernel/kthread.c:377) 
[ 1004.237898][ T29] ? kthread_complete_and_exit (kernel/kthread.c:332) 
[ 1004.238409][ T29] ret_from_fork (arch/x86/entry/entry_64.S:301) 
[ 1004.238815][   T29]  </TASK>
[ 1004.239265][   T29] Kernel Offset: 0x38200000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)

Kboot worker: lkp-worker57
Elapsed time: 1020

kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-67.cgz
-m 16384
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::32032-:22
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-serial stdio
-display none
-monitor null
)

append=(
ip=::::vm-snb-67::dhcp
root=/dev/ram0
RESULT_ROOT=/result/rcutorture/300s-default-rcu/vm-snb/debian-10.4-x86_64-20200603.cgz/x86_64-randconfig-a012-20210928/gcc-9/556d8afe4a779f41dfc8fa373993a88e43f1c5dc/3
BOOT_IMAGE=/pkg/linux/x86_64-randconfig-a012-20210928/gcc-9/556d8afe4a779f41dfc8fa373993a88e43f1c5dc/vmlinuz-5.17.0-rc1-00111-g556d8afe4a77
branch=linux-devel/devel-hourly-20220304-094445
job=/job-script
user=lkp
ARCH=x86_64
kconfig=x86_64-randconfig-a012-20210928
commit=556d8afe4a779f41dfc8fa373993a88e43f1c5dc
vmalloc=128M
initramfs_async=0
page_owner=on
max_uptime=2100
)

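For reference, one plausible way to turn the two bash arrays above into an actual QEMU invocation (an illustrative sketch only, not the exact launch wrapper used by the robot; the vmlinuz path placeholder below is an assumption):

        # Sketch: expand the kvm array into the QEMU command line, relying on
        # word splitting for multi-word elements such as "-cpu SandyBridge",
        # and pass the append array as the guest kernel command line.
        kernel=<path-to-vmlinuz-5.17.0-rc1-00111-g556d8afe4a77>   # placeholder path
        ${kvm[@]} -append "${append[*]}"
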

To reproduce:

        # build kernel
        cd linux
        cp config-5.17.0-rc1-00111-g556d8afe4a77 .config
        make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
        make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
        cd <mod-install-dir>
        find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz


        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email

        # If you come across any failure that blocks the test,
        # please remove ~/.lkp and the /lkp directory to run from a clean state.



---
0-DAY CI Kernel Test Service
https://lists.01.org/hyperkitty/list/lkp@lists.01.org

Thanks,
Oliver Sang


View attachment "config-5.17.0-rc1-00111-g556d8afe4a77" of type "text/plain" (122061 bytes)

View attachment "job-script" of type "text/plain" (4890 bytes)

Download attachment "dmesg.xz" of type "application/x-xz" (22464 bytes)

View attachment "rcutorture" of type "text/plain" (96231 bytes)
