Message-ID: <c4d0005d-ae34-40d4-80a0-67ca904cdae1@suse.cz>
Date: Fri, 28 Feb 2025 15:42:02 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: "Uladzislau Rezki (Sony)" <urezki@...il.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>
Cc: RCU <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>, Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>, stable@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Keith Busch <kbusch@...nel.org>
Subject: Re: [PATCH v1 2/2] mm/slab/kvfree_rcu: Switch to WQ_MEM_RECLAIM wq
On 2/28/25 13:13, Uladzislau Rezki (Sony) wrote:
> Currently kvfree_rcu() APIs use a system workqueue, which is
> "system_unbound_wq", to drive the RCU machinery to reclaim memory.
>
> Recently, it has been noted that the following kernel warning can
> be observed:
>
> <snip>
> workqueue: WQ_MEM_RECLAIM nvme-wq:nvme_scan_work is flushing !WQ_MEM_RECLAIM events_unbound:kfree_rcu_work
> WARNING: CPU: 21 PID: 330 at kernel/workqueue.c:3719 check_flush_dependency+0x112/0x120
> Modules linked in: intel_uncore_frequency(E) intel_uncore_frequency_common(E) skx_edac(E) ...
> CPU: 21 UID: 0 PID: 330 Comm: kworker/u144:6 Tainted: G E 6.13.2-0_g925d379822da #1
> Hardware name: Wiwynn Twin Lakes MP/Twin Lakes Passive MP, BIOS YMM20 02/01/2023
> Workqueue: nvme-wq nvme_scan_work
> RIP: 0010:check_flush_dependency+0x112/0x120
> Code: 05 9a 40 14 02 01 48 81 c6 c0 00 00 00 48 8b 50 18 48 81 c7 c0 00 00 00 48 89 f9 48 ...
> RSP: 0018:ffffc90000df7bd8 EFLAGS: 00010082
> RAX: 000000000000006a RBX: ffffffff81622390 RCX: 0000000000000027
> RDX: 00000000fffeffff RSI: 000000000057ffa8 RDI: ffff88907f960c88
> RBP: 0000000000000000 R08: ffffffff83068e50 R09: 000000000002fffd
> R10: 0000000000000004 R11: 0000000000000000 R12: ffff8881001a4400
> R13: 0000000000000000 R14: ffff88907f420fb8 R15: 0000000000000000
> FS: 0000000000000000(0000) GS:ffff88907f940000(0000) knlGS:0000000000000000
> CR2: 00007f60c3001000 CR3: 000000107d010005 CR4: 00000000007726f0
> PKRU: 55555554
> Call Trace:
> <TASK>
> ? __warn+0xa4/0x140
> ? check_flush_dependency+0x112/0x120
> ? report_bug+0xe1/0x140
> ? check_flush_dependency+0x112/0x120
> ? handle_bug+0x5e/0x90
> ? exc_invalid_op+0x16/0x40
> ? asm_exc_invalid_op+0x16/0x20
> ? timer_recalc_next_expiry+0x190/0x190
> ? check_flush_dependency+0x112/0x120
> ? check_flush_dependency+0x112/0x120
> __flush_work.llvm.1643880146586177030+0x174/0x2c0
> flush_rcu_work+0x28/0x30
> kvfree_rcu_barrier+0x12f/0x160
> kmem_cache_destroy+0x18/0x120
> bioset_exit+0x10c/0x150
> disk_release.llvm.6740012984264378178+0x61/0xd0
> device_release+0x4f/0x90
> kobject_put+0x95/0x180
> nvme_put_ns+0x23/0xc0
> nvme_remove_invalid_namespaces+0xb3/0xd0
> nvme_scan_work+0x342/0x490
> process_scheduled_works+0x1a2/0x370
> worker_thread+0x2ff/0x390
> ? pwq_release_workfn+0x1e0/0x1e0
> kthread+0xb1/0xe0
> ? __kthread_parkme+0x70/0x70
> ret_from_fork+0x30/0x40
> ? __kthread_parkme+0x70/0x70
> ret_from_fork_asm+0x11/0x20
> </TASK>
> ---[ end trace 0000000000000000 ]---
> <snip>
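The rule the warning enforces: work running on a WQ_MEM_RECLAIM workqueue
must not wait on work queued to a !WQ_MEM_RECLAIM workqueue, because the
latter has no rescuer thread and may stall under memory pressure. A minimal
sketch of the forbidden pattern (illustrative only, not from the patch, all
names hypothetical):

<snip>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *reclaim_wq;	/* WQ_MEM_RECLAIM */
static struct work_struct plain_work;		/* queued on system_unbound_wq */

static void plain_fn(struct work_struct *work)
{
	/* stand-in for kfree_rcu_work running on a !WQ_MEM_RECLAIM wq */
}

static void reclaim_fn(struct work_struct *work)
{
	/*
	 * A WQ_MEM_RECLAIM worker (playing the role of nvme_scan_work here)
	 * flushing !WQ_MEM_RECLAIM work is exactly what
	 * check_flush_dependency() complains about.
	 */
	flush_work(&plain_work);
}

static DECLARE_WORK(reclaim_work, reclaim_fn);

static int __init flush_dep_demo_init(void)
{
	reclaim_wq = alloc_workqueue("flush_dep_demo", WQ_MEM_RECLAIM, 0);
	if (!reclaim_wq)
		return -ENOMEM;

	INIT_WORK(&plain_work, plain_fn);
	queue_work(system_unbound_wq, &plain_work);
	queue_work(reclaim_wq, &reclaim_work);	/* triggers the WARN above */
	return 0;
}
module_init(flush_dep_demo_init);
MODULE_LICENSE("GPL");
<snip>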
>
> To address this, switch to a separate WQ_MEM_RECLAIM workqueue, so
> the rules are not violated from the workqueue framework's point of
> view.
>
> Apart from that, since kvfree_rcu() does reclaim memory, it is worth
> going with a WQ_MEM_RECLAIM workqueue anyway, because that is exactly
> what it is designed for.
>
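The direction of the fix, as a rough sketch (identifiers are illustrative,
not the actual patch):

<snip>
#include <linux/init.h>
#include <linux/workqueue.h>

/* hypothetical name for a dedicated reclaim-safe workqueue */
static struct workqueue_struct *kvfree_rcu_wq;

static int __init kvfree_rcu_wq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM gives the workqueue a rescuer thread, so it is
	 * guaranteed to make forward progress under memory pressure and
	 * may be flushed from other WQ_MEM_RECLAIM workers without
	 * tripping check_flush_dependency().
	 */
	kvfree_rcu_wq = alloc_workqueue("kvfree_rcu_reclaim",
					WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
	return kvfree_rcu_wq ? 0 : -ENOMEM;
}

/* The kvfree_rcu() machinery would then queue its RCU-deferred work
 * here instead of on system_unbound_wq, e.g.: */
static bool kvfree_queue(struct rcu_work *rwork)
{
	return queue_rcu_work(kvfree_rcu_wq, rwork);
}
<snip>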
> Cc: <stable@...r.kernel.org>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
stable is sufficient, no need to cc Greg himself too
> Cc: Keith Busch <kbusch@...nel.org>
> Closes: https://www.spinics.net/lists/kernel/msg5563270.html
lore pls :)
> Fixes: 6c6c47b063b5 ("mm, slab: call kvfree_rcu_barrier() from kmem_cache_destroy()")
> Reported-by: Keith Busch <kbusch@...nel.org>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
fixed locally and pushed to slab/for-next-fixes
thanks!