Message-ID: <aYFlZf9p4cY0rIbc@fedora>
Date: Tue, 3 Feb 2026 11:03:01 +0800
From: Ming Lei <ming.lei@...hat.com>
To: 李龙兴 <coregee2000@...il.com>
Cc: syzkaller@...glegroups.com, tj@...nel.org, josef@...icpanda.com,
axboe@...nel.dk, cgroups@...r.kernel.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Kernel Bug] KASAN: slab-use-after-free Read in
__blkcg_rstat_flush
Hello,
On Mon, Feb 02, 2026 at 02:19:07PM +0800, 李龙兴 wrote:
> Dear Linux kernel developers and maintainers,
>
> We would like to report a new kernel bug found by our tool: a KASAN
> slab-use-after-free read in __blkcg_rstat_flush. Details are as
> follows.
>
> Kernel commit: v6.18.2
> Kernel config: see attachment
> report: see attachment
>
> We are currently analyzing the root cause and working on a
> reproducible PoC. We will provide further updates in this thread as
> soon as we have more information.
>
> Best regards,
> Longxing Li
>
> ==================================================================
> BUG: KASAN: slab-use-after-free in
> __blkcg_rstat_flush.isra.0+0x73c/0x800 block/blk-cgroup.c:1069
> Read of size 8 at addr ffff88810a8ba830 by task pool_workqueue_/3
>
> CPU: 1 UID: 0 PID: 3 Comm: pool_workqueue_ Not tainted 6.18.2 #1 PREEMPT(full)
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
> Call Trace:
> <IRQ>
> __dump_stack lib/dump_stack.c:94 [inline]
> dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
> print_address_description mm/kasan/report.c:378 [inline]
> print_report+0xcd/0x630 mm/kasan/report.c:482
> kasan_report+0xe0/0x110 mm/kasan/report.c:595
> __blkcg_rstat_flush.isra.0+0x73c/0x800 block/blk-cgroup.c:1069
> __blkg_release+0x1a6/0x2d0 block/blk-cgroup.c:179
> rcu_do_batch kernel/rcu/tree.c:2605 [inline]
Can you try the following patch?
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3cffb68ba5d8..dc0cccfdca68 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -160,6 +160,20 @@ static void blkg_free(struct blkcg_gq *blkg)
schedule_work(&blkg->free_work);
}
+/*
+ * RCU callback to free blkg after an additional grace period.
+ * This ensures any concurrent __blkcg_rstat_flush() that might have
+ * removed our iostat entries via llist_del_all() has completed.
+ */
+static void __blkg_release_free_rcu(struct rcu_head *rcu)
+{
+ struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
+
+ /* release the blkcg and parent blkg refs this blkg has been holding */
+ css_put(&blkg->blkcg->css);
+ blkg_free(blkg);
+}
+
static void __blkg_release(struct rcu_head *rcu)
{
struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
@@ -178,9 +192,14 @@ static void __blkg_release(struct rcu_head *rcu)
for_each_possible_cpu(cpu)
__blkcg_rstat_flush(blkcg, cpu);
- /* release the blkcg and parent blkg refs this blkg has been holding */
- css_put(&blkg->blkcg->css);
- blkg_free(blkg);
+ /*
+ * Defer freeing via another call_rcu() to ensure any concurrent
+ * __blkcg_rstat_flush() (under rcu_read_lock) that might have removed
+ * our iostat entries via llist_del_all() has completed its iteration.
+ * The second grace period guarantees those RCU read-side critical
+ * sections have finished.
+ */
+ call_rcu(&blkg->rcu_head, __blkg_release_free_rcu);
}
/*
thanks,
Ming