Message-ID: <ffzrfu62npwacsl3225qqyjbhd6oue3x3rt46l2wcyp5oq4eli@26gvvst6hrmu>
Date: Tue, 3 Feb 2026 11:54:34 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: 李龙兴 <coregee2000@...il.com>,
syzkaller@...glegroups.com, tj@...nel.org, josef@...icpanda.com, axboe@...nel.dk,
cgroups@...r.kernel.org, linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
yukuai@...as.com
Subject: Re: [Kernel Bug] KASAN: slab-use-after-free Read in
__blkcg_rstat_flush
Hello.
On Tue, Feb 03, 2026 at 11:03:01AM +0800, Ming Lei <ming.lei@...hat.com> wrote:
> Can you try the following patch?
I think it'd work, thanks to the rcu_read_lock() in
__blkcg_rstat_flush(). However, chaining RCU callbacks makes the
release path less predictable, and it may be unnecessary.
What about this:
index 3cffb68ba5d87..e2f51e3bf04ef 100644
--- a/tmp/b.c
+++ b/tmp/a.c
@@ -1081,6 +1081,7 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
 		smp_mb();
 
 		WRITE_ONCE(bisc->lqueued, false);
+		blkg_put(blkg);
 		if (bisc == &blkg->iostat)
 			goto propagate_up; /* propagate up to parent only */
 
@@ -2220,8 +2221,10 @@ void blk_cgroup_bio_start(struct bio *bio)
 	if (!READ_ONCE(bis->lqueued)) {
 		struct llist_head *lhead = this_cpu_ptr(blkcg->lhead);
 
+		blkg_get(bio->bi_blkg);
 		llist_add(&bis->lnode, lhead);
 		WRITE_ONCE(bis->lqueued, true);
+
 	}
 
 	u64_stats_update_end_irqrestore(&bis->sync, flags);
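
For illustration only, here is a minimal userspace sketch of the idea
behind the diff: take a reference when the node is queued on the
lock-less list and drop it only in the flusher, so the owner cannot be
freed while its node is still queued. All names below are made up; this
is not the kernel API, just the shape of the get/put pairing:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct blkcg_gq: a refcounted owner of one stat node. */
struct obj {
	atomic_int ref;			/* plays the role of blkg->refcnt */
	long stat;			/* plays the role of blkg->iostat */
	struct obj *lnext;		/* llist-style link */
};

static _Atomic(struct obj *) lhead;	/* stands in for blkcg->lhead */

static void obj_get(struct obj *o)
{
	atomic_fetch_add(&o->ref, 1);
}

static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->ref, 1) == 1) {
		printf("freeing obj, stat=%ld\n", o->stat);
		free(o);
	}
}

/* blk_cgroup_bio_start() side: pin the owner while its node is queued. */
static void obj_queue(struct obj *o)
{
	struct obj *old = atomic_load(&lhead);

	obj_get(o);			/* pairs with obj_put() in obj_flush() */
	do {
		o->lnext = old;
	} while (!atomic_compare_exchange_weak(&lhead, &old, o));
}

/* __blkcg_rstat_flush() side: consume the list, then drop the pins. */
static void obj_flush(void)
{
	struct obj *o = atomic_exchange(&lhead, NULL);

	while (o) {
		struct obj *next = o->lnext;

		printf("flushing stat=%ld\n", o->stat);
		obj_put(o);		/* owner may be freed here, after use */
		o = next;
	}
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	atomic_init(&o->ref, 1);	/* the caller's own reference */
	o->stat = 42;

	obj_queue(o);
	obj_put(o);			/* caller drops its reference early ... */
	obj_flush();			/* ... the queued ref keeps it alive until here */
	return 0;
}

(Builds with any C11 compiler; removing the obj_get() in obj_queue()
lets the caller's obj_put() free the object while it is still on the
list, which is analogous to the reported slab-use-after-free.)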
(If only I remembered whether a reference taken from blkcg->lhead causes
a reference cycle...)
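
To make that concern concrete: if an object pinned while it sits on
blkcg->lhead itself (directly or indirectly) keeps the blkcg alive,
neither refcount could ever drop to zero. A purely illustrative
userspace sketch of such a cycle (hypothetical types, not the kernel
objects):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct node_a;
struct node_b;

struct node_a { atomic_int ref; struct node_b *peer; };
struct node_b { atomic_int ref; struct node_a *peer; };

static void a_put(struct node_a *a);

static void b_put(struct node_b *b)
{
	if (atomic_fetch_sub(&b->ref, 1) == 1) {
		a_put(b->peer);		/* releasing B would release A ... */
		free(b);
	}
}

static void a_put(struct node_a *a)
{
	if (atomic_fetch_sub(&a->ref, 1) == 1) {
		b_put(a->peer);		/* ... and vice versa */
		free(a);
	}
}

int main(void)
{
	struct node_a *a = calloc(1, sizeof(*a));
	struct node_b *b = calloc(1, sizeof(*b));

	/* each object holds the only reference on the other: a cycle */
	atomic_init(&a->ref, 1);
	atomic_init(&b->ref, 1);
	a->peer = b;
	b->peer = a;

	/* no external reference is left, yet neither _put() ever runs */
	printf("a.ref=%d b.ref=%d -> both leak\n",
	       atomic_load(&a->ref), atomic_load(&b->ref));
	return 0;
}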
Michal