Message-ID: <aYHXzyRJbzFSohNm@fedora>
Date: Tue, 3 Feb 2026 19:11:11 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: 李龙兴 <coregee2000@...il.com>,
syzkaller@...glegroups.com, tj@...nel.org, josef@...icpanda.com,
axboe@...nel.dk, cgroups@...r.kernel.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
yukuai@...as.com
Subject: Re: [Kernel Bug] KASAN: slab-use-after-free Read in
__blkcg_rstat_flush
On Tue, Feb 03, 2026 at 11:54:34AM +0100, Michal Koutný wrote:
> Hello.
>
> On Tue, Feb 03, 2026 at 11:03:01AM +0800, Ming Lei <ming.lei@...hat.com> wrote:
> > Can you try the following patch?
>
> I think it'd work thanks to the rcu_read_lock() in
> __blkcg_rstat_flush(). However, the chaining of RCU callbacks makes
> predictability of the release path less deterministic and may be
> unnecessary.
RCU supports chaining callbacks this way; it is just a 2-stage RCU chain, and
everything is still deterministic.
>
> What about this:
>
> index 3cffb68ba5d87..e2f51e3bf04ef 100644
> --- a/tmp/b.c
> +++ b/tmp/a.c
> @@ -1081,6 +1081,7 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
> smp_mb();
>
> WRITE_ONCE(bisc->lqueued, false);
> + blkg_put(blkg);
> if (bisc == &blkg->iostat)
> goto propagate_up; /* propagate up to parent only */
>
> @@ -2220,8 +2221,10 @@ void blk_cgroup_bio_start(struct bio *bio)
> if (!READ_ONCE(bis->lqueued)) {
> struct llist_head *lhead = this_cpu_ptr(blkcg->lhead);
>
> + blkg_get(bio->bi_blkg);
> llist_add(&bis->lnode, lhead);
> WRITE_ONCE(bis->lqueued, true);
> +
I thought about this approach, but ->lqueued is lockless, and in theory the
`blkg_iostat_set` can be added to the llist again as soon as
WRITE_ONCE(bisc->lqueued, false) happens, so this approach looks fragile.
Thanks,
Ming