Message-ID: <aOyLlFUNEKi2_vXT@fedora>
Date: Mon, 13 Oct 2025 13:18:12 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Fengnan Chang <changfengnan@...edance.com>
Cc: axboe@...nel.dk, viro@...iv.linux.org.uk, brauner@...nel.org,
jack@...e.cz, asml.silence@...il.com, willy@...radead.org,
djwong@...nel.org, hch@...radead.org, ritesh.list@...il.com,
linux-fsdevel@...r.kernel.org, io-uring@...r.kernel.org,
linux-xfs@...r.kernel.org, linux-ext4@...r.kernel.org
Subject: Re: [PATCH] block: enable per-cpu bio cache by default
On Sat, Oct 11, 2025 at 09:33:12AM +0800, Fengnan Chang wrote:
> The per-cpu bio cache was only used for io_uring on raw block devices.
> After commit 12e4e8c7ab59 ("io_uring/rw: enable bio caches for IRQ
> rw"), bio_put() is safe in both task and IRQ context, and
> bio_alloc_bioset() is safe in task context (no caller invokes it from
> IRQ context), so we can enable the per-cpu bio cache by default.
>
> Benchmarked with t/io_uring and ext4+nvme:
> taskset -c 6 /root/fio/t/io_uring -p0 -d128 -b4096 -s1 -c1 -F1 -B1 -R1
> -X1 -n1 -P1 /mnt/testfile
> base IOPS is 562K, patched IOPS is 574K. The CPU usage of
> bio_alloc_bioset() drops from 1.42% to 1.22%.
>
> The worst case is allocating a bio on CPU A but freeing it on CPU B;
> again with t/io_uring and ext4+nvme:
> base IOPS is 648K, patched IOPS is 647K.
Just curious: how do you run the remote bio free test? If the nvme
queues have a 1:1 CPU mapping, you may not trigger it.
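One hedged way to try to force the remote-free case (CPU numbers and the IRQ path below are placeholders for a specific setup, not from the original report): pin submission to one CPU while steering the queue's completion IRQ to another, so bios allocated on the submitting CPU are completed, and freed, elsewhere. Note that nvme normally uses managed IRQ affinity, which cannot be changed from userspace this way, so this may need a reduced queue count or a driver without managed IRQs:

```shell
# Sketch, not a verified recipe. Find the IRQ serving the queue used
# by the submitting CPU:
grep nvme /proc/interrupts

# Move that IRQ's affinity to a different CPU (fails with -EIO on
# managed IRQs; <irq> is a placeholder):
echo 2 > /proc/irq/<irq>/smp_affinity_list

# Submit from CPU 6 while completions run on CPU 2:
taskset -c 6 /root/fio/t/io_uring -p0 -d128 -b4096 -s1 -c1 -F1 -B1 \
    -R1 -X1 -n1 -P1 /mnt/testfile
```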
BTW, ublk has this kind of remote bio free behavior, but I don't see an
IOPS drop with this patch.
The patch itself looks fine to me.
Thanks,
Ming