Message-ID: <1afd2c01-69b3-ab8f-6bfe-118e3e56001c@kernel.dk>
Date: Tue, 10 May 2022 06:50:35 -0600
From: Jens Axboe <axboe@...nel.dk>
To: John Garry <john.garry@...wei.com>, linux-block@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] sbitmap: NUMA node spreading
On 5/10/22 5:14 AM, John Garry wrote:
> Hi Jens, guys,
>
> I am sending this as an RFC to see if there is any future in it or
> ideas on how to make it better. I also need to improve some items (as
> mentioned in the 2/2 commit message) and test a lot more.
>
> The general idea is that we change from allocating a single array of
> sbitmap words to allocating a sub-array per NUMA node, and each CPU
> in that node is then hinted to use its node's sub-array (sketched
> below, after the figures).
>
> Initial performance looks decent.
>
> Some figures:
> System: 4-nodes (with memory on all nodes), 128 CPUs
>
> null_blk config:
> 20 devs, submit_queues=NR_CPUS, shared_tags, shared_tag_bitmap,
> hw_queue_depth=256
>
> fio config:
> bs=4096, iodepth=128, numjobs=10, cpus_allowed_policy=split, rw=read,
> ioscheduler=none
>
> Before:
> 7130K
>
> After:
> 7630K
>
> So a +7% IOPS gain.
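
For reference, a rough userspace model of the layout described above --
every name here (numa_sbitmap, cpu_hint_node, ...) is made up for
illustration and is not from the actual patch:

#include <stdlib.h>

#define NR_NODES        4
#define WORDS_PER_NODE  8
#define BITS_PER_WORD   (8 * sizeof(unsigned long))

struct numa_sbitmap {
        /* one sub-array of words per node, not a single flat array */
        unsigned long *map[NR_NODES];
};

static int numa_sbitmap_init(struct numa_sbitmap *sb)
{
        for (int n = 0; n < NR_NODES; n++) {
                /* the kernel would use kzalloc_node(..., n) here */
                sb->map[n] = calloc(WORDS_PER_NODE, sizeof(unsigned long));
                if (!sb->map[n])
                        return -1;
        }
        return 0;
}

/* hypothetical CPU->node hint; in-kernel this would be cpu_to_node() */
static int cpu_hint_node(int cpu)
{
        return cpu % NR_NODES;
}

/*
 * Find and set a free bit, starting in the caller's own node's
 * sub-array and only then spilling into the other nodes.
 */
static int numa_sbitmap_get(struct numa_sbitmap *sb, int cpu)
{
        int start = cpu_hint_node(cpu);

        for (int i = 0; i < NR_NODES; i++) {
                int node = (start + i) % NR_NODES;

                for (int w = 0; w < WORDS_PER_NODE; w++) {
                        unsigned long *word = &sb->map[node][w];

                        for (unsigned int b = 0; b < BITS_PER_WORD; b++) {
                                if (*word & (1UL << b))
                                        continue;
                                *word |= 1UL << b; /* not atomic: sketch only */
                                return (node * WORDS_PER_NODE + w)
                                                * BITS_PER_WORD + b;
                        }
                }
        }
        return -1; /* no free bits */
}
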
What does the comparison run on a non-NUMA non-shared queue look like?
Because I bet it'd be slower.

To be honest, I don't like this approach at all. It makes the normal
case quite a bit slower by adding an extra layer of indirection for
every word lookup, and that is a lot of extra cost. It doesn't seem
like a good approach to the issue, as it pessimizes the normal fast
case.
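
Concretely, the word lookup goes from a single load off the sbitmap to
a dependent double dereference, something like this (structs
simplified, names made up):

struct word { unsigned long bits; };

struct flat_sb {
        struct word *map;       /* today: one contiguous array */
};

struct pernode_sb {
        struct word **map;      /* RFC-style: per-node sub-arrays */
};

/* flat layout: the word address is one load away */
static inline struct word *flat_word(struct flat_sb *sb, unsigned int i)
{
        return &sb->map[i];
}

/* per-node layout: a dependent load through the node table first */
static inline struct word *pernode_word(struct pernode_sb *sb,
                                        int node, unsigned int i)
{
        return &sb->map[node][i];
}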

Spreading the memory out probably does make sense, but we need to
retain the fast normal case. Making sbitmap support both, selected at
init time, would be far more likely to be acceptable imho.
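
Something along these lines at init time, say -- a sketch only, none
of this is the real sbitmap API:

#include <stdlib.h>

enum sb_layout { SB_LAYOUT_FLAT, SB_LAYOUT_PER_NODE };

struct sb_word { unsigned long word; };

struct sb {
        enum sb_layout layout;
        struct sb_word *flat;      /* SB_LAYOUT_FLAT only */
        struct sb_word **per_node; /* SB_LAYOUT_PER_NODE only */
        unsigned int nr_nodes, words_per_node;
};

static int sb_init(struct sb *sb, enum sb_layout layout,
                   unsigned int nr_nodes, unsigned int words_per_node)
{
        sb->layout = layout;
        sb->nr_nodes = nr_nodes;
        sb->words_per_node = words_per_node;

        if (layout == SB_LAYOUT_FLAT) {
                /* default: keep the flat array and the fast lookup */
                sb->flat = calloc((size_t)nr_nodes * words_per_node,
                                  sizeof(*sb->flat));
                return sb->flat ? 0 : -1;
        }

        /* opt-in: spread sub-arrays across nodes (kzalloc_node() in-kernel) */
        sb->per_node = calloc(nr_nodes, sizeof(*sb->per_node));
        if (!sb->per_node)
                return -1;
        for (unsigned int n = 0; n < nr_nodes; n++) {
                sb->per_node[n] = calloc(words_per_node,
                                         sizeof(**sb->per_node));
                if (!sb->per_node[n])
                        return -1; /* leaks on failure: sketch only */
        }
        return 0;
}
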
--
Jens Axboe