Message-ID: <c05add9f-d375-44a4-a859-2757b19c70dc@nvidia.com>
Date: Tue, 23 Jan 2024 20:21:39 +0000
From: Chaitanya Kulkarni <chaitanyak@...dia.com>
To: Keith Busch <kbusch@...nel.org>, Sagi Grimberg <sagi@...mberg.me>, Stuart
Hayes <stuart.w.hayes@...il.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Jens Axboe
<axboe@...nel.dk>, Christoph Hellwig <hch@....de>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>
Subject: Re: [PATCH v2] nvme_core: scan namespaces asynchronously
On 1/23/2024 8:37 AM, Keith Busch wrote:
> On Mon, Jan 22, 2024 at 11:13:15AM +0200, Sagi Grimberg wrote:
>> On 1/18/24 23:03, Stuart Hayes wrote:
>>> @@ -3901,19 +3932,25 @@ static int nvme_scan_ns_list(struct nvme_ctrl *ctrl)
>>> goto free;
>>> }
>>> + /*
>>> + * scan list starting at list offset 0
>>> + */
>>> + atomic_set(&scan_state.count, 0);
>>> for (i = 0; i < nr_entries; i++) {
>>> u32 nsid = le32_to_cpu(ns_list[i]);
>>> if (!nsid) /* end of the list? */
>>> goto out;
>>> - nvme_scan_ns(ctrl, nsid);
>>> + async_schedule_domain(nvme_scan_ns, &scan_state, &domain);
>>> while (++prev < nsid)
>>> nvme_ns_remove_by_nsid(ctrl, prev);
>>> }
>>> + async_synchronize_full_domain(&domain);
>
> You mentioned async scanning was an improvement if you have 1000
> namespaces, but wouldn't this be worse if you have very few namespaces?
> IOW, the decision to use the async schedule should be based on
> nr_entries, right?
>
Perhaps it would also help to document the data for a small number of
namespaces. We could collect data something like this:-
NR Namespaces    Seq Scan    Async Scan
    2
    4
    8
   16
   32
   64
  128
  256
  512
 1024
If we find that the difference is not significant, then we can go ahead
with this patch; if the difference is unacceptable to the point that it
would regress common setups, then we can make it configurable?
-ck