Message-ID: <BYAPR04MB4965FD429BA4203694CB37DA86910@BYAPR04MB4965.namprd04.prod.outlook.com>
Date: Sun, 28 Jun 2020 21:55:12 +0000
From: Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>,
"kbusch@...nel.org" <kbusch@...nel.org>,
"axboe@...com" <axboe@...com>, "hch@....de" <hch@....de>,
"sagi@...mberg.me" <sagi@...mberg.me>
CC: "baolin.wang7@...il.com" <baolin.wang7@...il.com>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] nvme-pci: Move the sg table allocation/free into
init/exit_request
On 6/28/20 3:44 AM, Baolin Wang wrote:
> Move the sg table allocation and free into the init_request() and
> exit_request(), instead of allocating sg table when queuing requests,
> which can benefit the IO performance.
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
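
For context, the current mainline queue-time path sizes the SG table per
request, roughly like this (paraphrased from drivers/nvme/host/pci.c
around this kernel version; exact details may differ by tree):

        /* nvme_map_data(), simplified: SG table sized per request */
        iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
        if (!iod->sg)
                return BLK_STS_RESOURCE;
        sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
        iod->nents = blk_rq_map_sg(req->q, req, iod->sg);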
The call to sg_init_table() in nvme_map_data() sizes the table using
blk_rq_nr_phys_segments(); with this patch we are blindly allocating the
SG table with NVME_MAX_SEGS entries for every request. Without any
performance numbers it is hard to measure the impact.
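
In other words, as I read it, the per-request path above gets replaced by
a fixed worst-case allocation per request tag, something like the sketch
below (illustrative only; the allocator, error handling, and function
names here are my assumptions, not a quote of the patch):

        /* .init_request: preallocate the worst case once per request tag */
        static int nvme_pci_init_request(struct blk_mq_tag_set *set,
                        struct request *req, unsigned int hctx_idx,
                        unsigned int numa_node)
        {
                struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

                iod->sg = kmalloc_array(NVME_MAX_SEGS,
                                sizeof(struct scatterlist), GFP_KERNEL);
                return iod->sg ? 0 : -ENOMEM;
        }

        /* .exit_request: free it when the tag set is torn down */
        static void nvme_pci_exit_request(struct blk_mq_tag_set *set,
                        struct request *req, unsigned int hctx_idx)
        {
                struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

                kfree(iod->sg);
        }

That trades the per-I/O allocation cost for NVME_MAX_SEGS *
sizeof(struct scatterlist) of memory pinned for every tag, whether or not
a request ever needs that many segments, which is why numbers matter here.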
Can you share performance numbers?
I'm particularly interested in IOPS, bandwidth, CPU usage, submission
latency, and completion latency, along with perf numbers for the
respective functions, to determine the overall impact.