Message-ID: <f8da3b2b-407a-c777-87f4-6a1dec32efb3@nvidia.com>
Date: Wed, 19 Apr 2023 21:13:58 +0000
From: Chaitanya Kulkarni <chaitanyak@...dia.com>
To: Sagi Grimberg <sagi@...mberg.me>, Daniel Wagner <dwagner@...e.de>
CC: "linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
Shin'ichiro Kawasaki <shinichiro@...tmail.com>
Subject: Re: [RFC v1 0/1] nvme testsuite runtime optimization
On 4/19/23 06:15, Sagi Grimberg wrote:
>
>>>>> While testing the fc transport I got a bit tired of waiting for
>>>>> the I/O jobs to finish. Thus here are some runtime optimizations.
>>>>>
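>>>>> The timings below are simply wall-clock times around the blktests
>>>>> runner, roughly like this (the exact invocation is a sketch, not
>>>>> copied from the actual runs):
>>>>>
>>>>>    # time the whole nvme group for one transport
>>>>>    time nvme_trtype=loop ./check nvme
>>>>>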
>>>>> With a small/slow VM I got the following values:
>>>>>
>>>>> with 'optimizations'
>>>>> loop:
>>>>> real 4m43.981s
>>>>> user 0m17.754s
>>>>> sys 2m6.249s
>>>
>>> How come loop is doubling the time with this patch?
>>> The ratio is not the same before and after.
>>
>> The first run was with loop, the second one with rdma:
>>
>> nvme/002 (create many subsystems and test discovery) [not run]
>> runtime 82.089s ...
>> nvme_trtype=rdma is not supported in this test
>>
>> nvme/016 (create/delete many NVMeOF block device-backed ns and test
>> discovery) [not run]
>> runtime 39.948s ...
>> nvme_trtype=rdma is not supported in this test
>> nvme/017 (create/delete many file-ns and test discovery) [not run]
>> runtime 40.237s ...
>>
>> nvme/047 (test different queue types for fabric transports) [passed]
>> runtime ... 13.580s
>> nvme/048 (Test queue count changes on reconnect) [passed]
>> runtime ... 6.287s
>>
>> 82 + 40 + 40 - 14 - 6 = 142. So loop runs additional tests. Hmm,
>> my optimization didn't work there, though...
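>>
>> (The arithmetic above just sums the per-test runtimes from the check
>> output; something along these lines extracts and totals them, with
>> run.log being a scratch file:
>>
>>    nvme_trtype=loop ./check nvme 2>&1 | tee run.log
>>    # pull out "runtime NN.NNNs" and sum the seconds
>>    grep -Eo 'runtime +[0-9.]+s' run.log |
>>        awk '{ sub(/s$/, "", $2); sum += $2 } END { print sum "s" }'
>> )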
>
> How come loop is 4m+ while the others are 2m+, when before they
> were all in more or less the same timeframe?
>
>>
>>>> Those jobs are meant to be run for at least 1G to establish
>>>> confidence in the data set and the system under test, since SSDs
>>>> are in TBs nowadays and we don't even get anywhere close to that;
>>>> with your suggestion we are going even lower ...
>>>
>>> Where does the 1G boundary come from?
>>
>> No idea, it's just the existing hard-coded values. I guess it might
>> be from efa06fcf3c83 ("loop: test partition scanning"), which was the
>> first real test case (according to the logs).
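>>
>> (One way to avoid the hard-coded size would be a config knob along
>> these lines -- nvme_img_size is a hypothetical name here, and
>> _run_fio_verify_io stands in for whatever fio helper the test uses:
>>
>>    # common helper: default keeps today's 1G behaviour
>>    : "${nvme_img_size:=1G}"
>>
>>    # in the test case, instead of a literal 1G
>>    _run_fio_verify_io --size="${nvme_img_size}" --filename="$dev"
>>
>> A slow VM could then set nvme_img_size=64M in its blktests config.)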
>
> I was asking Chaitanya why 1G is considered sufficient vs. other
> sizes. Why not 10G? Why not 100M?
See the earlier response ...
-ck