Message-ID: <b9c8309a-6263-4eca-b2f0-2262bb43e81c@nvidia.com>
Date: Thu, 7 Dec 2023 09:48:09 +0000
From: Chaitanya Kulkarni <chaitanyak@...dia.com>
To: Li Feng <fengli@...rtx.com>
CC: Jens Axboe <axboe@...nel.dk>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
"open list:BLOCK LAYER" <linux-block@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
"open list:VIRTIO BLOCK AND SCSI DRIVERS"
<virtualization@...ts.linux.dev>
Subject: Re: [PATCH] virtio_blk: set the default scheduler to none
On 12/6/2023 11:21 PM, Li Feng wrote:
>
>
>> On Dec 7, 2023, at 14:53, Chaitanya Kulkarni <chaitanyak@...dia.com> wrote:
>>
>> On 12/6/23 20:31, Li Feng wrote:
>>> virtio-blk is generally used in cloud computing scenarios, where the
>>> performance of virtual disks is very important. The mq-deadline scheduler
>>> has a significant performance drop compared to "none" with a single queue.
>>> In my tests, mq-deadline 4k randread iops were 270k, compared to 450k for
>>> "none". So here the default scheduler of virtio-blk is set to "none".
>>>
>>> Signed-off-by: Li Feng <fengli@...rtx.com>
>>> ---
>>>
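The diff itself is trimmed above; presumably the change boils down to tagging
the virtio-blk tag set with BLK_MQ_F_NO_SCHED_BY_DEFAULT in virtblk_probe(),
which tells blk-mq to leave the elevator at "none" instead of electing a
default one. A minimal sketch of that idea (the flag combination below is
assumed, not copied from the actual hunk):

	/*
	 * Sketch only -- mark the tag set so blk-mq does not pick a
	 * default I/O scheduler for the queues created on top of it.
	 */
	vblk->tag_set.ops = &virtio_mq_ops;
	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
			      BLK_MQ_F_NO_SCHED_BY_DEFAULT;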
>>
>> This patch looks good to me; however, I'd update the commit log and add
>> performance numbers for the non-mq case as well, just to show that we
>> are not breaking the non-mq setup.
>>
>> That being said, if we want to be future proof, we could also think about
>> adding a module param, so that if someone comes up with a scenario where
>> NO_SCHED does not provide the expected performance, they can just use the
>> module parameter instead of editing the code again. Irrespective of that:
>>
>> Reviewed-by: Chaitanya Kulkarni <kch@...dia.com>
>>
>> -ck
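To make the module parameter suggestion above concrete, something along these
lines is what I have in mind; the parameter name is made up for illustration
and does not exist today:

	/* Sketch of the module parameter idea, not an existing knob. */
	static bool virtblk_default_elevator;
	module_param(virtblk_default_elevator, bool, 0444);
	MODULE_PARM_DESC(virtblk_default_elevator,
			 "let blk-mq pick its usual default I/O scheduler instead of none");

	/* ...and in virtblk_probe(): */
	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
	if (!virtblk_default_elevator)
		vblk->tag_set.flags |= BLK_MQ_F_NO_SCHED_BY_DEFAULT;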
>
> Hi ck,
>
> What I put above (450k vs 270k) is the data for the single-queue (non-mq)
> case. I think we don't need to add a module parameter because the scheduler
> can be changed through sysfs.
>
> Thanks.
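For the record, the runtime override being referred to is the standard blk-mq
sysfs attribute: writing a scheduler name to /sys/block/<disk>/queue/scheduler
(e.g. vda for a typical virtio-blk guest disk) switches it at runtime, and
reading the same file lists the available schedulers with the active one in
brackets.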
okay.
-ck