Message-ID: <9edbfe1a-b4ba-7967-4287-1610415f6449@huaweicloud.com>
Date: Fri, 23 Sep 2022 19:32:46 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Jan Kara <jack@...e.cz>, Yu Kuai <yukuai1@...weicloud.com>
Cc: Christoph Hellwig <hch@...radead.org>, paolo.valente@...aro.org,
axboe@...nel.dk, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v3 3/5] block, bfq: don't disable wbt if
CONFIG_BFQ_GROUP_IOSCHED is disabled
Hi, Jan!
On 2022/09/23 19:03, Jan Kara wrote:
> Hi Kuai!
>
> On Fri 23-09-22 18:23:03, Yu Kuai wrote:
>> On 2022/09/23 18:06, Jan Kara wrote:
>>> On Fri 23-09-22 17:50:49, Yu Kuai wrote:
>>>> Hi, Christoph
>>>>
>>>> On 2022/09/23 16:56, Christoph Hellwig wrote:
>>>>> On Thu, Sep 22, 2022 at 07:35:56PM +0800, Yu Kuai wrote:
>>>>>> wbt and bfq should work just fine if CONFIG_BFQ_GROUP_IOSCHED is disabled.
>>>>>
>>>>> Umm, wouldn't this be something decided at runtime? That is, not
>>>>> whether CONFIG_BFQ_GROUP_IOSCHED is enabled/disabled in the kernel
>>>>> build, but whether hierarchical cgroup based scheduling is actually
>>>>> used for a given device?
>>>>> .
>>>>>
>>>>
>>>> That's a good point.
>>>>
>>>> Before this patch, wbt is simply disabled if the elevator is bfq.
>>>>
>>>> With this patch, if the elevator is bfq but bfq isn't throttling any
>>>> IO yet, wbt is still disabled unnecessarily.
>>>
>>> It is not really disabled unnecessarily. Have you actually tested the
>>> performance of the combination? I did once and the results were just
>>> horrible (which is why I made BFQ disable wbt by default). The problem
>>> is that blk-wbt assumes a certain model of the underlying storage stack
>>> and hardware behavior, and BFQ just does not fit that model. For
>>> example, BFQ wants to see as many requests as possible so that it can
>>> heavily reorder them, estimate think times of applications, etc. On the
>>> other hand, blk-wbt assumes that if request latency gets higher, it
>>> means there is too much IO going on and we need to allow fewer of the
>>> "lower priority" IO types to be submitted. These two go directly
>>> against one another, and I was easily observing blk-wbt spiraling down
>>> to allowing only a very small number of requests to be submitted while
>>> BFQ was idling, waiting for more IO from the process that was currently
>>> scheduled.
>>>
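As a rough illustration of the feedback loop described above, here is a
toy user-space model with made-up numbers and names. It is not actual
blk-wbt or bfq code, just a sketch of how a latency-driven throttler
keeps scaling down when the extra latency comes from scheduler idling
rather than from device congestion:

/*
 * Toy model (NOT kernel code) of the interaction described above:
 * a wbt-like throttler halves its in-flight limit whenever observed
 * completion latency exceeds its target, while an idling scheduler
 * such as BFQ keeps adding queueing delay for the other processes.
 * All names and numbers are made up for illustration only.
 */
#include <stdio.h>

int main(void)
{
	unsigned int inflight_limit = 64;  /* requests the throttler allows */
	unsigned int target_lat_us = 2000; /* latency target */

	for (int window = 0; window < 6; window++) {
		/*
		 * Assume idling adds a growing amount of queueing delay
		 * for requests of processes that are not in service.
		 */
		unsigned int observed_lat_us = 1500 + window * 1500;

		if (observed_lat_us > target_lat_us && inflight_limit > 1)
			inflight_limit /= 2; /* "scale down": allow less IO */

		printf("window %d: latency %u us -> limit %u\n",
		       window, observed_lat_us, inflight_limit);
	}
	/*
	 * The limit keeps shrinking even though the device is not
	 * congested: the latency comes from idling, not from load.
	 */
	return 0;
}

Built with a plain cc, the limit drops from 64 to 2 within a few
windows, which matches the "spiraling down" behavior mentioned above.
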
>>
>> Thanks for your explanation; I understand now that bfq and wbt should
>> not work together.
>>
>> However, I wonder: if CONFIG_BFQ_GROUP_IOSCHED is disabled, or the
>> service guarantee is not needed, does the above phenomenon still exist?
>> I find it hard to understand... Perhaps I need to run some tests.
>
> Well, BFQ implements, for example, idling on sync IO queues, which is
> one of the features that upsets blk-wbt. That does not depend on
> CONFIG_BFQ_GROUP_IOSCHED in any way. Also, more generally, the fact that
> BFQ assigns storage *time slots* to different processes, while IO from
> other processes is just queued during those slots, increases IO
> completion latency (for IOs of processes that are not currently
> scheduled), and this tends to confuse blk-wbt.
>
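To put rough, made-up numbers on that: suppose BFQ grants each of three
processes a time slot on the order of 10 ms and additionally idles for a
few milliseconds waiting for more sync IO from the in-service process. A
request from a process whose slot just ended can then easily wait 20-30 ms
before it is even dispatched, so blk-wbt observes tens of milliseconds of
completion latency that reflect BFQ's scheduling policy rather than device
congestion, and reacts by throttling harder.
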
I see it now. Thanks a lot for your explanations, that really helps.
I misunderstood how bfq works. I'll remove this patch in the next
version.
Thanks,
Kuai
> Honza
>