Date:   Tue, 27 Sep 2022 09:02:31 +0800
From:   Yu Kuai <yukuai1@...weicloud.com>
To:     Jan Kara <jack@...e.cz>, Yu Kuai <yukuai1@...weicloud.com>
Cc:     Christoph Hellwig <hch@...radead.org>, paolo.valente@...aro.org,
        axboe@...nel.dk, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
        "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v3 3/5] block, bfq: don't disable wbt if
 CONFIG_BFQ_GROUP_IOSCHED is disabled

Hi, Jan

On 2022/09/26 22:22, Jan Kara wrote:
> Hi Kuai!
> 
> On Mon 26-09-22 21:00:48, Yu Kuai wrote:
>> On 2022/09/23 19:03, Jan Kara wrote:
>>> Hi Kuai!
>>>
>>> On Fri 23-09-22 18:23:03, Yu Kuai wrote:
>>>> On 2022/09/23 18:06, Jan Kara wrote:
>>>>> On Fri 23-09-22 17:50:49, Yu Kuai wrote:
>>>>>> Hi, Christoph
>>>>>>
>>>>>> On 2022/09/23 16:56, Christoph Hellwig wrote:
>>>>>>> On Thu, Sep 22, 2022 at 07:35:56PM +0800, Yu Kuai wrote:
>>>>>>>> wbt and bfq should work just fine if CONFIG_BFQ_GROUP_IOSCHED is disabled.
>>>>>>>
>>>>>>> Umm, wouldn't this be something decided at runtime? That is, not
>>>>>>> whether CONFIG_BFQ_GROUP_IOSCHED is enabled or disabled in the
>>>>>>> kernel build, but whether hierarchical cgroup based scheduling is
>>>>>>> actually used for a given device?
>>>>>>>
>>>>>>
>>>>>> That's a good point.
>>>>>>
>>>>>> Before this patch, wbt is simply disabled if the elevator is bfq.
>>>>>>
>>>>>> With this patch, if the elevator is bfq but bfq doesn't throttle
>>>>>> any IO yet, wbt is still disabled unnecessarily.
>>>>>
>>>>> It is not really disabled unnecessarily. Have you actually tested the
>>>>> performance of the combination? I did once and the results were just
>>>>> horrible (which is why I made BFQ just disable wbt by default). The
>>>>> problem is that blk-wbt assumes a certain model of underlying storage
>>>>> stack and hardware behavior, and BFQ just does not fit in that model.
>>>>> For example, BFQ wants to see as many requests as possible so that it
>>>>> can heavily reorder them, estimate think times of applications, etc.
>>>>> On the other hand, blk-wbt assumes that if request latency gets
>>>>> higher, it means there is too much IO going on and we need to allow
>>>>> less of the "lower priority" IO types to be submitted. These two go
>>>>> directly against one another, and I was easily observing blk-wbt
>>>>> spiraling down to allowing only a very small number of requests to be
>>>>> submitted while BFQ was idling, waiting for more IO from the process
>>>>> that was currently scheduled.
>>>>>
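
To make the feedback loop concrete, here is a grossly simplified
illustration (this is not the actual block/blk-wbt.c code;
read_window_latency(), scale_up() and scale_down() are stand-ins for
wbt's real bookkeeping):

	/*
	 * wbt periodically compares completion latency over the last
	 * window against its target and adjusts the allowed queue depth
	 * for background writes accordingly.
	 */
	static void wbt_adjust_depth(struct rq_wb *rwb)
	{
		u64 lat = read_window_latency(rwb);	/* stand-in helper */

		if (lat > rwb->min_lat_nsec)
			scale_down(rwb);	/* latency high: allow fewer requests */
		else
			scale_up(rwb);		/* latency ok: allow more requests */
	}

While bfq idles waiting for more IO from the currently scheduled
process, IO from everyone else just sits in the queue, its completion
latency keeps growing, and the loop above keeps scaling down -- which is
the spiral described above.
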
>>>>
>>>> Thanks for your explanation. I understand now that bfq and wbt
>>>> should not work together.
>>>>
>>>> However, I wonder: if CONFIG_BFQ_GROUP_IOSCHED is disabled, or the
>>>> service guarantee is not needed, does the above phenomenon still
>>>> exist? I find it hard to understand... Perhaps I need to do some
>>>> tests.
>>>
>>> Well, BFQ implements, for example, idling on sync IO queues, which is
>>> one of the features that upsets blk-wbt. That does not depend on
>>> CONFIG_BFQ_GROUP_IOSCHED in any way. Also, more generally, the fact
>>> that BFQ assigns storage *time slots* to different processes, while IO
>>> from other processes is just queued during those times, increases IO
>>> completion latency (for IOs of processes that are not currently
>>> scheduled), and this tends to confuse blk-wbt.
>>>
>> Hi, Jan
>>
>> Just out of curiosity, have you ever thought about or tested wbt
>> together with io-cost? And beyond that, how does bfq work with
>> io-cost?
>>
>> I haven't tested it yet, but it seems to me that some of them can work
>> well together.
> 
> No, I didn't test these combinations. I actually expect there would be
> trouble in both cases under high IO load, but you can try :)

I just realized I made a clerical error: I actually meant to say that
they *can't* work well together.

I'll try to test these combinations.

Thanks,
Kuai
> 
> 								Honza
> 
