Message-ID: <a257ff06-ed02-46a2-81fc-caa351a379fd@xiaomi.com>
Date: Thu, 5 Sep 2024 07:33:29 +0000
From: 章辉 <zhanghui31@...omi.com>
To: Ming Lei <ming.lei@...hat.com>
CC: "axboe@...nel.dk" <axboe@...nel.dk>, "bvanassche@....org"
	<bvanassche@....org>, "linux-block@...r.kernel.org"
	<linux-block@...r.kernel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, 方翔 <fangxiang@...omi.com>,
	王辉 <wanghui33@...omi.com>
Subject: Re: [External Mail]Re: [PATCH v3] block: move non sync requests
 complete flow to softirq

On 2024/9/5 11:49, Ming Lei wrote:
> On Thu, Sep 05, 2024 at 02:46:39AM +0000, 章辉 wrote:
>> On 2024/9/4 16:01, Ming Lei wrote:
>>> On Tue, Sep 03, 2024 at 07:54:37PM +0800, ZhangHui wrote:
>>>> From: zhanghui <zhanghui31@...omi.com>
>>>>
>>>> Currently, for a controller that supports multiple queues, like UFS 4.0,
>>>> mq_ops->complete is executed in the interrupt top-half. Therefore, the
>>>> file system's end_io is executed during the request completion process,
>>>> for example f2fs_write_end_io on smartphones.
>>>>
>>>> However, we found that the execution time of the file system's end_io
>>>> is strongly related to the size of the bio and the processing speed
>>>> of the CPU. Because the file system's end_io traverses every page
>>>> in the bio, this is a very time-consuming operation.
>>>>
>>>> We measured that an 80M bio write operation on a little CPU will
>>> What is 80M bio?
>>>
>>> It is a known issue that a soft lockup may be triggered in the case of N:M
>>> blk-mq mapping, but I am not sure if that is the case here.
>>>
>>> What is nr_hw_queues(blk_mq) and nr_cpus in your system?
>>>
>>>> cause the execution time of the top-half to be greater than 100ms.
>>>> The CPU tick on a smartphone is only 4ms, which will undoubtedly affect
>>>> scheduling efficiency.
>>> Scheduling is off in softirq (bottom-half) too.
>>>
>>>> Let's fix this issue by moving the non-sync request completion flow to
>>>> softirq, and keep the sync request completion in the top-half.
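
To make the proposed split concrete, here is a rough sketch of the idea
(not the actual patch; complete_rq_sketch() and defer_to_block_softirq()
are placeholder names for whatever hook and per-cpu list +
raise_softirq(BLOCK_SOFTIRQ) helper the patch actually uses):

```c
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Placeholder: queue rq on a per-cpu list and raise BLOCK_SOFTIRQ. */
static void defer_to_block_softirq(struct request *rq);

/*
 * Sketch of the completion split: sync requests keep completing inline
 * in the hard-irq top-half, while non-sync requests (bulk async
 * writeback) are deferred to the BLOCK_SOFTIRQ bottom-half, so their
 * heavy filesystem end_io work no longer runs with interrupts off.
 */
static void complete_rq_sketch(struct request *rq)
{
	if (rq->cmd_flags & REQ_SYNC)
		rq->q->mq_ops->complete(rq);	/* latency-sensitive path */
	else
		defer_to_block_softirq(rq);	/* defer heavy completion work */
}
```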
>>> If you do care about interrupt-off or schedule-off latency, you may have to
>>> move the IO handling into thread context in the driver.
>>>
>>> BTW, a threaded irq can't help you either.
>>>
>>>
>>> Thanks,
>>> Ming
>>>
>> Hi Ming,
>>
>> Very good reminder, thank you.
>>
>> On smartphones, nr_hw_queues and nr_cpus are 1:1. I am more concerned
>> about interrupt-off latency, which is more noticeable on little cores.
> So you submit 80M bytes from one CPU, and almost all of these bios are completed
> in a single interrupt, which looks very unlikely, unless your
> UFS controller is far faster than the CPU.

The 80M bio refers to the bio sent by the file system. At the block
layer it is split into many bios that form a bio chain. The
time-consuming part is the filesystem's end_io processing all of the
page state, and it is only actually called after all of the bios in
the 80M chain have completed.
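
As a rough illustration of why that end_io is so expensive, here is a
simplified sketch (not the real f2fs_write_end_io; example_write_end_io
is a hypothetical name): the completion handler walks every segment
attached to the bio, so its run time grows with the amount of data, and
with a chained bio all of that per-page work lands in the one completion
that fires after the last child in the chain finishes.

```c
#include <linux/bio.h>
#include <linux/pagemap.h>

/*
 * Simplified sketch (not the real f2fs code): a write end_io walks
 * every segment attached to the bio, so the handler's run time grows
 * with the amount of data completed, in whatever context ->bi_end_io
 * is called from.
 */
static void example_write_end_io(struct bio *bio)
{
	struct bio_vec *bvec;
	struct bvec_iter_all iter_all;

	bio_for_each_segment_all(bvec, bio, iter_all) {
		struct folio *folio = page_folio(bvec->bv_page);

		/* per-page bookkeeping: record errors, clear writeback */
		if (bio->bi_status)
			mapping_set_error(folio->mapping, -EIO);
		folio_end_writeback(folio);
	}
	bio_put(bio);
}
```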

>> Moving the time-consuming work to the bottom half may not help with
>> scheduling latency, but it may be helpful for the interrupt response
>> latency of other modules in the system?
> scheduling response latency is system-wide too.
>
> Then please document the interrupt latency improvement instead of the
> scheduling one in your commit log, otherwise it is just misleading.
>
> ```
> The CPU tick on a smartphone is only 4ms, which will undoubtedly affect
> scheduling efficiency.
> ```
>
> Thanks,
> Ming

Hi Ming,

OK, I will post patch v4 later with the commit message updated.
Thank you for your suggestion!
Besides this, do you have any other concerns?

Thanks
Zhang
