Date: Wed, 30 Mar 2022 09:54:03 +0800
From: "yukuai (C)" <yukuai3@...wei.com>
To: Jens Axboe <axboe@...nel.dk>, Christoph Hellwig <hch@...radead.org>
CC: <andriy.shevchenko@...ux.intel.com>, <john.garry@...wei.com>, <ming.lei@...hat.com>, <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>, <yi.zhang@...wei.com>
Subject: Re: [PATCH -next RFC 2/6] block: refactor to split bio thoroughly

On 2022/03/29 22:41, Jens Axboe wrote:
> On 3/29/22 8:40 AM, Christoph Hellwig wrote:
>> On Tue, Mar 29, 2022 at 08:35:29AM -0600, Jens Axboe wrote:
>>>> But more importantly why does your use case even have splits that get
>>>> submitted together? Is this a case of Linus' stupidly low default
>>>> max_sectors when the hardware supports more, or is the hardware limited
>>>> to a low number of sectors per request? Or do we hit another reason
>>>> for the split?
>>>
>>> See the posted use case, it's running 512kb ios on a 128kb device.

Hi,

The problem was first found during a kernel upgrade (v3.10 to v4.18). We
maintain a series of I/O performance test suites, and one of the tests is
fio random read/write with a large block size. In that environment,
'max_sectors_kb' is 256 KB and the fio block size is 1 MB.

>>
>> That is an awfully low limit these days. I'm really not sure we should
>> optimize the block layer for that.
>
> That's exactly what my replies have been saying. I don't think this is
> a relevant thing to optimize for.

If the case where large I/Os are submitted together is not a common
one (probably not, since it has been a long time without complaints),
I agree that we should not optimize the block layer for it.

Thanks,
Kuai

>
> Fixing fairness for wakeups seems useful, however.
>
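[Editor's note: to make the reported configuration concrete, here is a
minimal sketch of how one might reproduce the split scenario described
above. The device path, iodepth, and runtime are illustrative assumptions,
not values from the original report; only bs=1m and max_sectors_kb=256
come from the thread. With these limits, each 1 MB bio must be split into
four 256 KB requests.]

    # Check the per-request size limit (path is illustrative):
    cat /sys/block/sda/queue/max_sectors_kb   # 256 in the reported setup

    # Issue 1 MB random read/write I/O, forcing the block layer to split
    # each bio into multiple 256 KB requests:
    fio --name=randrw-bigbs --filename=/dev/sda \
        --direct=1 --rw=randrw --bs=1m \
        --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based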