Message-ID: <32c83d67-1d69-4d69-8d00-274cf0d0ff62@amazon.com>
Date: Wed, 11 Jun 2025 16:24:06 +0100
From: "Mohamed Abuelfotoh, Hazem" <abuehaze@...zon.com>
To: Ming Lei <ming.lei@...hat.com>
CC: <stable@...r.kernel.org>, kernel test robot <oliver.sang@...el.com>, Hagar
 Hemdan <hagarhem@...zon.com>, Shaoying Xu <shaoyi@...zon.com>, "Jens Axboe"
	<axboe@...nel.dk>, "Michael S. Tsirkin" <mst@...hat.com>, Jason Wang
	<jasowang@...hat.com>, Paolo Bonzini <pbonzini@...hat.com>, Stefan Hajnoczi
	<stefanha@...hat.com>, Eugenio Pérez <eperezma@...hat.com>,
	Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Keith Busch <kbusch@...nel.org>,
	Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
	<linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<virtualization@...ts.linux.dev>, <linux-nvme@...ts.infradead.org>,
	<linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH] Revert "block: don't reorder requests in blk_add_rq_to_plug"

On 11/06/2025 16:10, Ming Lei wrote:
> 
> On Wed, Jun 11, 2025 at 12:14:54PM +0000, Hazem Mohamed Abuelfotoh wrote:
>> This reverts commit e70c301faece15b618e54b613b1fd6ece3dd05b4.
>>
>> Commit e70c301faece ("block: don't reorder requests in
>> blk_add_rq_to_plug") reversed how requests are stored in the blk_plug
>> list. This has a significant impact on bio merging with requests that
>> are already on the plug list. The impact was reported in [1] and can
>> easily be reproduced with a 4k randwrite fio benchmark on an NVMe-based
>> SSD with no filesystem on the disk.
>>
>> My benchmark is:
>>
>>      fio --time_based --name=benchmark --size=50G --rw=randwrite \
>>        --runtime=60 --filename="/dev/nvme1n1" --ioengine=psync \
>>        --randrepeat=0 --iodepth=1 --fsync=64 --invalidate=1 \
>>        --verify=0 --verify_fatal=0 --blocksize=4k --numjobs=4 \
>>        --group_reporting
>>
>> The benchmark ran on a 1.9 TiB SSD (180K max IOPS) attached to an
>> i3.16xlarge AWS EC2 instance.
>>
>> Kernel        |  fio BW (MiB/s)     | I/O size (iostat)
>> --------------+---------------------+--------------------
>> 6.15.1        |   362               |  2KiB
>> 6.15.1+revert |   660 (+82%)        |  4KiB
>> --------------+---------------------+--------------------
> 
> I just ran a quick test in my test VM, but can't reproduce it.

Possibly you aren't hitting the disk IOPS limit because you are using a 
more powerful SSD? In my case I am using an i3.16xlarge EC2 instance 
running AL2023, or maybe fio behaves differently across distributions; 
AL2023 ships fio-3.32.
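
For reference, the I/O size column in the table is what iostat reports 
during the run; something along these lines should show it (nvme1n1 
assumed, and the column name depends on the sysstat version):

     # extended per-device stats once per second while the fio job runs
     iostat -x 1 nvme1n1
     # recent sysstat reports the average write request size as wareq-sz (kB);
     # older versions show avgrq-sz in 512-byte sectors instead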

> Also, I'm curious: why does writeback produce so many 2KiB bios?

Good question. Unfortunately I don't have a good answer for why we see 
2KiB bios even though I am specifying 4K as the I/O size in fio; this is 
something we should probably explore further.
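
If anyone wants to dig into that, one option is to histogram bio and 
request sizes at the block layer while the benchmark runs, e.g. a rough 
bpftrace sketch along these lines (assuming the block:block_bio_queue and 
block:block_rq_issue tracepoints carry nr_sector/bytes as on recent 
kernels):

     # size distribution of bios as queued vs. requests as issued
     bpftrace -e '
       tracepoint:block:block_bio_queue { @bio_bytes = hist(args->nr_sector * 512); }
       tracepoint:block:block_rq_issue  { @rq_bytes  = hist(args->bytes); }
     '

Comparing the two histograms should at least tell us whether the 2KiB I/O 
is already there when writeback submits the bios or only shows up at the 
request level.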

Hazem



