Message-ID: <YKJBWClI7sUeABDs@infradead.org>
Date:   Mon, 17 May 2021 11:11:36 +0100
From:   Christoph Hellwig <hch@...radead.org>
To:     Changheun Lee <nanich.lee@...sung.com>
Cc:     hch@...radead.org, Johannes.Thumshirn@....com, alex_y_xu@...oo.ca,
        asml.silence@...il.com, axboe@...nel.dk, bgoncalv@...hat.com,
        bvanassche@....org, damien.lemoal@....com,
        gregkh@...uxfoundation.org, jaegeuk@...nel.org,
        jisoo2146.oh@...sung.com, junho89.kim@...sung.com,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        ming.lei@...hat.com, mj0123.lee@...sung.com, osandov@...com,
        patchwork-bot@...nel.org, seunghwan.hyun@...sung.com,
        sookwan7.kim@...sung.com, tj@...nel.org, tom.leiming@...il.com,
        woosung2.lee@...sung.com, yi.zhang@...hat.com,
        yt0928.kim@...sung.com
Subject: Re: [PATCH v10 0/1] bio: limit bio max size

On Fri, May 14, 2021 at 03:32:41PM +0900, Changheun Lee wrote:
> I tested a 512MB file read with direct I/O, with a chunk size of 64MB.
>  - on SCSI disk, with no limit of bio max size(4GB) : avg. 630 MB/s
>  - on SCSI disk, with limit bio max size to 1MB     : avg. 645 MB/s
>  - on ramdisk, with no limit of bio max size(4GB)   : avg. 2749 MB/s
>  - on ramdisk, with limit bio max size to 1MB       : avg. 3068 MB/s
> 
> I set up the ramdisk environment as below.
>  - dd if=/dev/zero of=/mnt/ramdisk.img bs=$((1024*1024)) count=1024
>  - mkfs.ext4 /mnt/ramdisk.img
>  - mkdir /mnt/ext4ramdisk
>  - mount -o loop /mnt/ramdisk.img /mnt/ext4ramdisk
> 
> With a low performance disk, the bio submit delay caused by a large bio
> size is not a big proportion, so it can't be felt easily. But it will
> show up with a high performance disk.
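
(For reference, the exact test command wasn't posted, so this is just a
guess at the methodology: a direct I/O read of a 512MB file in 64MB chunks
along the lines described above could look roughly like the program below.
The file path and the chunk/file sizes are assumptions.)

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK_SIZE	(64UL << 20)	/* 64MB per read() */
#define FILE_SIZE	(512UL << 20)	/* 512MB total */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/ext4ramdisk/testfile";
	void *buf;
	size_t done = 0;
	ssize_t ret;
	int fd;

	/* O_DIRECT wants a suitably aligned buffer; 4KB covers most devices */
	if (posix_memalign(&buf, 4096, CHUNK_SIZE))
		return 1;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	while (done < FILE_SIZE) {
		ret = read(fd, buf, CHUNK_SIZE);
		if (ret <= 0) {
			if (ret < 0)
				perror("read");
			break;
		}
		done += ret;
	}

	close(fd);
	free(buf);
	return 0;
}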

So let's attack the problem properly:

 1) switch f2fs to a direct I/O implementation that does not suck
 2) look into optimizing the iomap code to e.g. submit the bio once
    it is larger than queue_io_opt(), without ever failing to add to a
    bio, which would be annoying for things like huge pages (see the
    rough sketch below).
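
Something along these lines is the rough shape of 2) - to be clear, this
is just a sketch to illustrate the idea, not actual iomap code, and the
helper name and call site are invented:

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Rough sketch only: once the bio being built has grown past the queue's
 * optimal I/O size, submit it and let the caller start a fresh one.  The
 * page (or huge page) has already been added at this point, so nothing
 * ever fails here.
 */
static struct bio *submit_bio_if_past_io_opt(struct request_queue *q,
					     struct bio *bio)
{
	unsigned int io_opt = queue_io_opt(q);

	/* no optimal I/O size advertised: just keep growing the bio */
	if (!io_opt || bio->bi_iter.bi_size < io_opt)
		return bio;

	submit_bio(bio);
	return NULL;		/* caller allocates a new bio for the rest */
}

That way io_opt acts as a soft hint for when to cut over to a new bio,
instead of a hard cap that an add could bounce off of.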
