Date:   Fri, 1 Jun 2018 16:09:54 +0200
From:   David Sterba <dsterba@...e.cz>
To:     Ming Lei <ming.lei@...hat.com>
Cc:     Jens Axboe <axboe@...com>, Christoph Hellwig <hch@...radead.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Kent Overstreet <kent.overstreet@...il.com>,
        Huang Ying <ying.huang@...el.com>,
        linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        Theodore Ts'o <tytso@....edu>,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        Coly Li <colyli@...e.de>, Filipe Manana <fdmanana@...il.com>
Subject: Re: [RESEND PATCH V5 00/33] block: support multipage bvec

On Fri, May 25, 2018 at 11:45:48AM +0800, Ming Lei wrote:
>  fs/btrfs/check-integrity.c          |   6 +-
>  fs/btrfs/compression.c              |   8 +-
>  fs/btrfs/disk-io.c                  |   3 +-
>  fs/btrfs/extent_io.c                |  14 ++-
>  fs/btrfs/file-item.c                |   4 +-
>  fs/btrfs/inode.c                    |  12 ++-
>  fs/btrfs/raid56.c                   |   5 +-

For the btrfs bits,
Acked-by: David Sterba <dsterba@...e.com>

but that's from the bio API user perspective only, I'll leave the design
and implementation questions to others.

I've run the patchset through fstests with no problems. One thing that
caught my eye was the use of 'struct bvec_iter_all' in various functions.
As this structure is a compound of two others and is 40 bytes in size, I
was curious how much it increased stack consumption.
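
For reference, a sketch of where those 40 bytes come from. This is my
reading of the patchset, with sizes assumed for x86_64, not a quote of
the actual definition:

	struct bvec_iter_all {
		struct bvec_iter iter;	/* sector/size/idx/done: 24 bytes */
		struct bio_vec   bv;	/* page/len/offset:      16 bytes */
	};				/* 40 bytes, held on the caller's stack */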

Measured with -fstack-usage before and after patch 22/33 "btrfs: convert
to bio_for_each_page_all2":

-disk-io.c:btree_csum_one_bio                             48 static
+disk-io.c:btree_csum_one_bio                             80 static
-extent_io.c:end_bio_extent_buffer_writepage              56 static
+extent_io.c:end_bio_extent_buffer_writepage              80 static
-extent_io.c:end_bio_extent_readpage                      176 dynamic,bounded
+extent_io.c:end_bio_extent_readpage                      240 dynamic,bounded
-extent_io.c:end_bio_extent_writepage                     56 static
+extent_io.c:end_bio_extent_writepage                     120 static
-inode.c:btrfs_retry_endio                                96 dynamic,bounded
+inode.c:btrfs_retry_endio                                144 dynamic,bounded
-inode.c:btrfs_retry_endio_nocsum                         72 dynamic,bounded
+inode.c:btrfs_retry_endio_nocsum                         104 dynamic,bounded
-raid56.c:set_bio_pages_uptodate                          8 static
+raid56.c:set_bio_pages_uptodate                          40 static
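
The raid56 helper is the cleanest illustration: the jump from 8 to 40
bytes of static stack is exactly one iterator. A hypothetical before and
after sketch (not the literal btrfs code, and the macro arguments are my
guess from the patch subject):

	/* before: the old per-segment iterator keeps no state on the stack */
	static void set_bio_pages_uptodate(struct bio *bio)
	{
		struct bio_vec *bvec;
		int i;

		bio_for_each_segment_all(bvec, bio, i)
			SetPageUptodate(bvec->bv_page);
	}

	/* after: a struct bvec_iter_all lives on the stack */
	static void set_bio_pages_uptodate(struct bio *bio)
	{
		struct bio_vec *bvec;
		struct bvec_iter_all iter_all;	/* the 40 bytes measured above */
		int i;

		bio_for_each_page_all2(bvec, bio, i, iter_all)
			SetPageUptodate(bvec->bv_page);
	}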

It's not that bad, but it is still quite a lot of stack just to iterate
the pages of a bio. I think it's worth mentioning, as it affects several
other filesystems and could possibly be optimized in the future.
