Message-ID: <20190406123035.GA3018@ming.t460p>
Date:   Sat, 6 Apr 2019 20:30:37 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Nikolay Borisov <nborisov@...e.com>
Cc:     Jens Axboe <axboe@...nel.dk>, Omar Sandoval <osandov@...ndov.com>,
        linux-block@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        linux-btrfs <linux-btrfs@...r.kernel.org>
Subject: Re: Possible bio merging breakage in mp bio rework

On Sat, Apr 06, 2019 at 09:09:12AM +0300, Nikolay Borisov wrote:
> 
> 
> On 6.04.19 at 3:16, Ming Lei wrote:
> > Hi Nikolay,
> > 
> > On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
> >> Hello Ming, 
> >>
> >> Following the mp biovec rework, what is the maximum amount of
> >> data that a bio can contain? Should it be PAGE_SIZE * the bio_vec count
> > 
> > There isn't any maximum data limit on a bio submitted from the
> > filesystem; the block layer makes the final bio sent to the driver
> > correct by applying all of the queue limits, such as max segment
> > size, max segment count, max sectors, ...
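
To illustrate what "queue limit" means here: these are the caps a
driver declares when it sets up its request queue, and the block
layer splits each submitted bio against them. A minimal sketch with
made-up values (example_set_queue_limits is illustrative, not taken
from any real driver):

	#include <linux/blkdev.h>

	static void example_set_queue_limits(struct request_queue *q)
	{
		blk_queue_max_hw_sectors(q, 2048);	/* 1 MiB per request */
		blk_queue_max_segments(q, 128);		/* up to 128 segments */
		blk_queue_max_segment_size(q, 65536);	/* 64 KiB per segment */
	}
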
> > 
> >> or something else? Currently I can see bios as large as 127 megs
> >> on sequential workloads. I was prompted to look into this because
> >> btrfs has a memory allocation whose size depends on the amount of
> >> data in the bio, and that particular allocation started failing
> >> with order-6 allocs.
> > 
> > Could you share the code? I don't see why an order-6 alloc is a must.
> 
> When a bio is submitted, btrfs has to calculate the checksums for it;
> this happens in btrfs_csum_one_bio. Said checksums are stored in a
> kmalloc'ed array, whose size is calculated as:
> 
> 32 bytes of header plus 4 bytes per btrfs block (usually 4k) in the
> bio. So for a 127mb bio that would be 32 + ((134184960 / 4096) * 4) =
> 131072 bytes, i.e. a 128k, order-5 allocation. Admittedly the code in
> btrfs should know better than to make unbounded allocations without a
> fallback, but bios suddenly becoming rather unbounded in their size
> caught us off guard.

OK, thanks for your explanation.
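
Just to double-check the math, here is a quick sketch of the sizing
you describe, assuming a 32-byte struct header and one 4-byte crc32c
checksum per 4k block (plain userspace C; csum_array_bytes is a
made-up name, not a btrfs helper):

	#include <stdio.h>

	/* checksum array size for a given bio size */
	static unsigned long csum_array_bytes(unsigned long bio_size)
	{
		return 32 + (bio_size / 4096) * 4;
	}

	int main(void)
	{
		/* the ~127 MiB bio above: 131072 bytes, an order-5 kmalloc */
		printf("%lu\n", csum_array_bytes(134184960UL));
		return 0;
	}

A kmalloc of that order fails easily once memory is fragmented, which
is in line with the high-order failures you are seeing.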

Given that this is a btrfs-specific requirement, I'd suggest you set a
max size for the btrfs bio. For example, suppose the max checksum array
is 4k; then the max bio size can be calculated as:

	((4k - 32) / csum_size) * btrfs's block size

which should be big enough.
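
A sketch of that cap, assuming the 32-byte header and 4-byte crc32c
checksums from above (the macro names are made up, not existing btrfs
ones):

	#define CSUM_ARRAY_MAX	4096	/* cap the csum kmalloc at 4k */
	#define CSUM_HDR	32	/* ordered-sum struct header */
	#define CSUM_SIZE	4	/* crc32c */
	#define BTRFS_BLOCK	4096	/* usual btrfs block size */

	/* ((4096 - 32) / 4) * 4096 = 1016 blocks, about 4 MiB per bio */
	#define BTRFS_MAX_BIO_SIZE \
		(((CSUM_ARRAY_MAX - CSUM_HDR) / CSUM_SIZE) * BTRFS_BLOCK)

With that cap the checksum array always fits in a single page, i.e. an
order-0 allocation.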

Thanks,
Ming
