Date:	Wed, 6 Apr 2016 09:20:59 +0800
From:	Ming Lei <ming.lei@...onical.com>
To:	Kent Overstreet <kent.overstreet@...il.com>
Cc:	Jens Axboe <axboe@...com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-block@...r.kernel.org, Christoph Hellwig <hch@...radead.org>,
	Eric Wheeler <bcache@...ts.ewheeler.net>,
	Sebastian Roesner <sroesner-kernelorg@...sner-online.de>,
	"4.2+" <stable@...r.kernel.org>, Shaohua Li <shli@...com>
Subject: Re: [PATCH] block: make sure big bio is splitted into at most 256 bvecs

On Wed, Apr 6, 2016 at 9:10 AM, Kent Overstreet
<kent.overstreet@...il.com> wrote:
> On Wed, Apr 06, 2016 at 08:59:31AM +0800, Ming Lei wrote:
>> On Wed, Apr 6, 2016 at 8:30 AM, Kent Overstreet
>> <kent.overstreet@...il.com> wrote:
>> > On Wed, Apr 06, 2016 at 01:44:06AM +0800, Ming Lei wrote:
>> >> After arbitrary bio size is supported, the incoming bio may
>> >> be very big. We have to split the bio into small bios so that
>> >> each holds at most BIO_MAX_PAGES bvecs, for safety in callers
>> >> such as bio_clone().
>> >>
>> >> This patch fixes the following kernel crash:
>> >
>> > Ming, let's not do it this way; drivers that don't clone biovecs are the norm -
>> > instead, md has its own queue limits that it ought to be setting up correctly.
>>
>> Besides md, there are also several other users of bio_clone():
>>
>>          - drbd
>>          - osdblk
>>          - pktcdvd
>>          - xen-blkfront
>>          - verify code of bcache
>>
>> I don't like bio_clone() either, as it can cause trouble for multipage bvecs.
>>
>> How about fixing the issue with this simple patch first? Then, once we limit
>> all of the above queues by max sectors, the global limit can be removed, as
>> mentioned in the comment.
>
> just do this:
>
> void blk_set_limit_clonable(struct queue_limits *lim)
> {
>         lim->max_segments = min(lim->max_segments, BIO_MAX_PAGES);
> }

As I mentioned, it is __not__ correct to use max_segments; the issue is
related to max sectors. Please see the code of bio_clone_bioset():

      bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs);

bio_segments() actually returns the number of pages.
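
To make the arithmetic concrete, here is a minimal userspace sketch (not
kernel code; it models the bio as single-page bvecs and assumes the usual
4 KiB page size and BIO_MAX_PAGES of 256):

#include <stdio.h>

#define PAGE_SIZE     4096u
#define BIO_MAX_PAGES 256u	/* most bvecs bio_alloc_bioset() can hand out */

int main(void)
{
	/* Model a 2 MiB bio built from single-page bvecs. */
	unsigned int bio_bytes = 2u * 1024 * 1024;
	unsigned int segs = 0;

	/* bio_segments() effectively counts one segment per page here. */
	for (unsigned int off = 0; off < bio_bytes; off += PAGE_SIZE)
		segs++;

	printf("bio_segments() would return %u\n", segs);
	if (segs > BIO_MAX_PAGES)
		printf("bio_clone_bioset() would then ask for %u bvecs, "
		       "more than BIO_MAX_PAGES (%u)\n", segs, BIO_MAX_PAGES);
	return 0;
}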

>
> and then call that from the appropriate drivers. It should be like 20 minutes of
> work.
>
> My issue is that your approach of just enforcing a global limit is a step in the
> wrong direction - we want to get _away_ from that and move towards drivers
> specifying _directly_ what their limits are: more straightforward, less opaque.
>
> Also, your patch is wrong, as it'll break if there are bvecs that aren't full
> pages.

I don't understand why my patch is wrong, since we can split anywhere
in a bio. Could you explain it a bit?
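
For reference, what I mean by splitting is roughly the following (a userspace
sketch only, not the actual patch; it assumes full-page bvecs and a 4 KiB
page size):

#include <stdio.h>

#define PAGE_SIZE     4096u
#define BIO_MAX_PAGES 256u

int main(void)
{
	/* A 3 MiB bio modelled as 768 single-page bvecs. */
	unsigned int total_pages = 3u * 1024 * 1024 / PAGE_SIZE;
	unsigned int done = 0, piece = 0;

	/* Split wherever we like; here simply every BIO_MAX_PAGES pages. */
	while (done < total_pages) {
		unsigned int chunk = total_pages - done;

		if (chunk > BIO_MAX_PAGES)
			chunk = BIO_MAX_PAGES;
		printf("piece %u: %u bvecs (%u sectors)\n",
		       ++piece, chunk, chunk * (PAGE_SIZE / 512));
		done += chunk;
	}
	return 0;
}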

Thanks,
Ming

