Message-ID: <CACVXFVP8KxHqLoTxSdouPkjz4uuQvAwKY2Tj2x68OuFfoc-peQ@mail.gmail.com>
Date: Thu, 7 Apr 2016 09:49:45 +0800
From: Ming Lei <ming.lei@...onical.com>
To: Eric Wheeler <bcache@...ts.ewheeler.net>
Cc: Jens Axboe <axboe@...com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-block@...r.kernel.org,
Kent Overstreet <kent.overstreet@...il.com>,
Christoph Hellwig <hch@...radead.org>,
Sebastian Roesner <sroesner-kernelorg@...sner-online.de>,
"4.2+" <stable@...r.kernel.org>, Shaohua Li <shli@...com>
Subject: Re: [PATCH] block: make sure big bio is split into at most 256 bvecs
On Thu, Apr 7, 2016 at 9:36 AM, Eric Wheeler <bcache@...ts.ewheeler.net> wrote:
> On Wed, 6 Apr 2016, Ming Lei wrote:
>
>> After arbitrary bio sizes were allowed, the incoming bio may be
>> very big. We have to split such a bio into smaller bios, each
>> holding at most BIO_MAX_PAGES bvecs, for the safety of code such
>> as bio_clone().
>>
>> This patch fixes the following kernel crash:
>>
>> > [ 172.660142] BUG: unable to handle kernel NULL pointer dereference at
>> > 0000000000000028
>> > [ 172.660229] IP: [<ffffffff811e53b4>] bio_trim+0xf/0x2a
>> > [ 172.660289] PGD 7faf3e067 PUD 7f9279067 PMD 0
>> > [ 172.660399] Oops: 0000 [#1] SMP
>> > [...]
>> > [ 172.664780] Call Trace:
>> > [ 172.664813] [<ffffffffa007f3be>] ? raid1_make_request+0x2e8/0xad7 [raid1]
>> > [ 172.664846] [<ffffffff811f07da>] ? blk_queue_split+0x377/0x3d4
>> > [ 172.664880] [<ffffffffa005fb5f>] ? md_make_request+0xf6/0x1e9 [md_mod]
>> > [ 172.664912] [<ffffffff811eb860>] ? generic_make_request+0xb5/0x155
>> > [ 172.664947] [<ffffffffa0445c89>] ? prio_io+0x85/0x95 [bcache]
>> > [ 172.664981] [<ffffffffa0448252>] ? register_cache_set+0x355/0x8d0 [bcache]
>> > [ 172.665016] [<ffffffffa04497d3>] ? register_bcache+0x1006/0x1174 [bcache]
>>
>> Fixes: 54efd50 ("block: make generic_make_request handle arbitrarily sized bios")
>> Reported-by: Sebastian Roesner <sroesner-kernelorg@...sner-online.de>
>> Reported-by: Eric Wheeler <bcache@...ts.ewheeler.net>
>> Cc: stable@...r.kernel.org (4.2+)
>
> Ming Lei,
>
> get_max_io_size() doesn't appear until 4.5, based on a quick LXR check,
> so I don't think we will see this land in stable.
>
> It would be nice to see this fixed in 4.1 (if affected there). Is there
> another place this could be applied to be a bit more backward compatible?

The issue was introduced in v4.3 via 54efd50, so v4.1 should not be
affected.

The v1 patch doesn't touch get_max_io_size(), so it should be simple
enough to backport to earlier stable kernels. Please try it:

http://marc.info/?l=linux-block&m=145991422422927&w=2
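
For background on the oops itself: bio_clone() allocates a fresh bvec
table sized to the source bio's segment count, and bvec_alloc() cannot
provide more than BIO_MAX_PAGES (256) entries, so cloning an over-sized
bio comes back NULL; the raid1 read path then hands that NULL straight
to bio_trim(). Below is a minimal userspace sketch of that failure
shape - toy types and names, not the kernel API:

#include <stdio.h>
#include <stdlib.h>

#define BIO_MAX_PAGES 256            /* value in v4.x-era kernels */

/* Toy stand-in for struct bio; only the bvec count matters here. */
struct toy_bio {
	unsigned short bi_vcnt;      /* number of bvecs */
};

/* Like bio_clone(): fails when the source needs too many bvecs. */
static struct toy_bio *toy_clone(const struct toy_bio *src)
{
	struct toy_bio *b;

	if (src->bi_vcnt > BIO_MAX_PAGES)
		return NULL;         /* mirrors bvec_alloc() failing */
	b = malloc(sizeof(*b));
	if (b)
		*b = *src;
	return b;
}

int main(void)
{
	struct toy_bio big = { .bi_vcnt = 300 };   /* > 256 bvecs */
	struct toy_bio *clone = toy_clone(&big);

	/*
	 * The raid1 path in the trace above did the equivalent of
	 * bio_trim(clone, ...) without this NULL check, hence the
	 * NULL pointer dereference at bio_trim+0xf.
	 */
	if (!clone)
		printf("clone failed: %u bvecs > BIO_MAX_PAGES\n",
		       big.bi_vcnt);
	free(clone);
	return 0;
}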
Thanks,
>
>> Cc: Shaohua Li <shli@...com>
>> Signed-off-by: Ming Lei <ming.lei@...onical.com>
>> ---
>> I can reproduce the issue and verify the fix with the following approach:
>> - create one raid1 array over two virtio-blk devices
>> - build a bcache device over the above raid1 and another cache device
>> - set the cache mode to writeback
>> - run random writes over ext4 on the bcache device
>> - the crash can then be triggered
>>
>> block/blk-merge.c | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/block/blk-merge.c b/block/blk-merge.c
>> index 2613531..9a8651f 100644
>> --- a/block/blk-merge.c
>> +++ b/block/blk-merge.c
>> @@ -79,6 +79,18 @@ static inline unsigned get_max_io_size(struct request_queue *q,
>> /* aligned to logical block size */
>> sectors &= ~(mask >> 9);
>>
>> + /*
>> +  * With arbitrary bio sizes, the incoming bio may be very big.
>> +  * We have to split the bio into smaller bios so that each
>> +  * holds at most BIO_MAX_PAGES bvecs, for the safety of code
>> +  * such as bio_clone().
>> +  *
>> +  * In the future, this limit might be converted into a
>> +  * per-queue flag.
>> +  */
>> + sectors = min_t(unsigned, sectors, BIO_MAX_PAGES <<
>> + (PAGE_CACHE_SHIFT - 9));
>> +
>> return sectors;
>> }
>>
>> --
>> 1.9.1
>>
>>
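
While we are at it, here is what the added cap works out to on a
4K-page kernel (PAGE_CACHE_SHIFT == 12, BIO_MAX_PAGES == 256); a
standalone snippet to double-check the arithmetic:

#include <stdio.h>

#define BIO_MAX_PAGES    256         /* v4.x-era kernel value */
#define PAGE_CACHE_SHIFT 12          /* assumes 4K pages */

int main(void)
{
	/*
	 * BIO_MAX_PAGES << (PAGE_CACHE_SHIFT - 9)
	 *   = 256 << 3 = 2048 sectors = 1 MiB per split bio
	 */
	unsigned sectors = BIO_MAX_PAGES << (PAGE_CACHE_SHIFT - 9);

	printf("cap = %u sectors = %u KiB\n", sectors, sectors / 2);
	return 0;
}

So any single bio bigger than 1 MiB now gets split, which keeps every
clone within the 256-bvec limit.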