Message-ID: <alpine.LRH.2.11.1604070144150.12121@mail.ewheeler.net>
Date:	Thu, 7 Apr 2016 01:48:45 +0000 (UTC)
From:	Eric Wheeler <bcache@...ts.ewheeler.net>
To:	Ming Lei <ming.lei@...onical.com>
cc:	Jens Axboe <axboe@...com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-block@...r.kernel.org,
	Kent Overstreet <kent.overstreet@...il.com>,
	Christoph Hellwig <hch@...radead.org>,
	Sebastian Roesner <sroesner-kernelorg@...sner-online.de>,
	"4.2+" <stable@...r.kernel.org>, Shaohua Li <shli@...com>
Subject: Re: [PATCH] block: make sure big bio is split into at most 256
 bvecs

On Wed, 6 Apr 2016, Ming Lei wrote:

> On Wed, Apr 6, 2016 at 1:44 AM, Ming Lei <ming.lei@...onical.com> wrote:
> > Now that arbitrary bio sizes are supported, the incoming bio may
> > be very big. We have to split the bio into smaller bios so that
> > each holds at most BIO_MAX_PAGES bvecs, for safety reasons such
> > as bio_clone().
> >
> > This patch fixes the following kernel crash:
> >
> >> [  172.660142] BUG: unable to handle kernel NULL pointer dereference at
> >> 0000000000000028
> >> [  172.660229] IP: [<ffffffff811e53b4>] bio_trim+0xf/0x2a
> >> [  172.660289] PGD 7faf3e067 PUD 7f9279067 PMD 0
> >> [  172.660399] Oops: 0000 [#1] SMP
> >> [...]
> >> [  172.664780] Call Trace:
> >> [  172.664813]  [<ffffffffa007f3be>] ? raid1_make_request+0x2e8/0xad7 [raid1]
> >> [  172.664846]  [<ffffffff811f07da>] ? blk_queue_split+0x377/0x3d4
> >> [  172.664880]  [<ffffffffa005fb5f>] ? md_make_request+0xf6/0x1e9 [md_mod]
> >> [  172.664912]  [<ffffffff811eb860>] ? generic_make_request+0xb5/0x155
> >> [  172.664947]  [<ffffffffa0445c89>] ? prio_io+0x85/0x95 [bcache]
> >> [  172.664981]  [<ffffffffa0448252>] ? register_cache_set+0x355/0x8d0 [bcache]
> >> [  172.665016]  [<ffffffffa04497d3>] ? register_bcache+0x1006/0x1174 [bcache]
> >
> > Fixes: 54efd50 ("block: make generic_make_request handle arbitrarily sized bios")
> > Reported-by: Sebastian Roesner <sroesner-kernelorg@...sner-online.de>
> > Reported-by: Eric Wheeler <bcache@...ts.ewheeler.net>
> > Cc: stable@...r.kernel.org (4.2+)
> > Cc: Shaohua Li <shli@...com>
> > Signed-off-by: Ming Lei <ming.lei@...onical.com>
> > ---
> > I can reproduce the issue and verify the fix with the following approach:
> >         - create one raid1 over two virtio-blk devices
> >         - build a bcache device over the above raid1 and another cache device
> >         - set the cache mode to writeback
> >         - run random writes over ext4 on the bcache device
> >         - the crash can then be triggered
> 
> For anyone interested in the issue/fix, I forgot to mention:
> 
>        The bucket size should be set to more than 1M when making the bcache
>        device. In my test, the bucket size is 2M.
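
To make the trigger condition concrete: with 4 KiB pages, a 2M bucket
read is built as a single bio spanning 512 pages, twice the 256-bvec
limit that bio_clone() can safely handle. A minimal userspace sketch of
that arithmetic (the PAGE_SIZE and BIO_MAX_PAGES values are assumed
4.x-era defaults, not taken from this thread):

#include <stdio.h>

#define PAGE_SIZE     4096UL  /* assumed 4 KiB pages */
#define BIO_MAX_PAGES  256UL  /* assumed max bvecs a cloned bio can hold */

int main(void)
{
	unsigned long bucket_bytes = 2UL << 20;         /* 2M bucket */
	unsigned long pages = bucket_bytes / PAGE_SIZE; /* bvecs needed */

	printf("bvecs needed: %lu (limit %lu)\n", pages, BIO_MAX_PAGES);
	if (pages > BIO_MAX_PAGES)
		printf("bio exceeds BIO_MAX_PAGES and must be split\n");
	return 0;
}

With a 1M bucket the bio needs exactly 256 bvecs and just fits, which
would be why buckets bigger than 1M are needed to hit the crash.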

Does the bucket size dictate the ideal cached data size, or is it just an 
optimization for erase block boundaries on the SSD?  

Are reads/writes smaller than the bucket size still cached effectively, or 
does a 2MB bucket slurp up 2MB of backing data along with it?  

For example, if 64k is our ideal IO size, should we use 64k buckets?

--
Eric Wheeler

> 
> Thanks,
> Ming
> 
> >
> >  block/blk-merge.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > index 2613531..9a8651f 100644
> > --- a/block/blk-merge.c
> > +++ b/block/blk-merge.c
> > @@ -79,6 +79,18 @@ static inline unsigned get_max_io_size(struct request_queue *q,
> >         /* aligned to logical block size */
> >         sectors &= ~(mask >> 9);
> >
> > +       /*
> > +        * With arbitrary bio size, the incoming bio may be very big.
> > +        * We have to split the bio into small bios so that each holds
> > +        * at most BIO_MAX_PAGES bvecs for safety reason, such as
> > +        * bio_clone().
> > +        *
> > +        * In the future, the limit might be converted into per-queue
> > +        * flag.
> > +        */
> > +       sectors = min_t(unsigned, sectors, BIO_MAX_PAGES <<
> > +                       (PAGE_CACHE_SHIFT - 9));
> > +
> >         return sectors;
> >  }
> >
> > --
> > 1.9.1
> >
> 
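
Coming back to the patch above: with 4 KiB pages (PAGE_CACHE_SHIFT = 12)
the new cap works out to BIO_MAX_PAGES << (PAGE_CACHE_SHIFT - 9) =
256 << 3 = 2048 sectors, i.e. 1M, so a 2M bucket bio now gets split in
two. A minimal userspace sketch of the clamp (constant values assumed
as above, min_t() spelled out by hand):

#include <stdio.h>

#define PAGE_CACHE_SHIFT 12U   /* assumed: log2(4096) */
#define BIO_MAX_PAGES   256U   /* assumed 4.x-era limit */

static unsigned cap_sectors(unsigned sectors)
{
	/* same arithmetic as the hunk in get_max_io_size() */
	unsigned max = BIO_MAX_PAGES << (PAGE_CACHE_SHIFT - 9); /* 2048 */

	return sectors < max ? sectors : max;   /* min_t() equivalent */
}

int main(void)
{
	/* a 2M bucket bio is 4096 sectors; the clamp caps it at 1M */
	printf("capped at %u sectors (%u KiB)\n",
	       cap_sectors(4096), cap_sectors(4096) / 2);
	return 0;
}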
