Message-Id: <20170403154528.6470165dd791cf8a23ae57c8@linux-foundation.org>
Date: Mon, 3 Apr 2017 15:45:28 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Minchan Kim <minchan@...nel.org>
Cc: <linux-kernel@...r.kernel.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
<kernel-team@....com>, Jens Axboe <axboe@...nel.dk>,
Hannes Reinecke <hare@...e.com>,
Johannes Thumshirn <jthumshirn@...e.de>
Subject: Re: [PATCH 1/5] zram: handle multiple pages attached bio's bvec

On Mon, 3 Apr 2017 14:17:29 +0900 Minchan Kim <minchan@...nel.org> wrote:
> Johannes Thumshirn reported that the system panics when using an NVMe
> over Fabrics loopback target with zram.
>
> The reason is that zram expects each bvec in a bio to contain a single
> page, but nvme can attach multiple pages to a bio's bvec, so zram's
> index arithmetic goes wrong and the resulting out-of-bounds access
> causes the panic.
>
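The approach described above would look roughly like this (a sketch of
the idea, not the verbatim patch: zram_bvec_rw() is zram's existing
single-page read/write helper, and update_position() is an illustrative
name for the index/offset bookkeeping):

	/* advance the zram page index/offset past the chunk just done */
	static void update_position(u32 *index, int *offset,
				    struct bio_vec *bvec)
	{
		*index += (*offset + bvec->bv_len) / PAGE_SIZE;
		*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
	}

	static void __zram_make_request(struct zram *zram, struct bio *bio)
	{
		u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
		int offset = (bio->bi_iter.bi_sector &
			      (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
		bool is_write = op_is_write(bio_op(bio));
		struct bio_vec bvec;
		struct bvec_iter iter;

		bio_for_each_segment(bvec, bio, iter) {
			struct bio_vec bv = bvec;
			unsigned int unwritten = bvec.bv_len;

			/*
			 * A bvec may now span several pages: feed the
			 * old single-page path one chunk at a time.
			 */
			do {
				bv.bv_len = min_t(unsigned int,
						  PAGE_SIZE - offset,
						  unwritten);
				if (zram_bvec_rw(zram, &bv, index, offset,
						 is_write) < 0) {
					bio_io_error(bio);
					return;
				}

				bv.bv_offset += bv.bv_len;
				unwritten -= bv.bv_len;
				update_position(&index, &offset, &bv);
			} while (unwritten);
		}
		bio_endio(bio);
	}
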
> It could be solved by limiting max_sectors to SECTORS_PER_PAGE as in
> [1], but that makes zram slow because every bio would have to be split
> per page. Instead, this patch makes zram aware of multiple pages in a
> bvec, solving the problem without any regression.
>
> [1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
> bounds accesses
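For contrast, the [1] workaround caps the queue limits when the zram
disk is created, so the block layer splits bios at page boundaries
before zram ever sees them. Roughly (again a sketch, the commit's exact
calls may differ; SECTORS_PER_PAGE is zram's own constant from
zram_drv.h):

	#define SECTORS_PER_PAGE_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
	#define SECTORS_PER_PAGE	(1 << SECTORS_PER_PAGE_SHIFT)

	/* never let a request carry more than one page of data */
	blk_queue_max_hw_sectors(zram->disk->queue, SECTORS_PER_PAGE);

Simple, but it forces every large bio to be split into page-sized
pieces, which is exactly the slowdown the patch avoids.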

This isn't a cleanup - it fixes a panic (or is it a BUG or is it an
oops, or...)

How serious is this bug? Should the fix be backported into -stable
kernels? etc.

A better description of the bug's behaviour would be appropriate.

> Cc: Jens Axboe <axboe@...nel.dk>
> Cc: Hannes Reinecke <hare@...e.com>
> Reported-by: Johannes Thumshirn <jthumshirn@...e.de>
> Tested-by: Johannes Thumshirn <jthumshirn@...e.de>
> Reviewed-by: Johannes Thumshirn <jthumshirn@...e.de>
> Signed-off-by: Johannes Thumshirn <jthumshirn@...e.de>
> Signed-off-by: Minchan Kim <minchan@...nel.org>

This signoff trail is confusing. It somewhat implies that Johannes
authored the patch, which I don't think is the case?