Message-ID: <20210702111747.GF2610@twin.jikos.cz>
Date: Fri, 2 Jul 2021 13:17:47 +0200
From: David Sterba <dsterba@...e.cz>
To: Qu Wenruo <quwenruo.btrfs@....com>
Cc: "Gustavo A. R. Silva" <gustavoars@...nel.org>,
Chris Mason <clm@...com>, Josef Bacik <josef@...icpanda.com>,
David Sterba <dsterba@...e.com>, linux-btrfs@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH][next] btrfs: Fix multiple out-of-bounds warnings
On Fri, Jul 02, 2021 at 06:20:33PM +0800, Qu Wenruo wrote:
>
>
> > On 2021/7/2 9:06 AM, Gustavo A. R. Silva wrote:
> > Fix the following out-of-bounds warnings by using a flexible-array
> > member *pages[] at the bottom of struct extent_buffer:
> >
> > fs/btrfs/disk-io.c:225:34: warning: array subscript 1 is above array bounds of ‘struct page *[1]’ [-Warray-bounds]
>
> The involved code looks like:
>
> static void csum_tree_block(struct extent_buffer *buf, u8 *result)
> {
>         struct btrfs_fs_info *fs_info = buf->fs_info;
>         const int num_pages = fs_info->nodesize >> PAGE_SHIFT;
>         ...
>         for (i = 1; i < num_pages; i++) {
>                 kaddr = page_address(buf->pages[i]);
>                 crypto_shash_update(shash, kaddr, PAGE_SIZE);
>         }
>
> For the Power case, the page size is 64K and the btrfs nodesize is at
> most 64K, thus num_pages will be either 0 or 1.
>
> In either case the loop body is never reached (the loop starts at
> i = 1), so it's not actually possible to go beyond the array boundary.
>
> To me, the real problem is that we have no way to tell the compiler
> that fs_info->nodesize is guaranteed to be no larger than 64K.
>
>
> Using a flexible array can silence the warning, but it only masks the
> problem, as the compiler then has no idea how large the array can really be.
Agreed, that's the problem: we'd be replacing compile-time static
information about the array size with dynamic information.
> David still has the final say on how to fix it, but I'm really wondering
> whether there is any way to give the compiler a hint about the possible
> value range of things like fs_info->nodesize?
We can add some macros that are also page-size dependent and evaluate
to a constant, which can in turn be used to optimize the loop down to a
single call of the loop body.
Looking at csum_tree_block, we should really use the num_extent_pages
helper, which does the same thing but handles the case where
nodesize >> PAGE_SHIFT is zero (and returns 1).