Message-ID: <4F43C825.1040501@redhat.com>
Date: Tue, 21 Feb 2012 10:36:53 -0600
From: Eric Sandeen <sandeen@...hat.com>
To: Xi Wang <xi.wang@...il.com>
CC: Haogang Chen <haogangchen@...il.com>, Theodore Tso <tytso@....edu>,
Andreas Dilger <adilger.kernel@...ger.ca>,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
Yongqiang Yang <xiaoqiangnk@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] FS: ext4: fix integer overflow in alloc_flex_gd()

On 02/21/2012 07:55 AM, Xi Wang wrote:
> On Feb 20, 2012, at 6:47 PM, Eric Sandeen wrote:
>> Hm, this raises a few questions, I think.
>>
>> On the one hand, making sure the kmalloc arg doesn't overflow here is
>> certainly a good thing and probably the right thing to do in the short term.
>>
>> So I guess:
>>
>> Reviewed-by: Eric Sandeen <sandeen@...hat.com>
>>
>> for that, to close the hole.
>
> Another possibility is to wait for knalloc/kmalloc_array in the -mm
> tree, which is basically the non-zeroing version of kcalloc that
> performs overflow checking.
>
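
That would turn the allocation in alloc_flex_gd() into something like
this; a sketch only, assuming the -mm kmalloc_array() keeps the
(n, size, flags) signature and returns NULL on overflow:

	flex_gd->groups = kmalloc_array(flexbg_size,
					sizeof(struct ext4_new_group_data),
					GFP_NOFS);
	if (flex_gd->groups == NULL)	/* NULL covers the overflow case too */
		goto out2;
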
>> Doesn't this also mean that a valid s_log_groups_per_flex (i.e. 31)
>> will fail in this resize code? That would be an unexpected outcome.
>> 2^31 groups per flex is a little crazy, but still technically valid
>> according to the limits in the code.
>
> Or we could limit s_log_groups_per_flex/groups_per_flex to a
> reasonable upper bound in ext4_fill_flex_info(), right?

Depends on the "flex_bg" design intent, I guess.

I don't know if the 2^31 was an intended design limit, or just a
mathematical limit that falls out of the container sizes, etc.
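
If we did decide to clamp it, ext4_fill_flex_info() could validate the
field up front. A rough sketch (the upper bound is exactly the open
design question; 31 just keeps the shift well-defined):

	sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
	if (sbi->s_log_groups_per_flex < 1 ||
	    sbi->s_log_groups_per_flex > 31) {
		/* out of range: fall back to non-flex_bg behavior */
		sbi->s_log_groups_per_flex = 0;
		return 1;
	}
	groups_per_flex = 1 << sbi->s_log_groups_per_flex;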

I'd have to look at the resize code more carefully, but I can't imagine
that it's imperative to allocate this stuff all at once.
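
For example, alloc_flex_gd() could cap how many groups one batch covers
and let the resize loop iterate. Hand-waving and untested, with
MAX_RESIZE_BG as a made-up cap:

	/* hypothetical cap so one flex_gd never needs a huge allocation */
	#define MAX_RESIZE_BG	1024

	if (flexbg_size > MAX_RESIZE_BG)
		flexbg_size = MAX_RESIZE_BG;
	flex_gd->count = flexbg_size;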

-Eric
> - xi
>