Message-ID: <4A0D276B.4010703@kernel.org>
Date: Fri, 15 May 2009 17:27:23 +0900
From: Tejun Heo <tj@...nel.org>
To: Jan Beulich <JBeulich@...ell.com>
CC: mingo@...e.hu, andi@...stfloor.org, tglx@...utronix.de,
linux-kernel@...r.kernel.org, hpa@...or.com
Subject: Re: [GIT PATCH] x86,percpu: fix pageattr handling with remap allocator
Jan Beulich wrote:
>>>> Tejun Heo <tj@...nel.org> 15.05.09 10:11 >>>
>>>>> This would additionally address a potential problem on 32-bits -
>>>>> currently, for a 32-CPU system you consume half of the vmalloc space
>>>>> with PAE (on non-PAE you'd even exhaust it, but I think it's
>>>>> unreasonable to expect a system with 32 CPUs not to need PAE).
>>>> I recall having about the same conversation before. Looking up...
>>>>
>>>> -- QUOTE --
>>>> Actually, I've been looking at the numbers and I'm not sure the
>>>> concern is valid. On x86_32, the practical maximum number of
>>>> processors would be around 16, so it will end up at 32M, which isn't
>>>> nice, and it would probably be a good idea to introduce a parameter
>>>> to select which allocator to use, but it's still far from consuming
>>>> all of the VM area. On x86_64, the vmalloc area is obscenely large
>>>> at 2^45, i.e. 32 terabytes. Even with 4096 processors, a single
>>>> chunk is a measly 0.02%.
>>> Just to note - there must be a reason we (SuSE/Novell) build our default
>>> 32-bit kernel with support for 128 CPUs, which now is simply broken.
>> It's not broken, it will just fall back to the 4k allocator. Also, please
>
> I'm afraid I have to disagree: There's no check (not even in
> vm_area_register_early()) whether the vmalloc area is actually large enough
> to fulfill the request.
Hah... indeed. Well, it's solved now.
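
For reference, a back-of-the-envelope sketch of the kind of size check
being discussed -- the numbers and the names (remap_fits(), UNIT_SIZE,
VMALLOC_32) are illustrative assumptions, not the code that actually
went in:

/*
 * Rough sketch only.  Assumes a 2MB per-CPU unit for the remap
 * allocator and a 128MB vmalloc window on 32-bit; both numbers are
 * assumptions for illustration.
 */
#include <stdio.h>

#define UNIT_SIZE	(2UL << 20)	/* assumed 2MB unit per CPU */
#define VMALLOC_32	(128UL << 20)	/* assumed 128MB 32-bit vmalloc area */

/* would the remap allocator's chunk fit in the vmalloc area? */
static int remap_fits(unsigned long nr_cpus, unsigned long vmalloc_size)
{
	return nr_cpus * UNIT_SIZE <= vmalloc_size;
}

int main(void)
{
	/* 16 CPUs * 2MB = 32MB fits; 128 CPUs * 2MB = 256MB does not */
	printf("16 cpus:  %s\n", remap_fits(16, VMALLOC_32) ? "ok" : "fall back to 4k");
	printf("128 cpus: %s\n", remap_fits(128, VMALLOC_32) ? "ok" : "fall back to 4k");
	return 0;
}

On x86_64 the same arithmetic is a non-issue: 4096 CPUs * 2MB = 8GB
against a 2^45-byte (32TB) area, i.e. roughly 0.02%.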
Thanks.
--
tejun