Message-ID: <8761wth5ph.fsf@rustcorp.com.au>
Date: Tue, 02 Jul 2013 15:19:14 +0930
From: Rusty Russell <rusty@...tcorp.com.au>
To: Chegu Vinod <chegu_vinod@...com>
Cc: prarit@...hat.com, LKML <linux-kernel@...r.kernel.org>,
Gleb Natapov <gleb@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>, KVM <kvm@...r.kernel.org>
Subject: Re: kvm_intel: Could not allocate 42 bytes percpu data

Chegu Vinod <chegu_vinod@...com> writes:
> On 6/30/2013 11:22 PM, Rusty Russell wrote:
>> Chegu Vinod <chegu_vinod@...com> writes:
>>> Hello,
>>>
>>> Lots (~700+) of the following messages are showing up in the dmesg of a
>>> 3.10-rc1 based kernel (Host OS is running on a large socket count box
>>> with HT-on).
>>>
>>> [ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc from
>>> reserved chunk failed
>>> [ 82.272633] kvm_intel: Could not allocate 42 bytes percpu data
>> Woah, weird....
>>
>> Oh. Shit. Um, this is embarrassing.
>>
>> Thanks,
>> Rusty.
>
>
> Thanks for your response!
>
>> ===
>> module: do percpu allocation after uniqueness check. No, really!
>>
>> v3.8-rc1-5-g1fb9341 was supposed to stop parallel kvm loads exhausting
>> percpu memory on large machines:
>>
>> Now we have a new state MODULE_STATE_UNFORMED, we can insert the
>> module into the list (and thus guarantee its uniqueness) before we
>> allocate the per-cpu region.
>>
>> In my defence, it didn't actually say the patch did this. Just that
>> we "can".
>>
>> This patch actually *does* it.
>>
>> Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
>> Tested-by: Noone it seems.
>
> Your following "updated" fix seems to be working fine on the larger
> socket count machine with HT-on.
OK, did you definitely revert every other workaround?
If so, please give me a Tested-by: line...
Thanks,
Rusty.
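
For the curious, here is a minimal userspace sketch of the ordering the patch
above enforces: register the module in the list in an unformed state first
(guaranteeing uniqueness under module_mutex), and only then do the scarce
per-cpu allocation, so parallel loads of the same module -- e.g. kvm_intel
modprobed once per CPU on a large HT box -- don't each dip into the reserved
percpu chunk. The names here (module_sim, add_unformed, fake_percpu_alloc,
load_module_sim) are invented for illustration only; the real code lives in
kernel/module.c.

/*
 * Userspace sketch (NOT the real kernel code) of "uniqueness check before
 * per-cpu allocation".  Build with: gcc -pthread -o modsim modsim.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct module_sim {
	char name[64];
	int state;			/* 0 = UNFORMED, 1 = LIVE */
	struct module_sim *next;
};

static struct module_sim *modules;
static pthread_mutex_t module_mutex = PTHREAD_MUTEX_INITIALIZER;
static int percpu_allocs;		/* stands in for the reserved chunk */

/* Pretend per-cpu allocator: just count how many times we "allocate". */
static void fake_percpu_alloc(void)
{
	__sync_fetch_and_add(&percpu_allocs, 1);
}

/*
 * Insert the module under the lock unless one with the same name already
 * exists.  (The real kernel waits and retries on UNFORMED entries; bailing
 * out keeps the sketch short.)
 */
static int add_unformed(struct module_sim *mod)
{
	struct module_sim *m;

	pthread_mutex_lock(&module_mutex);
	for (m = modules; m; m = m->next) {
		if (strcmp(m->name, mod->name) == 0) {
			pthread_mutex_unlock(&module_mutex);
			return -1;	/* duplicate: someone else loads it */
		}
	}
	mod->next = modules;
	modules = mod;
	pthread_mutex_unlock(&module_mutex);
	return 0;
}

/* Fixed ordering: uniqueness check first, per-cpu allocation second. */
static void *load_module_sim(void *name)
{
	struct module_sim *mod = calloc(1, sizeof(*mod));

	snprintf(mod->name, sizeof(mod->name), "%s", (char *)name);
	if (add_unformed(mod) < 0) {
		free(mod);		/* loser: no percpu memory touched */
		return NULL;
	}
	fake_percpu_alloc();		/* only the winner allocates */
	mod->state = 1;			/* LIVE */
	return NULL;
}

int main(void)
{
	pthread_t t[8];
	int i;

	/* Many "CPUs" racing to load the same module, as on a big HT box. */
	for (i = 0; i < 8; i++)
		pthread_create(&t[i], NULL, load_module_sim, "kvm_intel");
	for (i = 0; i < 8; i++)
		pthread_join(t[i], NULL);

	printf("percpu allocations: %d\n", percpu_allocs);
	return 0;
}

With this ordering the sketch prints a single allocation no matter how many
loaders race; with the ordering the patch removes (allocate the per-cpu
region first, check for duplicates later), every racing loader would touch
the reserved chunk before noticing the duplicate -- which is what exhausted
it in the dmesg output above.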