Message-ID: <503DFFF4.4060909@mellanox.com>
Date: Wed, 29 Aug 2012 14:41:40 +0300
From: Haggai Eran <haggaie@...lanox.com>
To: David Rientjes <rientjes@...gle.com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
Pekka Enberg <penberg@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Or Gerlitz <ogerlitz@...lanox.com>,
Shachar Raindel <raindel@...lanox.com>
Subject: Re: [patch v3.6] mm, slab: lock the correct nodelist after reenabling
irqs
On 29/08/2012 05:57, David Rientjes wrote:
> On Tue, 28 Aug 2012, Haggai Eran wrote:
>
>> Hi,
>>
>> I believe I have encountered a bug in kernel 3.6-rc3. It starts with the
>> assertion in mm/slab.c:2629 failing, and then the system hangs. I can
>> reproduce this bug by running a large compilation (compiling the kernel
>> for instance).
>>
>> Here's what I see in netconsole:
>>> ------------[ cut here ]------------
>>> kernel BUG at mm/slab.c:2629!
>>> invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
>> I'm attaching netconsole logs I got with kernel 3.6-rc1, which contain a
>> little more detail after the crash, but for some reason netconsole
>> didn't capture the full stack trace of the assertion. I caught a glimpse
>> of the console and saw that RIP was at cache_alloc_refill.
>>
> It only gets called from cache_alloc_refill().
>
> Looks like a problem in 072bb0aa5e0 ("mm: sl[au]b: add knowledge of
> PFMEMALLOC reserve pages"). cache_grow() can reenable irqs which allows
> this to be scheduled on a different cpu, possibly with a different node.
> So it turns out that we lock the wrong node's list_lock because we don't
> check the new node id when irqs are disabled again.
>
> I doubt you can reliably reproduce this, but the following should fix the
> issue.
Your patch did solve the issue. Thanks!
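
For anyone hitting this later, the race roughly looks like the sketch below.
This is only a simplified illustration in terms of the 3.6-era mm/slab.c
helpers (cache_alloc_refill(), cache_grow(), numa_mem_id(), nodelists[]),
not the actual patch:

	static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags,
					bool force_refill)
	{
		struct kmem_list3 *l3;
		int node = numa_mem_id();	/* node of the CPU we start on */
		...
		/*
		 * cache_grow() may reenable interrupts while it allocates
		 * pages; once irqs are on, this task can be preempted and
		 * migrated to a CPU on a different NUMA node before irqs
		 * are disabled again.
		 */
		x = cache_grow(cachep, flags | GFP_THISNODE, node, NULL);

		/*
		 * 'node' can now be stale, so it has to be re-read before
		 * the per-node list_lock is taken; otherwise the wrong
		 * nodelist gets locked and the invariants the BUG_ON checks
		 * no longer hold.
		 */
		node = numa_mem_id();
		l3 = cachep->nodelists[node];
		spin_lock(&l3->list_lock);
		...
	}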