Message-ID: <20070501172625.GP26598@holomorphy.com>
Date: Tue, 1 May 2007 10:26:25 -0700
From: Bill Irwin <bill.irwin@...cle.com>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Bill Irwin <bill.irwin@...cle.com>, linux-kernel@...r.kernel.org,
	bunk@...sta.de, akpm@...l.org, gcoady@...il.com, zlynx@....org,
	dgc@....com, alan@...rguk.ukuu.org.uk, andi@...stfloor.org,
	hch@...radead.org, jengelh@...ux01.gwdg.de, zwane@...radead.org,
	neilb@...e.de, jens.axboe@...cle.com, eric@...venscaling.com,
	wli@...omorphy.com
Subject: Re: [1/3] dynamically allocate IRQ stacks

At some point in the past, I wrote:
>> +static void * __init __alloc_irqstack(int cpu)
>> +{
>> +	if (!cpu)
>> +		return __alloc_bootmem(THREAD_SIZE, THREAD_SIZE,
>> +				       __pa(MAX_DMA_ADDRESS));
>> +
>> +	return (void *)__get_free_pages(GFP_KERNEL,
>> +					ilog2(THREAD_SIZE/PAGE_SIZE));
>> +}

On Tue, May 01, 2007 at 07:04:26PM +0200, Heiko Carstens wrote:
> I think you should test for slab_is_available() instead of checking
> if the cpu number is 0.

Plausible, though it would appear somewhat incongruous given that no
direct calls to slab functions are made. The timing should still come
out right, since the vmalloc-space variant needs to run late enough
for mm/vmalloc.c's internal slab allocations to work.
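
Concretely, with your change the allocator ends up looking something
like the sketch below (rough, from memory; the authoritative version
is in the attached patch, and note the cpu argument becomes unused):

static void * __init __alloc_irqstack(int cpu)
{
	/*
	 * Fall back to bootmem only while the slab and page
	 * allocators are not yet up, rather than keying off
	 * cpu == 0 as the earlier version did.
	 */
	if (!slab_is_available())
		return __alloc_bootmem(THREAD_SIZE, THREAD_SIZE,
				       __pa(MAX_DMA_ADDRESS));

	return (void *)__get_free_pages(GFP_KERNEL,
					ilog2(THREAD_SIZE/PAGE_SIZE));
}

The timing works out because cpu 0's stack is allocated before the
page allocator is up, while the secondary cpus' stacks are allocated
well after slab initialization, so the two tests currently coincide.
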
The patches end up looking like the following MIME attachments with
your suggested change.

-- wli

View attachment "dynamic-irq-stacks.patch" of type "text/plain" (4143 bytes)
View attachment "unconditional-i386-irq-stacks.patch" of type "text/plain" (2488 bytes)
View attachment "debug-stack.patch" of type "text/plain" (9296 bytes)