Message-ID: <20090209083739.GB15517@elte.hu>
Date: Mon, 9 Feb 2009 09:37:39 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Yinghai Lu <yinghai@...nel.org>, tglx@...utronix.de, hpa@...or.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] irq: optimize init_kstat_irqs/init_copy_kstat_irqs
* Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Mon, 9 Feb 2009 09:11:24 +0100 Ingo Molnar <mingo@...e.hu> wrote:
>
> >
> > * Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > > On Sat, 7 Feb 2009 01:01:03 -0800 Yinghai Lu <yinghai@...nel.org> wrote:
> > >
> > > >
> > > > add kzalloc_node_safe()?
> > >
> > > I cannot find that function.
> >
> > His suggestion is to provide that allocator variant.
> >
>
> Oh.
>
> It isn't possible to write a kzalloc_node_safe(GFP_ATOMIC). Or at
> least, we've never worked out a way.
>
> Maybe I'm confused again.

Indeed - duh - more morning tea needed.

Yinghai, why are those allocations GFP_ATOMIC to begin with? These:

earth4:~/tip> grep GFP_ATOMIC kernel/irq/*.c
kernel/irq/handle.c: ptr = kzalloc_node(nr * sizeof(*desc->kstat_irqs), GFP_ATOMIC, node);
kernel/irq/handle.c: desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
kernel/irq/manage.c: action = kmalloc(sizeof(struct irqaction), GFP_ATOMIC);

These should all be GFP_KERNEL. Wherever they sit within a spinlocked section,
the code should be restructured. All descriptor data structures should be
preallocated at __setup_irq() time. If we ever need to allocate dynamically
later on, in the middle of some difficult codepath, that's a structural bug
in the code.

and this one:

kernel/irq/numa_migrate.c: desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);

should fail the migration silently if the GFP_ATOMIC allocation returns NULL.

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/