Message-ID: <4A15B8F2.9000804@kernel.org>
Date: Thu, 21 May 2009 13:26:26 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Paul Mundt <lethal@...ux-sh.org>, Ingo Molnar <mingo@...e.hu>,
Yinghai Lu <yinghai@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>
CC: linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sparseirq: Enable early irq_desc allocation.
Paul Mundt wrote:
> Presently non-legacy IRQs have their irq_desc allocated with
> kzalloc_node(). This assumes that all callers of irq_to_desc_alloc_cpu()
> will be sufficiently late in the boot process that kmalloc is available.
>
> While porting sparseirq support to sh this blew up immediately, as at the
> time that we register the CPU's interrupt vector map only bootmem is
> available.
>
> This adds in a simple after_bootmem check to see where the allocation
> needs to come from, which is likewise provided by all of the platforms
> that support sparse irq today. :-)
>
> Cc: Yinghai Lu <yinghai@...nel.org>
> Cc: Ingo Molnar <mingo@...e.hu>
> Signed-off-by: Paul Mundt <lethal@...ux-sh.org>
>
> ---
>
> kernel/irq/handle.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
> index d82142b..5fb3a5c 100644
> --- a/kernel/irq/handle.c
> +++ b/kernel/irq/handle.c
> @@ -19,7 +19,7 @@
> #include <linux/hash.h>
> #include <trace/irq.h>
> #include <linux/bootmem.h>
> -
> +#include <linux/mm.h>
> #include "internals.h"
>
> /*
> @@ -81,13 +81,17 @@ static struct irq_desc irq_desc_init = {
> .lock = __SPIN_LOCK_UNLOCKED(irq_desc_init.lock),
> };
>
> -void init_kstat_irqs(struct irq_desc *desc, int cpu, int nr)
> +void __ref init_kstat_irqs(struct irq_desc *desc, int cpu, int nr)
> {
> int node;
> void *ptr;
>
> node = cpu_to_node(cpu);
> - ptr = kzalloc_node(nr * sizeof(*desc->kstat_irqs), GFP_ATOMIC, node);
> + if (after_bootmem)
> + ptr = kzalloc_node(nr * sizeof(*desc->kstat_irqs), GFP_ATOMIC, node);
> + else
> + ptr = alloc_bootmem_node(NODE_DATA(node),
> + nr * sizeof(*desc->kstat_irqs));
>
> /*
> * don't overwite if can not get new one
> @@ -187,7 +191,7 @@ struct irq_desc *irq_to_desc(unsigned int irq)
> return NULL;
> }
>
> -struct irq_desc *irq_to_desc_alloc_cpu(unsigned int irq, int cpu)
> +struct irq_desc * __ref irq_to_desc_alloc_cpu(unsigned int irq, int cpu)
> {
> struct irq_desc *desc;
> unsigned long flags;
> @@ -211,7 +215,10 @@ struct irq_desc *irq_to_desc_alloc_cpu(unsigned int irq, int cpu)
> goto out_unlock;
>
> node = cpu_to_node(cpu);
> - desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
> + if (after_bootmem)
> + desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
> + else
> + desc = alloc_bootmem_node(NODE_DATA(node), sizeof(*desc));
> printk(KERN_DEBUG " alloc irq_desc for %d on cpu %d node %d\n",
> irq, cpu, node);
> if (!desc) {
Can you check tip? We already changed _cpu to _node there.
Also, only sh has after_bootmem now:
arch/sh/mm/init.c:int after_bootmem = 0;
arch/sh/mm/init.c: after_bootmem = 1;
arch/sh/mm/ioremap_64.c: extern int after_bootmem;
arch/sh/mm/ioremap_64.c: if (after_bootmem) {
include/linux/mm.h:extern int after_bootmem;
For x86 we have bootmem_state instead:
arch/x86/include/asm/page_types.h:enum bootmem_state {
arch/x86/include/asm/page_types.h:extern enum bootmem_state bootmem_state;
arch/x86/kernel/setup.c: bootmem_state = DURING_BOOTMEM;
arch/x86/mm/init.c:enum bootmem_state bootmem_state = BEFORE_BOOTMEM;
arch/x86/mm/init.c: if (bootmem_state == BEFORE_BOOTMEM)
arch/x86/mm/init.c: if (bootmem_state == BEFORE_BOOTMEM)
arch/x86/mm/init.c: if (bootmem_state == BEFORE_BOOTMEM && !start) {
arch/x86/mm/init.c: if (bootmem_state == BEFORE_BOOTMEM &&
arch/x86/mm/init.c: if (bootmem_state == BEFORE_BOOTMEM)
Andrew,
do we need to move bootmem_state back to linux/mm.h?
YH