Message-Id: <1234568885.4831.14.camel@laptop>
Date: Sat, 14 Feb 2009 00:48:05 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Rusty Russell <rusty@...tcorp.com.au>
Cc: Ingo Molnar <mingo@...e.hu>,
Frederic Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...x.de>,
LKML <linux-kernel@...r.kernel.org>,
rt-users <linux-rt-users@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Carsten Emde <ce@...g.ch>, Clark Williams <williams@...hat.com>
Subject: Re: [patch] generic-ipi: remove kmalloc, cleanup
On Sat, 2009-02-14 at 07:46 +1030, Rusty Russell wrote:
> On Thursday 12 February 2009 22:39:14 Peter Zijlstra wrote:
> > So let's put it in unconditionally, how about this?
> >
> >
> > --
> > Subject: generic-smp: remove single ipi fallback for smp_call_function_many()
> >
> > In preparation for removing the kmalloc() calls from the generic-ipi code,
> > get rid of the single-IPI fallback for smp_call_function_many().
> >
> > Because we cannot get around carrying the cpumask in the data -- imagine
> > two such calls in flight with different but overlapping masks -- embed a
> > full mask.
>
> OK, if you really want this, please just change it to:
> unsigned long cpumask_bits[BITS_TO_LONGS(CONFIG_NR_CPUS)];
>
> The 'struct cpumask' will soon be undefined when CONFIG_CPUMASK_OFFSTACK=y,
> which will prevent both assignment and on-stack declaration.
>
> I'd be fascinated to see perf numbers once you kill the kmalloc(), because
> this patch will add num_possible_cpus() * NR_CPUS/8 bytes to the kernel,
> which is something we're trying to avoid unless necessary.
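The raw-bits version would look something like this (sketch only; the
surrounding fields are illustrative, not the actual kernel/smp.c layout):

	struct call_function_data {
		struct call_single_data csd;
		spinlock_t lock;
		unsigned int refs;
		unsigned long cpumask_bits[BITS_TO_LONGS(CONFIG_NR_CPUS)];
	};

	/* convert back wherever a struct cpumask * is wanted */
	static inline struct cpumask *cfd_mask(struct call_function_data *cfd)
	{
		return to_cpumask(cfd->cpumask_bits);
	}

That sidesteps the CONFIG_CPUMASK_OFFSTACK=y problem, since we never declare
or assign a struct cpumask directly.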
You're free to make it a pointer, do node-affine allocations from an init
section of your choice, and add a hotplug handler.

But I'm not quite sure how perf would be affected by the size overhead on
such ridiculous configs.
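Something like this, if anybody cares to try (untested sketch; the names are
made up and the hotplug handler is left as an exercise):

	/* one node-affine allocation per possible cpu; illustrative only */
	static struct call_function_data *cfd_data[NR_CPUS] __read_mostly;

	static int __init cfd_alloc(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			cfd_data[cpu] = kmalloc_node(sizeof(*cfd_data[cpu]),
						     GFP_KERNEL,
						     cpu_to_node(cpu));
		return 0;
	}
	early_initcall(cfd_alloc);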