Message-ID: <48C53D20.604@sgi.com>
Date: Mon, 08 Sep 2008 07:56:32 -0700
From: Mike Travis <travis@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, davej@...emonkey.org.uk,
David Miller <davem@...emloft.net>,
Eric Dumazet <dada1@...mosbay.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Jack Steiner <steiner@....com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Jes Sorensen <jes@....com>, "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC 07/13] sched: Reduce stack size requirements in kernel/sched.c
Peter Zijlstra wrote:
> On Sun, 2008-09-07 at 04:00 -0700, Andrew Morton wrote:
>> On Sun, 07 Sep 2008 12:24:47 +0200 Peter Zijlstra <peterz@...radead.org> wrote:
>>
>>> get_online_cpus() can sleep, but you just disabled preemption with those
>>> get_cpumask_var() horribles!
>> make cpu_hotplug.refcount an atomic_t.
>
> A much easier fix is just re-ordering those operations and doing the
> get_online_cpus() before disabling preemption. But it does indicate that
> this patch series isn't carefully constructed.
Yes, it's mostly a hunt for comments on my part... ;-)
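
Something like the following is what I take you to mean -- a rough sketch
only, with a hypothetical per-cpu temp_cpumask standing in for the series'
temporaries rather than the actual get_cpumask_var() helpers.  The point is
just the ordering: get_online_cpus() can sleep, so it has to come before
get_cpu_var() disables preemption:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>

/* hypothetical per-cpu temporary, not the RFC's actual variable */
static DEFINE_PER_CPU(cpumask_t, temp_cpumask);

static void build_mask_example(void)
{
	cpumask_t *tmp;

	get_online_cpus();			/* may sleep, so take it first */

	tmp = &get_cpu_var(temp_cpumask);	/* disables preemption */
	cpus_and(*tmp, cpu_online_map, cpu_possible_map);
	/* ... use *tmp ... */
	put_cpu_var(temp_cpumask);		/* re-enables preemption */

	put_online_cpus();
}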
>
>>> Couldn't be arsed to look through the rest, but I really hate this
>>> cpumask_ptr() stuff that relies on disabling preemption.
>> that's harder to fix ;)
>
> Looking at more patches than just the sched one convinced me more that
> this approach isn't a good one. It seems to make code much more
> fragile.
>
> See patch 9, there it was needed to draw out the callgraph in order to
> map stuff to these global variables - we're adding global dependencies
> to code that didn't have any, increasing complexity.
Again, yes, as I got farther into that one, it became clear that having
static cpumask_t temps over too large a range was ending up very messy.
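
For what it's worth, the contrast I keep running into looks roughly like
the sketch below -- hypothetical names, not code from the series, and it
assumes an alloc_cpumask_var()/cpumask_var_t style interface is available.
A shared static temporary drags its locking and preemption rules across the
whole call graph, while a locally allocated temporary keeps the dependency
inside one function:

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/* shared static temporary: every user in the call graph must coordinate */
static cpumask_t shared_tmp_mask;

/* locally allocated temporary: no dependency outside this function */
static int localized_mask_example(void)
{
	cpumask_var_t tmp;

	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
		return -ENOMEM;

	cpumask_and(tmp, cpu_online_mask, cpu_possible_mask);
	/* ... use tmp ... */

	free_cpumask_var(tmp);
	return 0;
}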
Thanks,
Mike