Message-ID: <1325834560.6233.62.camel@marge.simson.net>
Date: Fri, 06 Jan 2012 08:22:40 +0100
From: Mike Galbraith <efault@....de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>,
linux-next@...r.kernel.org, linux-kernel@...r.kernel.org,
Dimitri Sivanich <sivanich@....com>,
Kay Sievers <kay.sievers@...y.org>, Greg KH <greg@...ah.com>
Subject: Re: linux-next: build failure after merge of the akpm tree
On Thu, 2012-01-05 at 15:52 -0800, Andrew Morton wrote:
> On Thu, 5 Jan 2012 18:29:49 +1100
> Stephen Rothwell <sfr@...b.auug.org.au> wrote:
>
> > Hi Andrew,
> >
> > After merging the akpm tree, today's linux-next build (powerpc
> > ppc64_defconfig) failed like this:
> >
> > kernel/time/tick-sched.c:874:7: warning: 'struct sysdev_attribute' declared inside parameter list [enabled by default]
> >
> > ...
> >
> > Caused by commit 629d589817da ("tick-sched: add specific do_timer_cpu
> > value for nohz off mode") interacting with the removal of sysdevs in the
> > driver-core tree. This patch will need reworking for that.
> >
> > I have reverted that commit for today.
>
> Bah. I dropped it.
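FWIW, reworking it for the sysdev removal would presumably mean moving the
new attribute over to the plain struct device / struct device_attribute
interface.  Untested sketch only (handler names and the write-side checks
are made up, and registration of the attribute is left out):

	static ssize_t do_timer_cpu_show(struct device *dev,
					 struct device_attribute *attr,
					 char *buf)
	{
		return sprintf(buf, "%d\n", tick_do_timer_cpu);
	}

	static ssize_t do_timer_cpu_store(struct device *dev,
					  struct device_attribute *attr,
					  const char *buf, size_t count)
	{
		int cpu;

		if (kstrtoint(buf, 0, &cpu) || !cpu_online(cpu))
			return -EINVAL;
		tick_do_timer_cpu = cpu;
		return count;
	}

	static DEVICE_ATTR(do_timer_cpu, 0644, do_timer_cpu_show,
			   do_timer_cpu_store);
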
Hm. I was looking at that patch, and wondering if it wouldn't be better
to twiddle cpusets to allow the user to tell the scheduler and friends
that certain CPUs are being used for HPC instead. We do isolation there
now, letting the user nuke scheduler domains, so it looks like the right
spot to extend isolation.
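
To be concrete about the "nuke scheduler domains" part, that's the existing
cpu_exclusive/sched_load_balance knobs.  Rough example of how it's done
today, assuming the v1 cpuset controller is mounted at /sys/fs/cgroup/cpuset
(set name and CPU/mem numbers are made up):

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>

	static void put(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f || fputs(val, f) == EOF) {
			perror(path);
			exit(1);
		}
		fclose(f);
	}

	int main(void)
	{
		/* carve out an exclusive set for CPU 3 and drop it out of
		 * the scheduler domains */
		mkdir("/sys/fs/cgroup/cpuset/rt_iso", 0755);
		put("/sys/fs/cgroup/cpuset/rt_iso/cpuset.cpus", "3");
		put("/sys/fs/cgroup/cpuset/rt_iso/cpuset.mems", "0");
		put("/sys/fs/cgroup/cpuset/rt_iso/cpuset.cpu_exclusive", "1");
		put("/sys/fs/cgroup/cpuset/rt_iso/cpuset.sched_load_balance", "0");
		/* the root set must stop balancing too, or the top level
		 * domain still spans everything */
		put("/sys/fs/cgroup/cpuset/cpuset.sched_load_balance", "0");
		return 0;
	}
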
I'm trying that out, because I found that the rt push/pull logic adds
useless overhead and jitter to isolated/pinned rt loads. We can let
exclusive cpusets turn nohz and/or rt push/pull on and off, and a set could
also be immunized from becoming the jiffies maintainer, or from any other
unsavory and not 100% required duty, at the cost of a per-CPU flag read.
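
That flag read really is trivial.  Something along these lines is what I'm
thinking of (pure sketch, the per-cpu flag is made up; the
tick_do_timer_cpu/TICK_DO_TIMER_NONE handover in kernel/time/tick-sched.c
is where it would plug in):

	/* set/cleared by cpuset code for CPUs in an "isolated" set */
	static DEFINE_PER_CPU(bool, timer_duty_opt_out);

	/* where an idle/nohz CPU would otherwise volunteer to take over
	 * jiffies maintenance */
	if (unlikely(tick_do_timer_cpu == TICK_DO_TIMER_NONE) &&
	    !per_cpu(timer_duty_opt_out, cpu))
		tick_do_timer_cpu = cpu;
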
-Mike