[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <1232304855.5908.40.camel@marge.simson.net>
Date: Sun, 18 Jan 2009 19:54:15 +0100
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [git pull] scheduler fixes
On Sun, 2009-01-18 at 16:28 +0100, Peter Zijlstra wrote:
> On Sun, 2009-01-18 at 15:08 +0100, Mike Galbraith wrote:
> > On Sat, 2009-01-17 at 13:00 +0100, Peter Zijlstra wrote:
> > > On Sat, 2009-01-17 at 11:34 +0100, Mike Galbraith wrote:
> > > > > Right, how about we flip the 'initial' case in place_entity() for !
> > > > > nr_exclusive wakeups.
> > > >
> > > > Wouldn't that be more drastic than sleep denial?
> > >
> > > Strictly speaking that DEBIT thing is valid for each wakeup, IIRC we
> > > restricted it to clone() only because that was where we could actually
> > > observe these latency spikes using a fork-bomb.
> > >
> > > This reduces the latency hits to around 400ms, which is about right for
> > > the given load.
> >
> > Disregarding the startup landmine for the moment, maybe we should put a
> > buddy slice knob in the user's hands, so they can tune latency, along
> > with a full on/off switch for those who care not one whit about
> > scalability.
>
> I'm not particularly fond of that idea because it only helps for this
> particular workload where any one task isn't actually running for very
> long.
Well, there are a lot of buddy loads where runtime is very short. You
can achieve ~similar latency control, but not as fine, by twiddling
wakeup latency. At least you can prevent the triple digit latency with
12 (and a half) pairs per core. I can imagine server loads not liking
600-700ms latencies. I was just thinking that with two knobs, you can
keep the benefits of relatively high wakeup granularity, yet have a user
tweakable throughput vs latency adjustment for buddies to complement
wakeup latency.
> If however your workload consists of cpu hogs, each will run for the
> full wakeup preemption 'slice' you now see these buddy pairs do.
Hm. I had a whack buddy tags if you are one at tick in there, but
removed it pending measurement. I was wondering if a last buddy hog
could end up getting the CPU back after having received his quanta and
being resched, but haven't checked that yet.
> Also, buddies really work for throughput, so I don't particularly fancy
> weakening them below what a normal cpu-hog can already do.
Yeah, I was thinking of the user who where latency rules, very select
group. If someone has a really latency sensitive load, they can turn
LAST_BUDDY off, but that doesn't help with the latency that NEXT_BUDDY
causes. Almost everyone really really wants buddies, but not all.
Buddies are wonderful for cache, wonderful for throughput, but are not
wonderful for latency, ergo the thought that maybe we should offer a
full off switch for those who know the consequences.
It was just a thought.
-Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists