Message-ID: <AANLkTimmkmeyFg6wSw4Ej=gqLAHgcB5puRxvAa_14eZp@mail.gmail.com>
Date: Sat, 11 Sep 2010 13:52:40 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tony Lindgren <tony@...mide.com>,
Mike Galbraith <efault@....de>
Subject: Re: [RFC patch 1/2] sched: dynamically adapt granularity with nr_running
On Sat, Sep 11, 2010 at 1:45 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Sat, 2010-09-11 at 22:36 +0200, Peter Zijlstra wrote:
>>
>> But if you want us to change the scheduler to be more latency sensitive
>> and trade in throughput for other benchmarks, we can do that.
>
> Really, just say "latency trumps throughput" and we'll make it so.
Nothing is ever that black-and-white.
But latency really _is_ important. And it's often overlooked, because
few benchmarks actually test it. So when somebody sends you actual
measured latency numbers, you shouldn't be so cavalier. And you
shouldn't say "trumps throughput", since it's clearly a matter of
balancing, and quite frankly, Mathieu's patch does seem to try to
balance things.
As mentioned, it does seem to make tons of conceptual sense to take
the number of running threads into account for the whole scheduling
granularity decision. After all, we already do that for the other
important numbers (the scheduling period and time slice).
So to me it looks like you're just being negative, without actually
looking at the patch and giving it some fair thought. That's what I'm
objecting to.
Linus