Message-ID: <AANLkTi=a91QX-FLURFWNsjbCS1bDLr6GqSpC4_uzm3ta@mail.gmail.com>
Date: Sat, 11 Sep 2010 13:48:29 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Tony Lindgren <tony@...mide.com>,
	Mike Galbraith <efault@....de>
Subject: Re: [RFC patch 1/2] sched: dynamically adapt granularity with nr_running
On Sat, Sep 11, 2010 at 1:36 PM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> From what I can make up:
>
> LAT=`cat /proc/sys/kernel/sched_latency_ns`;
> echo $((LAT/8)) > /proc/sys/kernel/sched_min_granularity_ns
>
> will give you pretty much the same result as Mathieu's patch.
Or perhaps not. The point being that Mathieu's patch seems to do this
dynamically based on number of runnable threads per cpu. Which seems
to be a good idea.
IOW, this part:
- if (delta_exec < sysctl_sched_min_granularity)
+ if (delta_exec < __sched_gran(cfs_rq->nr_running))
seems to be a rather fundamental change, and looks at least
potentially interesting. It seems to make conceptual sense to take the
number of running tasks into account at that point, no?
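
(For concreteness, here's a rough user-space sketch of that idea. The
names and constants below are purely illustrative and not taken from
Mathieu's patch; the point is only that the granularity delta_exec
gets compared against is derived from the number of runnable tasks
rather than read from a fixed sysctl.)

	/*
	 * Illustrative sketch only -- not the actual patch.  The
	 * preemption granularity shrinks as more tasks become
	 * runnable, so per-task latency stays bounded under load.
	 */
	#include <stdio.h>

	#define SCHED_LATENCY_NS        6000000ULL  /* assumed latency target */
	#define SCHED_MIN_GRAN_FLOOR_NS  750000ULL  /* assumed lower clamp */

	/* Hypothetical helper: latency target split among runnable tasks. */
	static unsigned long long sched_gran(unsigned int nr_running)
	{
		unsigned long long gran;

		if (nr_running == 0)
			nr_running = 1;
		gran = SCHED_LATENCY_NS / nr_running;
		if (gran < SCHED_MIN_GRAN_FLOOR_NS)
			gran = SCHED_MIN_GRAN_FLOOR_NS;
		return gran;
	}

	int main(void)
	{
		/* More runnable tasks -> the preemption check fires sooner. */
		for (unsigned int n = 1; n <= 16; n *= 2)
			printf("nr_running=%2u -> granularity=%llu ns\n",
			       n, sched_gran(n));
		return 0;
	}
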
And I don't like how you dismissed the measured latency improvement.
And yes, I do think latency matters. A _lot_.
And no, I'm not saying that Mathieu's patch is necessarily good. I
haven't tried it myself. I don't have _that_ kind of opinion. The
opinion I do have is that I think it's sad how you dismissed things
out of hand - and seem to _continue_ to dismiss them without
apparently actually having looked at the patch at all.
Linus