Message-ID: <20100912203454.GC32327@Krystal>
Date: Sun, 12 Sep 2010 16:34:54 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tony Lindgren <tony@...mide.com>,
Mike Galbraith <efault@....de>
Subject: Re: [RFC patch 1/2] sched: dynamically adapt granularity with nr_running
* Peter Zijlstra (peterz@...radead.org) wrote:
> On Sat, 2010-09-11 at 13:48 -0700, Linus Torvalds wrote:
> > On Sat, Sep 11, 2010 at 1:36 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > From what I can make up:
> > >
> > > LAT=`cat /proc/sys/kernel/sched_latency_ns`;
> > > echo $((LAT/8)) > /proc/sys/kernel/sched_min_granularity_ns
> > >
> > > will give you pretty much the same result as Mathieu's patch.
> >
> > Or perhaps not. The point being that Mathieu's patch seems to do this
> > dynamically based on number of runnable threads per cpu. Which seems
> > to be a good idea.
> >
> > IOW, this part:
> >
> > - if (delta_exec < sysctl_sched_min_granularity)
> > + if (delta_exec < __sched_gran(cfs_rq->nr_running))
> >
> > seems to be a rather fundamental change, and looks at least
> > potentially interesting. It seems to make conceptual sense to take the
> > number of running tasks into account at that point, no?
>
> We used to have something like that a long while back, we nixed it
> because of the division and replaced it with floor(__sched_gran) (ie.
> the smallest value it would ever give).
>
> Smaller values are better for latency, larger values are better for
> throughput. So introducing __sched_gran() in order to provide larger
> values doesn't make sense to me.
__sched_gran() provides larger values for small nr_running, and smaller values
for large nr_running.

So for systems with few threads running, we keep good throughput (and there do
not seem to be many latency issues in that case). For a system with a larger
number of running threads, however, __sched_gran() dynamically reduces the
granularity.
>
> > And I don't like how you dismissed the measured latency improvement.
> > And yes, I do think latency matters. A _lot_.
>
> OK, we'll make it better and sacrifice some throughput, can do, no
> problem.
My approach tries to get lower latencies without sacrificing throughput when
few threads are running, which IMHO is the common case where throughput really
matters. A system running tons of threads should already expect some throughput
degradation anyway, so we might as well favor low latency over throughput on
those systems.
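To make that tradeoff concrete, here is a small sketch (constants assumed, tick
logic simplified) contrasting the fixed floor with the dynamic granularity for
a task that has run 2 ms when the tick fires:

#include <stdio.h>

#define LATENCY_NS	6000000ULL	/* sysctl_sched_latency */
#define FIXED_GRAN_NS	750000ULL	/* fixed min granularity (assumed) */
#define DYN_FLOOR_NS	750000ULL	/* assumed floor for the dynamic case */

static unsigned long long dyn_gran(unsigned long nr)
{
	unsigned long long g = LATENCY_NS / (nr ? nr : 1);

	return g < DYN_FLOOR_NS ? DYN_FLOOR_NS : g;
}

int main(void)
{
	const unsigned long long delta_exec = 2000000ULL;	/* ran 2 ms at tick */
	unsigned long nr;

	for (nr = 1; nr <= 16; nr++)
		printf("nr=%2lu  fixed floor: %-12s  dynamic: %s\n", nr,
		       delta_exec < FIXED_GRAN_NS ? "keep running" : "may preempt",
		       delta_exec < dyn_gran(nr) ? "keep running" : "may preempt");
	return 0;
}

With the fixed floor the task is always preemptible after 2 ms; with the
dynamic granularity it keeps running while only one or two tasks compete
(throughput), and becomes preemptible once three or more are runnable (latency).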
>
> > And no, I'm not saying that Mathieu's patch is necessarily good. I
> > haven't tried it myself. I don't have _that_ kind of opinion. The
> > opinion I do have is that I think it's sad how you dismissed things
> > out of hand - and seem to _continue_ to dismiss them without
> > apparently actually having looked at the patch at all.
>
> Let me draw you a picture of what this patch looks like to me:
>
> * is slice length, + is period length
>
> Patch (sched_latency = 10, sched_min_gran = 10/3)
Hrm, in sched_fair.c, sysctl_sched_latency is set to 6000000ULL, so this would
be a sched_latency of 6, or am I missing something? With a sched_latency of 6,
the jump you show below in your graph disappears. So what have I missed?

I agree with the shape of your graph; I'm just trying to understand why you
used a sched_latency of 10.
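For reference, here are the numbers I am working from: the stock period
computation as I read sched_fair.c (paraphrased, not copied), evaluated for
both parameter sets:

#include <stdio.h>

/* period = latency while nr_running <= latency / min_gran,
 * else nr_running * min_gran. */
static unsigned long long sched_period(unsigned long nr,
				       unsigned long long latency,
				       unsigned long long min_gran)
{
	unsigned long long nr_latency = latency / min_gran;

	if (nr > nr_latency)
		return nr * min_gran;
	return latency;
}

int main(void)
{
	unsigned long nr;

	for (nr = 1; nr <= 16; nr++)
		printf("nr=%2lu  lat=10ms gran=10/3ms: %8llu ns  "
		       "lat=6ms gran=2ms: %8llu ns\n", nr,
		       sched_period(nr, 10000000ULL, 3333333ULL),
		       sched_period(nr, 6000000ULL, 2000000ULL));
	return 0;
}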
Thanks,
Mathieu
>
>
> 30 | +
> |
> |
> | +
> |
> |
> |
> |
> |
> |
> 20 |
> |
> |
> |
> |
> |
> |
> |
> |
> |
> 10 | * + + + + + + +
> |
> |
> |
> |
> | *
> |
> | * * * * * * * *
> | * *
> | * *
> 0 +---------------------------------------------------------
> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
>
>
> Normal (sched_latency = 10, sched_min_gran = 10/3)
>
>
> 30 | +
> |
> |
> | +
> |
> |
> |
> | +
> |
> |
> 20 | +
> |
> |
> | +
> |
> |
> |
> | +
> |
> |
> 10 | * + +
> |
> |
> |
> |
> | *
> |
> | * * * * * * * * * * * * * *
> |
> |
> 0 +---------------------------------------------------------
> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
>
>
>
> Normal (sched_latency = 10, sched_min_gran = 10/8)
>
> 30 |
> |
> |
> |
> |
> |
> |
> |
> |
> |
> 20 |
> |
> |
> |
> |
> | +
> | +
> | +
> |
> | +
> 10 | * + + + + + + +
> |
> |
> |
> |
> | *
> |
> | * *
> | * *
> | * * * * * * * *
> 0 +---------------------------------------------------------
> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
>
>
>
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com