Message-ID: <36bbf267-be27-4c9e-b782-91ed32a1dfe9@g1g2000pra.googlegroups.com>
Date: Sat, 5 Sep 2009 19:32:31 -0700 (PDT)
From: Ani <asinha@...gmasystems.com>
To: Lucas De Marchi <lucas.de.marchi@...il.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: question on sched-rt group allocation cap: sched_rt_runtime_us
On Sep 5, 3:50 pm, Lucas De Marchi <lucas.de.mar...@...il.com> wrote:
>
> Indeed. I've tested this same test program in a single core machine and it
> produces the expected behavior:
>
> rt_runtime_us / rt_period_us    % loops executed in SCHED_OTHER
> 95%                             4.48%
> 60%                             54.84%
> 50%                             86.03%
> 40%                             OTHER completed first
>
Hmm. This does seem to indicate that there is some kind of
relationship with SMP. So I wonder whether there is a way to turn this
'RT bandwidth accumulation' heuristic off. I did an
echo 0 > /proc/sys/kernel/sched_migration_cost
but the results were identical to before.
I figured that if I set it to zero, the regular sched-fair (non-RT)
tasks would be treated as not being cache hot and hence susceptible to
migration. From the code it looks like sched-rt tasks are always
treated as cache cold? Mind you, though, I have not yet looked into
the code very rigorously. I knew the O(1) scheduler relatively well,
but I have only just begun digging into the new CFS scheduler code.
On a side note, why is there no documentation explaining the
sched_migration_cost tuning knob? It would be nice to have some - at
least where the sysctl variable is defined.
--Ani