Message-ID: <20070812051734.GB18291@elte.hu>
Date: Sun, 12 Aug 2007 07:17:34 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Willy Tarreau <w@....eu>
Cc: Roman Zippel <zippel@...ux-m68k.org>,
Michael Chang <thenewme91@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Mike Galbraith <efault@....de>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: CFS review
* Willy Tarreau <w@....eu> wrote:
> > 1. Two simple busy loops, one of them is reniced to 15, according to
> > my calculations the reniced task should get about 3.4%
> > (1/(1.25^15+1)), but I get this:
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > 4433 roman 20 0 1532 300 244 R 99.2 0.2 5:05.51 l
> > 4434 roman 35 15 1532 72 16 R 0.7 0.1 0:10.62 l
>
> Could this be caused by typos in some tables like you have found in
> wmult ?
Note that the typo was not in the weight table but in the inverse weight
table, which didn't really affect CPU utilization (that's why we didn't
notice the typo sooner). Regarding the above problem of nice +15 getting
more CPU time than intended, I'd suggest re-testing with a doubled
/proc/sys/kernel/sched_runtime_limit value, or with:

  echo 30 > /proc/sys/kernel/sched_features

(which turns off sleeper fairness)
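
For reference, a minimal sketch of how that 3.4% figure falls out of the
weight table. The table values here are an assumption, taken from the
mainline prio_to_weight[] array in kernel/sched.c (nice 0 is weight 1024
and each nice level scales the weight by roughly 1.25), not quoted from
this thread:

  /* share.c - expected CPU share of a nice-15 task vs. a nice-0 task.
   * Weight values assumed from mainline kernel/sched.c:
   * nice 0 -> 1024, nice 15 -> 36 (~1024 / 1.25^15). */
  #include <stdio.h>

  #define NICE_0_WEIGHT   1024UL
  #define NICE_15_WEIGHT    36UL

  int main(void)
  {
          /* Two runnable tasks: each one's share is w / sum(w). */
          double share = (double)NICE_15_WEIGHT /
                         (double)(NICE_15_WEIGHT + NICE_0_WEIGHT);
          printf("expected nice-15 share: %.1f%%\n", 100.0 * share);

          /* The inverse table only caches 2^32/weight so the
           * scheduler can multiply instead of divide when scaling
           * delta_exec - a typo there skews the accounting math,
           * not the share computed above. */
          printf("wmult for nice 15: %llu\n",
                 (1ULL << 32) / NICE_15_WEIGHT);
          return 0;
  }

This prints "expected nice-15 share: 3.4%", matching the figure Roman
computed from 1/(1.25^15+1).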
> > 4477 roman 5 -15 1532 68 16 R 8.6 0.1 0:07.63 l
> > 4476 roman 5 -15 1532 68 16 R 9.6 0.1 0:07.38 l
> > 4475 roman 5 -15 1532 68 16 R 1.3 0.1 0:07.09 l
> > 4474 roman 5 -15 1532 68 16 R 2.3 0.1 0:07.97 l
> > 4473 roman 5 -15 1532 296 244 R 1.0 0.2 0:07.73 l
>
> Do you see this only at -15, or starting with -15 and below?
I think this was scheduling jitter caused by the larger granularity of
negatively reniced tasks. This has improved recently; with the latest
-git I get:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3108 root 5 -15 1576 248 196 R 5.0 0.0 0:07.26 loop_silent
3109 root 5 -15 1576 248 196 R 5.0 0.0 0:07.26 loop_silent
3110 root 5 -15 1576 248 196 R 5.0 0.0 0:07.26 loop_silent
3111 root 5 -15 1576 244 196 R 5.0 0.0 0:07.26 loop_silent
3112 root 5 -15 1576 248 196 R 5.0 0.0 0:07.26 loop_silent
3113 root 5 -15 1576 248 196 R 5.0 0.0 0:07.26 loop_silent
That's picture-perfect CPU time distribution. But, to be fair, I had
never run such an artificial workload of 20x nice -15 infinite loops (!)
before, and boy does interactivity suck (as expected) ;)
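
(The busy loop itself is trivial. A sketch of what the loop_silent
binary in the top output above might look like; its actual source was
not posted in this thread:

  /* loop_silent.c - burn CPU forever, no output, no syscalls.
   * Build:  gcc -O2 -o loop_silent loop_silent.c
   * Run 20 copies at nice -15 (negative nice needs root):
   *   for i in $(seq 1 20); do nice -n -15 ./loop_silent & done */
  int main(void)
  {
          volatile unsigned long n = 0;   /* volatile keeps -O2 from
                                           * optimizing the loop away */
          for (;;)
                  n++;
          return 0;
  }
)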
Ingo