Message-Id: <1201460328.5092.95.camel@homer.simson.net>
Date: Sun, 27 Jan 2008 19:58:48 +0100
From: Mike Galbraith <efault@....de>
To: Toralf Förster <toralf.foerster@....de>
Cc: Sam Ravnborg <sam@...nborg.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Subject: Re: (ondemand) CPU governor regression between 2.6.23 and 2.6.24
On Sun, 2008-01-27 at 13:39 +0100, Toralf Förster wrote:
> On Sunday, 27 January 2008, you wrote:
> >
> > On Sun, 2008-01-27 at 12:00 +0100, Toralf Förster wrote:
> > > BTW, the dnetc process runs under the user "dnetc" with nice level -19,
> > > and my process runs under my own user id "tfoerste", so I wouldn't expect
> > > both processes to get the same processor resources, would I? :
> >
> > Normal. Nice level controls cpu distribution _within_ a task group,
> > whereas distribution between groups is controlled by cpu_share. It's
> > going to take a while for folks to get used to having two levels of cpu
> > distribution.
>
> Ouch, does this mean that for a multi-user scenario of 2 non-root users "A"
> and "B", each running exactly 1 process at nice level 0 and 19 respectively,
> both share ~50% of the CPU, *and furthermore* that user "B" never has a
> chance to be nice to user "A", although his process should really use only
> those CPU cycles not eaten by any other user?
Yes. If you want one task group to receive fewer cpu cycles, you have to
'nice' that task group by reducing its share.
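
(To make the two levels concrete, here is a minimal C sketch. It assumes a
2.6.24 kernel built with CONFIG_FAIR_USER_SCHED, where each uid's group
weight is exposed as /sys/kernel/uids/<uid>/cpu_share; the sysfs path, the
uid 1000, and the 1024 default weight are assumptions to check against
Documentation/sched-design-CFS.txt on your kernel.)

	/*
	 * Sketch only: within-group vs. between-group cpu weighting.
	 * Assumes CONFIG_FAIR_USER_SCHED and the /sys/kernel/uids/
	 * interface from 2.6.24; uid 1000 is a placeholder.
	 */
	#include <stdio.h>
	#include <sys/resource.h>

	int main(void)
	{
		/* Within a group: classic nice. This only shifts cycles
		 * relative to other tasks in the same uid's group. */
		if (setpriority(PRIO_PROCESS, 0, 19))
			perror("setpriority");

		/* Between groups: shrink this uid's share so the whole
		 * group is "niced" against other users (default weight
		 * is typically 1024; writing needs root). */
		FILE *f = fopen("/sys/kernel/uids/1000/cpu_share", "w");
		if (f) {
			fprintf(f, "102\n");	/* ~1/10 of default */
			fclose(f);
		} else {
			perror("fopen cpu_share");
		}
		return 0;
	}
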
> If the answer is yes, what about extending the current behaviour to
> (optionally) consider the nice levels of running processes when
> CONFIG_FAIR_GROUP_SCHED is set?
I think it's better to just disable fair group scheduling if it doesn't
suit your needs. It's not going to be everyone's cup of tea.
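
(For completeness: disabling it means rebuilding the kernel with the group
scheduler switched off. A .config fragment, option name as in the 2.6.24
tree, under "General setup":)

	# Turning off group scheduling makes nice levels global again,
	# as they were in 2.6.23.
	# CONFIG_FAIR_GROUP_SCHED is not set
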
-Mike