Message-Id: <1220780098.8687.40.camel@twins.programming.kicks-ass.net>
Date: Sun, 07 Sep 2008 11:34:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ulrich Drepper <drepper@...il.com>
Cc: Arjan van de Ven <arjan@...radead.org>,
Phil Endecott <phil_wueww_endecott@...zphil.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: nice and hyperthreading on atom

On Sat, 2008-09-06 at 11:30 -0700, Ulrich Drepper wrote:
> On Sat, Sep 6, 2008 at 9:42 AM, Arjan van de Ven <arjan@...radead.org> wrote:
> > As an OS one COULD decide to just not schedule the nice task at all,
> > but then, especially on atom where HT has a high efficiency, your cpu
> > is mostly idle ...
>
> One thread being idle is even on Atom the right thing to do in some
> situations. If you have processes which, when HT is used, experience
> high pressure on the common cache(s) then you should not schedule them
> together. We can theoretically find out whether this is the case
> using the PMCs. With perfmon2 hopefully on the horizon soon it might
> actually be possible to automatically make these measurements.
>
> There is another aspect I talked to Peter about already. We really
> want concurrent thread scheduling in some cases. For the
> implementation of helper threads you don't want two threads to be
> scheduled independently, you want them to be scheduled on a HT pair.
> Currently this isn't possible except by pinning them to fixed threads.
> We really want to have a new way to express this type of scheduling
> (Peter, how did you call it?)

Really helps if you make sure I'm on the CC ;-)

I think I called it something like affinity grouping - but I'm still a
bit scared of the bin-packing issues involved - those will really mess
up the already complex balancing rules.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/