Message-ID: <20070726223455.GK3318@linux-os.sc.intel.com>
Date: Thu, 26 Jul 2007 15:34:56 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: "Siddha, Suresh B" <suresh.b.siddha@...el.com>, npiggin@...e.de,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [patch] sched: introduce SD_BALANCE_FORK for ht/mc/smp domains
On Fri, Jul 27, 2007 at 12:18:30AM +0200, Ingo Molnar wrote:
>
> * Siddha, Suresh B <suresh.b.siddha@...el.com> wrote:
>
> > Introduce SD_BALANCE_FORK for HT/MC/SMP domains.
> >
> > For HT/MC, as caches are shared, SD_BALANCE_FORK is the right thing
> > to do. Given that the NUMA domain already has this flag, and that the
> > scheduler currently has no concept of keeping the threads of a
> > process as close together as possible (i.e., fork-time balancing may
> > place them close, but periodic balancing later will likely move them
> > apart), introduce SD_BALANCE_FORK for the SMP domain too.
> >
> > Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
>
> i'm not opposed to this fundamentally, but it would be nice to better
> map the effects of this change: do you have any particular workload
> under which you've tested this and seen it make a difference? I'd
> expect this to improve fork-intensive, half-idle workloads perhaps -
> things like a make -j3 on a 4-core CPU.
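For context, the change itself is essentially a one-flag addition: it sets
SD_BALANCE_FORK in the HT/MC/SMP sched-domain initializers, the same way
the NUMA domain already does. A rough sketch against the usual
initializers in include/linux/topology.h (abridged and illustrative; the
exact field layout varies by kernel version and architecture):

 #define SD_CPU_INIT (struct sched_domain) {		\
 	...						\
 	.flags			= SD_LOAD_BALANCE	\
 				| SD_BALANCE_NEWIDLE	\
+				| SD_BALANCE_FORK	\
 				| SD_BALANCE_EXEC	\
 				| SD_WAKE_AFFINE,	\
 	...						\
 }

SD_SIBLING_INIT and SD_MC_INIT would get the same one-line addition.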
Workloads like make -j3 might be doing more exec's than forks, and those
would probably be covered by exec balancing.
There was a small pthread test case that measured the time to create
all the threads and how long each thread took to start running. The
threads appeared to run sequentially, one after another, on a DP system
with four cores, which led to this SD_BALANCE_FORK observation.
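For reference, a minimal sketch of that kind of test case (a
reconstruction, not the actual test; the thread count and names are
illustrative):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define NTHREADS 4	/* e.g. a DP system with four cores total */

static struct timeval create_ts[NTHREADS];	/* just before pthread_create() */
static struct timeval start_ts[NTHREADS];	/* as each thread starts running */

static void *worker(void *arg)
{
	long id = (long)arg;

	/* record when this thread actually got onto a CPU */
	gettimeofday(&start_ts[id], NULL);
	/* a real test would do some per-thread work here */
	return NULL;
}

static long us_between(const struct timeval *a, const struct timeval *b)
{
	return (b->tv_sec - a->tv_sec) * 1000000L +
	       (b->tv_usec - a->tv_usec);
}

int main(void)
{
	pthread_t tids[NTHREADS];
	struct timeval t0, t1;
	long i;

	gettimeofday(&t0, NULL);
	for (i = 0; i < NTHREADS; i++) {
		gettimeofday(&create_ts[i], NULL);
		if (pthread_create(&tids[i], NULL, worker, (void *)i)) {
			perror("pthread_create");
			exit(1);
		}
	}
	gettimeofday(&t1, NULL);

	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);

	printf("created %d threads in %ld us\n", NTHREADS,
	       us_between(&t0, &t1));
	for (i = 0; i < NTHREADS; i++)
		printf("thread %ld started %ld us after its pthread_create()\n",
		       i, us_between(&create_ts[i], &start_ts[i]));
	return 0;
}

If every new thread initially lands on the parent's CPU and only gets
moved by a later periodic balance, the per-thread start latencies grow
roughly linearly, which would explain the sequential behaviour described
above.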
thanks,
suresh