Message-ID: <20070315211240.GC12447@linux-os.sc.intel.com>
Date: Thu, 15 Mar 2007 14:12:41 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: ray-gmail@...rabbit.org
Cc: "Siddha, Suresh B" <suresh.b.siddha@...el.com>,
Con Kolivas <kernel@...ivas.org>,
linux kernel mailing list <linux-kernel@...r.kernel.org>,
ck list <ck@....kolivas.org>
Subject: Re: RSDL v0.30 cpu scheduler for mainline kernels
On Thu, Mar 15, 2007 at 11:58:39AM -0700, Ray Lee wrote:
> With more CPUs, the context switch period can be multiplied by that
> number of CPUs while still allowing all tasks the same frequency of
> access to the CPU.
Are you assuming the other cpus might be idle?
It also depends on the load of the system, right? If all the cpus are
loaded, then increasing the period will decrease the frequency.
If some of the cpus are idle, then it doesn't matter what context
switch rate we use (as we don't get context-switched out by another task).
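The load-dependence above can be sketched as a back-of-envelope model (this is an illustration, not anything from the thread; the round-robin assumption and the function name are mine):

```python
def access_frequency_hz(period_ms, nr_cpus, nr_tasks):
    """How often each runnable task gets onto a CPU, assuming a simple
    round-robin of nr_tasks over nr_cpus with a fixed switch period.

    Illustrative model only: if there are at least as many CPUs as
    runnable tasks, each task keeps a CPU to itself and is never
    preempted by a peer, so the switch period is irrelevant.
    """
    if nr_tasks <= nr_cpus:
        return float('inf')  # task runs continuously; period doesn't matter
    # Each task must share its CPU with nr_tasks/nr_cpus runnable tasks.
    return 1000.0 * nr_cpus / (period_ms * nr_tasks)

# Fully loaded (8 tasks on 4 CPUs): stretching the period from 6ms to
# 24ms cuts each task's access frequency by 4x.
loaded_6ms = access_frequency_hz(6, 4, 8)    # ~83.3 Hz
loaded_24ms = access_frequency_hz(24, 4, 8)  # ~20.8 Hz

# Partially idle (2 tasks on 4 CPUs): the period makes no difference.
idle_6ms = access_frequency_hz(6, 4, 2)
idle_24ms = access_frequency_hz(24, 4, 2)
```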
> With 4 processors, the context switch would be
> 24ms, by which point we're probably reaching the point of diminishing
> returns for minimizing overhead and maximizing throughput.
BTW, the overhead is not just the context switch cost, but also the cache
evictions that the incoming process will bring.
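A rough sketch of why the cache term dominates (all three numbers below are made-up assumptions for illustration, not measurements from the thread):

```python
# Hypothetical costs -- chosen only to show the shape of the arithmetic.
direct_switch_us = 5      # assumed direct context-switch cost (save/restore)
working_set_kb = 256      # assumed working set the incoming task reloads
cache_miss_ns = 100       # assumed cost per 64-byte cache-line miss

# Refilling the evicted working set, one cache line at a time.
lines = working_set_kb * 1024 // 64
refill_us = lines * cache_miss_ns / 1000.0

# The visible cost of the switch is the direct cost plus the refill,
# and under these assumptions the refill dwarfs the direct cost.
total_us = direct_switch_us + refill_us
```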
> >We need to minimize these context switches.
>
> That's a judgement call. If a synthetic benchmark degrades but other
I was showing the degradation with the SPECjbb2000 workload. The synthetic
workload was just for showing/reproducing the issue quickly.
> things improve, then this, as most everything in computer science, is
> yet another trade-off that needs to be evaluated. (You recognize there
> is a tradeoff here, right?
I am with you. But let's say these tasks are not interactive; then
what is the need for paying this penalty?
thanks,
suresh