Message-ID: <20110208181026.GB8278@dirshya.in.ibm.com>
Date: Tue, 8 Feb 2011 23:40:26 +0530
From: Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
To: Ranjit Manomohan <ranjitm@...gle.com>
Cc: linux-kernel@...r.kernel.org, Mike Galbraith <efault@....de>,
Nikhil Rao <ncrao@...gle.com>, Salman Qazi <sqazi@...gle.com>,
Dhaval Giani <dhaval.giani@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Venkatesh Pallipadi <venki@...gle.com>,
Paul Turner <pjt@...gle.com>
Subject: Re: [ANNOUNCE] Linsched for 2.6.35 released
* Ranjit Manomohan <ranjitm@...gle.com> [2010-11-15 17:52:05]:
> On Mon, Oct 18, 2010 at 9:52 PM, Vaidyanathan Srinivasan
> <svaidy@...ux.vnet.ibm.com> wrote:
> > * Ranjit Manomohan <ranjitm@...gle.com> [2010-10-12 10:29:54]:
> >
[snip]
> > Can you help me figure out how to get to kstat_cpu() or per-cpu
> > kernel_stat accounting/utilisation metrics within the simulation?
>
> We don't use the kstat_cpu accounting in the simulation, since it
> does not really make sense in this environment.
>
> We have a timer-driven loop that advances time globally and kicks off
> events scheduled to run at specified times on each CPU. The periodic
> timer tick is one among these events. Since there is really no notion
> of system vs user time in this scenario, the current code disables the
> update_process_times routine. I am not sure how these times relate to
> the task placement logic you are trying to verify. If you could let me
> know how you plan to use these then I can try to accommodate that in
> the simulation.
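Just to confirm my understanding of the loop you describe: I imagine
it works roughly like the hypothetical sketch below. This is only an
illustration in plain C, not linsched's actual code; the names
(sim_event, sim_run, pending, sample_tick) are made up.

```c
/* Hypothetical sketch of a timer-driven simulation loop: global time
 * advances one tick at a time, and any event whose deadline has
 * arrived fires on its designated CPU. Not linsched's actual code. */
#include <stddef.h>

struct sim_event {
	unsigned long when;		/* tick at which the event fires */
	int cpu;			/* CPU the event runs on */
	void (*fn)(int cpu);		/* callback, e.g. the periodic tick */
	struct sim_event *next;
};

static unsigned long sim_time;		/* global simulated time, in ticks */
static struct sim_event *pending;	/* singly linked list of events */

static int last_fired_cpu = -1;		/* records which CPU last fired */
static void sample_tick(int cpu)	/* example event callback */
{
	last_fired_cpu = cpu;
}

/* Advance global time tick by tick, firing every pending event whose
 * deadline has been reached on the CPU it was scheduled for. */
static void sim_run(unsigned long ticks)
{
	while (sim_time < ticks) {
		sim_time++;
		for (struct sim_event **ep = &pending; *ep; ) {
			struct sim_event *e = *ep;

			if (e->when <= sim_time) {
				*ep = e->next;	/* dequeue before firing */
				e->fn(e->cpu);
			} else {
				ep = &e->next;
			}
		}
	}
}
```

The periodic timer tick would then just be one such event that
re-queues itself each time it fires.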
The current setup lets us find out for how long each task ran.
I would like to use the kernel_stat information to understand 'which
CPU' ran the task. Basically, we could run nr_tasks < nr_cpus and
see the tasks settle onto the right CPUs within the sched domain
topology. This can be verified by checking each CPU's utilisation or
run time at the end of the simulation. For example, two tasks started
on the same socket of a dual-socket dual-core system should settle to
one task per socket, with the load balancer gradually spreading the
tasks around. The ability to create diverse topologies within
linsched is very useful for testing these load-balancer functions and
corner cases.
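As a concrete illustration of the check I have in mind, assuming the
simulator can export a per-CPU run-time array at the end of a run (the
role kstat_cpu()/kernel_stat plays on a real kernel), something like
this hypothetical sketch would do. The names cpu_runtime and
busy_cpus_on_socket are made up for illustration.

```c
/* Hypothetical end-of-simulation check, not linsched code: on a
 * dual-socket, dual-core topology with two runnable tasks, a working
 * load balancer should leave exactly one busy CPU per socket.
 * cpu_runtime[] stands in for the per-CPU accounting that
 * kstat_cpu()/kernel_stat would provide on a real kernel. */
#define NR_SOCKETS		2
#define CORES_PER_SOCKET	2
#define NR_CPUS			(NR_SOCKETS * CORES_PER_SOCKET)

static unsigned long cpu_runtime[NR_CPUS];	/* filled in by the simulator */

/* Count how many CPUs on the given socket accumulated any run time. */
static int busy_cpus_on_socket(int socket)
{
	int busy = 0;

	for (int core = 0; core < CORES_PER_SOCKET; core++)
		if (cpu_runtime[socket * CORES_PER_SOCKET + core] > 0)
			busy++;
	return busy;
}
```

With two CPU-bound tasks, cpu_runtime ending up as {N, 0, N, 0} rather
than {N, N, 0, 0} would confirm the balancer moved one task off the
shared socket.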
> Sorry for the delay in response. My mail filters messed this up.
I got your reply earlier. No problem with the delay.
Do you have a new version to share? Are there any new features that
you are planning?
--Vaidy
--