Message-ID: <1322524692.21329.69.camel@sbsiddha-desk.sc.intel.com>
Date:	Mon, 28 Nov 2011 15:58:12 -0800
From:	Suresh Siddha <suresh.b.siddha@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...e.hu>, Venki Pallipadi <venki@...gle.com>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Mike Galbraith <efault@....de>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Tim Chen <tim.c.chen@...ux.jf.intel.com>,
	"Shi, Alex" <alex.shi@...el.com>
Subject: Re: [patch 3/6] sched, nohz: sched group, domain aware nohz idle
 load balancing

On Thu, 2011-11-24 at 03:53 -0800, Peter Zijlstra wrote:
> On Fri, 2011-11-18 at 15:03 -0800, Suresh Siddha wrote:
> > Make nohz idle load balancing more scalable by using the nr_busy_cpus in
> > the struct sched_group_power.
> > 
> > Idle load balance is kicked on one of the idle CPUs when there is at least
> > one idle CPU and
> > 
> >  - a busy rq having more than one task, or
> > 
> >  - a busy scheduler group having multiple busy CPUs that exceed the sched
> >    group power, or
> > 
> >  - for the SD_ASYM_PACKING domain, if the lower-numbered CPUs in that
> >    domain are idle compared to the busy ones.
> > 
> > This helps kick the idle load balancing request only when there is a
> > real imbalance, and once the system is mostly balanced, these kicks are
> > minimized.
> > 
> > These changes improved a context-switch-intensive workload running between
> > a number of task pairs by 2x on an 8-socket NHM-EX based system.
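
(To spell out the kick condition above: it boils down to roughly the
following check. This is only a standalone toy sketch of the logic in
the changelog, with made-up toy_* types standing in for the real
rq/sched_group_power structures, not the code in the patch.)

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the real rq / sched_group_power structures. */
struct toy_group {
        int nr_busy_cpus;       /* busy CPUs in this sched group */
        int group_power;        /* normalized group capacity, in CPU units */
};

struct toy_rq {
        int cpu;
        int nr_running;         /* tasks queued on this CPU */
        bool asym_packing;      /* SD_ASYM_PACKING set for this CPU's domain */
        int first_idle_cpu;     /* lowest-numbered idle CPU in that domain */
        struct toy_group *group;
};

/*
 * Should this busy CPU ask one of the idle CPUs to run the nohz idle
 * load balance?  One check per bullet in the changelog above.
 */
static bool toy_nohz_kick_needed(const struct toy_rq *rq, int nr_idle_cpus)
{
        if (nr_idle_cpus == 0)
                return false;

        /* A busy rq with more than one task. */
        if (rq->nr_running > 1)
                return true;

        /* A busy group whose busy CPUs exceed the group's power. */
        if (rq->group->nr_busy_cpus > rq->group->group_power)
                return true;

        /* SD_ASYM_PACKING: a lower-numbered CPU is idle while we are busy. */
        if (rq->asym_packing && rq->first_idle_cpu < rq->cpu)
                return true;

        return false;
}

int main(void)
{
        struct toy_group g = { .nr_busy_cpus = 3, .group_power = 2 };
        struct toy_rq rq = { .cpu = 4, .nr_running = 1, .asym_packing = false,
                             .first_idle_cpu = 6, .group = &g };

        /* The group is over its capacity, so a kick is requested. */
        printf("kick needed: %d\n", toy_nohz_kick_needed(&rq, 2));
        return 0;
}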
> 
> OK, but the nohz idle balance will still iterate the whole machine
> instead of smaller parts, right?

In the current series, yes. One idle CPU spending a bit more time doing
idle load balancing might be better than waking up multiple idle CPUs
from deep C-states.

But if needed, we can easily partition the nohz idle load balancing work
across multiple idle CPUs. We would need to strike a balance between the
partition size and how many idle CPUs we have to bring out of tickless
mode to do this idle load balancing.
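
Purely to illustrate the direction (the per-node split and all the
names below are hypothetical, nothing like this is in the posted
series): instead of one idle CPU scanning the whole machine, each
partition that has busy CPUs would nominate its own idle CPU, and only
those CPUs leave tickless mode.

#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS         16
#define CPUS_PER_NODE   4       /* one partition per NUMA node, for the example */

/*
 * Toy partitioned ilb selection: for every node with at least one busy
 * CPU, nominate the first idle CPU of that node to balance that node
 * only, instead of a single idle CPU scanning the whole machine.
 */
static void pick_partitioned_balancers(const bool idle[NR_CPUS])
{
        for (int node = 0; node < NR_CPUS / CPUS_PER_NODE; node++) {
                int base = node * CPUS_PER_NODE;
                int balancer = -1;
                bool has_busy = false;

                for (int cpu = base; cpu < base + CPUS_PER_NODE; cpu++) {
                        if (idle[cpu] && balancer < 0)
                                balancer = cpu; /* first idle CPU in the node */
                        else if (!idle[cpu])
                                has_busy = true;
                }

                /* Each woken balancer leaves tickless mode for its node only. */
                if (has_busy && balancer >= 0)
                        printf("node %d: wake CPU %d to balance CPUs %d-%d\n",
                               node, balancer, base, base + CPUS_PER_NODE - 1);
        }
}

int main(void)
{
        /* CPUs 4-6 and 8 are busy; everything else is idle and tickless. */
        bool idle[NR_CPUS] = { 1, 1, 1, 1,  0, 0, 0, 1,  0, 1, 1, 1,  1, 1, 1, 1 };

        pick_partitioned_balancers(idle);
        return 0;
}

With the example above only two CPUs (7 and 9) are woken and each is
responsible for four CPUs, versus one CPU scanning all sixteen today;
that is exactly the partition size vs wakeup count trade-off.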

The currently proposed series already has the infrastructure to identify
which scheduler domain has the imbalance. Perhaps we can use that to do
the nohz idle load balancing only for that domain.
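
A rough sketch of that direction (again purely illustrative, with toy
types in place of the real sched_domain/sched_group_power structures
and a crude "power" value standing in for the normalized group power;
this is not code from the series): walk the domain hierarchy from the
busy CPU, find the smallest domain that is over capacity and has idle
CPUs, and let the nominated balancer confine itself to that span.

#include <stdio.h>

#define MAX_LEVELS 3

/* Toy stand-in for one level of a CPU's sched domain hierarchy. */
struct toy_domain {
        const char *name;
        int first_cpu, last_cpu;        /* CPU span of this domain */
        int nr_busy_cpus;
        int nr_idle_cpus;
        int power;                      /* capacity, in CPU units */
};

/*
 * Walk the hierarchy bottom-up and return the smallest level that is
 * over capacity and has an idle CPU to offload to.  The nominated nohz
 * balancer could then confine its scan to that domain's span instead
 * of iterating the whole machine.
 */
static int find_imbalanced_level(const struct toy_domain dom[MAX_LEVELS])
{
        for (int level = 0; level < MAX_LEVELS; level++) {
                if (dom[level].nr_busy_cpus > dom[level].power &&
                    dom[level].nr_idle_cpus > 0)
                        return level;
        }
        return -1;
}

int main(void)
{
        /* SMT -> socket -> machine, as seen from one busy CPU. */
        const struct toy_domain dom[MAX_LEVELS] = {
                { "SMT",     4,  5, 2,  0,  1 }, /* both siblings busy, none idle */
                { "socket",  0,  7, 5,  3,  4 }, /* over capacity, idle CPUs present */
                { "machine", 0, 63, 5, 59, 32 },
        };

        int level = find_imbalanced_level(dom);
        if (level >= 0)
                printf("nohz balance only CPUs %d-%d (%s domain)\n",
                       dom[level].first_cpu, dom[level].last_cpu,
                       dom[level].name);
        else
                printf("no such domain, fall back to a machine-wide scan\n");
        return 0;
}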

For now, I am trying to do better than what mainline has.

thanks,
suresh

