Date:	Wed, 4 Mar 2009 23:36:57 +0530
From:	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Arun R Bharadwaj <arun@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, linux-pm@...ts.linux-foundation.org,
	a.p.zijlstra@...llo.nl, ego@...ibm.com, tglx@...utronix.de,
	andi@...stfloor.org, venkatesh.pallipadi@...el.com,
	vatsa@...ux.vnet.ibm.com, arjan@...radead.org
Subject: Re: [v2 PATCH 0/4] timers: framework for migration between CPU

* Ingo Molnar <mingo@...e.hu> [2009-03-04 18:33:21]:

> 
> * Arun R Bharadwaj <arun@...ux.vnet.ibm.com> wrote:
> 
> > $taskset -c 4,5,6,7 make -j4
> > 
> > my_driver queuing timers continuously on CPU 10.
> > 
> > idle load balancer currently on CPU 15
> > 
> > 
> > Case1: Without timer migration		Case2: With timer migration
> > 
> >    --------------------			   --------------------
> >    | Core | LOC Count |			   | Core | LOC Count |
> >    | 4    |   2504    |			   | 4    |   2503    |
> >    | 5    |   2502    |			   | 5    |   2503    |
> >    | 6    |   2502    |			   | 6    |   2502    |
> >    | 7    |   2498    |			   | 7    |   2500    |
> >    | 10   |   2501    |			   | 10   |     35    |
> >    | 15   |   2501    |			   | 15   |   2501    |
> >    --------------------			   --------------------
> > 
> >    ---------------------		   --------------------
> >    | Core | Sleep time |		   | Core | Sleep time |
> >    | 4    |    0.47168 |		   | 4    |    0.49601 |
> >    | 5    |    0.44301 |		   | 5    |    0.37153 |
> >    | 6    |    0.38979 |		   | 6    |    0.51286 |
> >    | 7    |    0.42829 |		   | 7    |    0.49635 |
> >    | 10   |    9.86652 |		   | 10   |   10.04216 |
> >    | 15   |    0.43048 |		   | 15   |    0.49056 |
> >    ---------------------		   ---------------------
> > 
> > Here, all the timers queued by the driver on CPU10 are moved to CPU15,
> > which is the idle load balancer.
> 
> The numbers with this automatic method based on the ilb-cpu look 
> pretty convincing. Is this what you expected it to be?

Yes Ingo, these are the expected results and they look pretty good.
However, there are two parameters controlled in this experiment:

1) The system is moderately loaded with kernbench so that there are
   some busy CPUs and some idle CPUs, and the nohz mask does not
   change often.  This leads to stable ilb-cpu selection.  If the
   system is either completely idle or so lightly loaded that the ilb
   nomination keeps changing, then the timers keep following the ilb
   cpu and it is very difficult to experimentally observe the
   benefits.

   Even if the ilb bounces, consolidating timers should increase the
   overlap between timers and reduce wakeups from idle.

   Optimising the ilb selection should significantly improve the
   experimental results for this patch.  (A sketch of the intended
   target-CPU choice follows after this list.)

2) The timer test driver creates quite a large timer load so that the
   effect of migration is observable as a sleep-time difference on the
   expected target cpu.  This kind of timer load may not be uncommon
   in an enterprise system with a large application stack loaded.
   (A minimal sketch of such a driver is included below.)
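
To make point 1 concrete, here is a minimal sketch of the kind of
target-CPU decision being discussed.  This is illustrative only, not
the posted patches: get_ilb_cpu() is a hypothetical helper standing in
for whatever interface ends up exposing the currently nominated ilb
cpu, while idle_cpu() is the existing scheduler helper.

#include <linux/sched.h>        /* idle_cpu() */

int get_ilb_cpu(void);          /* hypothetical: current ilb cpu, or -1 if none */

/*
 * Pick the CPU on which a newly armed, non-pinned timer should be
 * queued: deflect it to the ilb cpu only when the arming cpu is idle,
 * otherwise keep today's per-cpu behaviour.
 */
static int timer_target_cpu(int this_cpu)
{
        int ilb_cpu = get_ilb_cpu();

        if (idle_cpu(this_cpu) && ilb_cpu >= 0)
                return ilb_cpu;

        return this_cpu;
}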
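
Since the test driver itself is not posted in this thread, here is a
rough sketch (illustrative only, not the actual my_driver) of how such
a timer load can be generated: a kthread bound to one cpu arms a
self-rearming timer with mod_timer(), so the timers stay non-pinned
and the migration framework is free to move them.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/err.h>

static int load_cpu = 10;               /* cpu to generate the timer load from */
module_param(load_cpu, int, 0444);

static struct timer_list test_timer;
static struct task_struct *load_task;

static void test_timer_fn(unsigned long data)
{
        /* Non-pinned re-arm every tick; migration may move this timer. */
        mod_timer(&test_timer, jiffies + 1);
}

static int load_thread_fn(void *unused)
{
        /* Arm the first timer from load_cpu, then just sleep. */
        mod_timer(&test_timer, jiffies + 1);
        while (!kthread_should_stop())
                msleep(1000);
        return 0;
}

static int __init timer_load_init(void)
{
        setup_timer(&test_timer, test_timer_fn, 0);

        load_task = kthread_create(load_thread_fn, NULL, "timer-load");
        if (IS_ERR(load_task))
                return PTR_ERR(load_task);
        kthread_bind(load_task, load_cpu);
        wake_up_process(load_task);
        return 0;
}

static void __exit timer_load_exit(void)
{
        kthread_stop(load_task);
        del_timer_sync(&test_timer);
}

module_init(timer_load_init);
module_exit(timer_load_exit);
MODULE_LICENSE("GPL");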

--Vaidy
