Message-ID: <20150622160437.GD16576@linux.vnet.ibm.com>
Date:	Mon, 22 Jun 2015 21:34:37 +0530
From:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org,
	mingo@...nel.org, mgorman@...e.de
Subject: Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting


> Would you happen to have 2 instance and 4 instance SPECjbb
> numbers, too?  The single instance numbers seem to be within
> the margin of error, but I would expect multi-instance numbers
> to show more dramatic changes, due to changes in how workloads
> converge...
>
> Those behave very differently from single instance, especially
> with the "always set the preferred_nid, even if we moved the
> task to a node we do NOT prefer" patch...
>
> It would be good to understand the behaviour of these patches
> under more circumstances.

Here are specjbb2005 numbers with 1 JVM per System, 2 JVMs per System
and 4 JVMs per System.

Plain 4.1.0-rc7-tip (i)
tip + Rik's ++ (ii)
tip + Srikar's ++ (iii)
tip + Srikar's + Modified Rik's patch (iv)

(i)   = Plain 4.1.0-rc7-tip = tip = 4.1.0-rc7 (b7ca96b)

(ii)  = tip + only Rik's suggested patches = (i) + Rik's fix numa_preferred_nid
	setting + Srikar's numa hotness + correct nid for evaluating task weight

(iii) = tip + Srikar's ++ = (i) + Srikar's numa hotness + correct nid for
	evaluating task weight + numa_has_capacity fix + always update
	preferred node

(iv)  = tip + Srikar's + modified Rik's patch = (i) + Srikar's numa hotness
	+ correct nid for evaluating task weight + numa_has_capacity fix
	+ Rik's modified patch.
	(Rik's modified patch == I removed the node_isset check before
	setting nid as the preferred node; see the sketch below.)
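
For reference, removing that check amounts to something like the hunk
below against task_numa_migrate() in kernel/sched/fair.c. This is just a
sketch from memory, not the actual patch; the context lines in
4.1.0-rc7-tip may differ slightly:

	if (p->numa_group) {
		if (env.best_cpu == -1)
			nid = env.src_nid;
		else
			nid = env.dst_nid;

-		if (node_isset(nid, p->numa_group->active_nodes))
-			sched_setnuma(p, nid);
+		/* Always mark the node we migrate towards as preferred. */
+		sched_setnuma(p, nid);
	}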

jbb2005_1JVMperSYSTEM
Plain 4.1.0-rc7-tip (i)
		  Metric:         Min         Max         Avg      StdDev     %Change
	      bopsperJVM:   265519.00   272466.00   269377.80     2391.04

tip + Rik's ++ (ii)
	      bopsperJVM:   264298.00   271236.00   266818.20     2579.62      -0.94%

tip + Srikar's ++ (iii)
	      bopsperJVM:   266774.00   272434.00   269839.60     2083.19       0.17%

tip + Srikar's + modified Rik's (iv)
	      bopsperJVM:   265037.00   274419.00   269280.00     3146.74      -0.04%



jbb2005_2JVMperSYSTEM
Plain 4.1.0-rc7-tip (i)
		  Metric:         Min         Max         Avg      StdDev     %Change
	      bopsperJVM:   269575.00   288495.00   279910.80     6151.49

tip + Rik's ++ (ii)
	      bopsperJVM:   286785.00   289515.00   288311.80     1206.66      2.90%

tip + Srikar's ++ (iii)
	      bopsperJVM:   278810.00   287706.00   282514.00     2946.37      0.90%

tip + Srikar's + modified Rik's (iv)
	      bopsperJVM:   283295.00   293466.00   287848.80     3427.06      2.70%


jbb2005_4JVMperSYSTEM
Plain 4.1.0-rc7-tip (i)
		  Metric:         Min         Max         Avg      StdDev     %Change
	      bopsperJVM:   248392.00   263826.00   257263.20     5946.44

tip + Rik's ++ (ii)
	      bopsperJVM:   257057.00   260303.00   258819.00     1234.46      0.60%

tip + Srikar's ++ (iii)
	      bopsperJVM:   252968.00   262006.00   257321.80     3131.00      0.02%

tip + Srikar's + modified Rik's (iv)
	      bopsperJVM:   257063.00   266196.00   262547.80     3099.57      1.99%


Summary:
Rik's suggested patchset (ii) performs best in the 2 JVM case and in
numa01. The modified version of his patch (iv) provides good performance
in the 2 JVM and 4 JVM cases as well as in numa01. Neither of these two
patchsets regresses in numa02 (the modified patch is probably a little
lower there).


-- 
Thanks and Regards
Srikar Dronamraju
