Message-ID: <20111219112326.GA15090@elte.hu>
Date:	Mon, 19 Dec 2011 12:23:26 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>
Cc:	peterz@...radead.org, linux-kernel@...r.kernel.org,
	vatsa@...ux.vnet.ibm.com, bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS


* Nikunj A. Dadhania <nikunj@...ux.vnet.ibm.com> wrote:

>     The following patches implement gang scheduling. These 
>     patches are *highly* experimental in nature and are not 
>     proposed for inclusion at this time.
> 
>     Gang scheduling is an approach where we make an effort to 
>     run related tasks (the gang) at the same time on a number 
>     of CPUs.

The thing is, the (non-)scalability consequences are awful: gang 
scheduling is a true scalability nightmare. Things like this in 
gang_sched():

+               for_each_domain(cpu_of(rq), sd) {
+                       count = 0;
+                       for_each_cpu(i, sched_domain_span(sd))
+                               count++;

makes me shudder.

So could we please approach this from the benchmarked workload 
angle first? The highest improvement is in ebizzy:

>     ebizzy 2vm (improved 15 times, i.e. 1520%)
>     +------------+--------------------+--------------------+----------+
>     |                               Ebizzy                            |
>     +------------+--------------------+--------------------+----------+
>     | Parameter  |        Baseline    |         gang:V2    | % imprv  |
>     +------------+--------------------+--------------------+----------+
>     | EbzyRecords|            1709.50 |           27701.00 |     1520 |
>     |    EbzyUser|              20.48 |             376.64 |     1739 |
>     |     EbzySys|            1384.65 |            1071.40 |       22 |
>     |    EbzyReal|             300.00 |             300.00 |        0 |
>     |     BwUsage|   2456114173416.00 |   2483447784640.00 |        1 |
>     |    HostIdle|              34.00 |              35.00 |       -2 |
>     |     UsrTime|               6.00 |              14.00 |      133 |
>     |     SysTime|              30.00 |              24.00 |       20 |
>     |      IOWait|              10.00 |               9.00 |       10 |
>     |    IdleTime|              51.00 |              51.00 |        0 |
>     |         TPS|              25.00 |              24.00 |       -4 |
>     | CacheMisses|       766543805.00 |      8113721819.00 |     -958 |
>     |   CacheRefs|      9420204706.00 |    136290854100.00 |     1346 |
>     |BranchMisses|      1191336154.00 |     11336436452.00 |     -851 |
>     |    Branches|    618882621656.00 |    459161727370.00 |      -25 |
>     |Instructions|   2517045997661.00 |   2325227247092.00 |        7 |
>     |      Cycles|   7642374654922.00 |   7657626973214.00 |        0 |
>     |     PageFlt|           23779.00 |           22195.00 |        6 |
>     |   ContextSW|         1517241.00 |         1786319.00 |      -17 |
>     |   CPUMigrat|             537.00 |             241.00 |       55 |
>     +-----------------------------------------------------------------+

What's behind this huge speedup? Does ebizzy use user-space 
spinlocks perhaps? Could we do something on the user-space side 
to get a similar speedup?

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
