Message-ID: <87obumqtvp.fsf@abhimanyu.in.ibm.com>
Date:	Mon, 02 Jan 2012 16:00:18 +0530
From:	Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
To:	Avi Kivity <avi@...hat.com>, Rik van Riel <riel@...hat.com>
Cc:	Ingo Molnar <mingo@...e.hu>, peterz@...radead.org,
	linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
	bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS

On Mon, 02 Jan 2012 11:37:22 +0200, Avi Kivity <avi@...hat.com> wrote:
> On 12/31/2011 04:21 AM, Nikunj A Dadhania wrote:

> >
> >     non-PLE - Test Setup:
> >     =====================

> >
> >     ebizzy 8vm (improved 331%)
[...]
> >     GangV2:
> >     27.96%       ebizzy  libc-2.12.so            [.] __memcpy_ssse3_back
> >     12.13%       ebizzy  [kernel.kallsyms]       [k] clear_page
> >     11.66%       ebizzy  [kernel.kallsyms]       [k] __bitmap_empty
> >     11.54%       ebizzy  [kernel.kallsyms]       [k] flush_tlb_others_ipi
> >      5.93%       ebizzy  [kernel.kallsyms]       [k] __do_page_fault
> >
> >     GangBase:
> >     36.34%       ebizzy  [kernel.kallsyms]  [k] __bitmap_empty
> >     35.95%       ebizzy  [kernel.kallsyms]  [k] flush_tlb_others_ipi
> >      8.52%       ebizzy  libc-2.12.so       [.] __memcpy_ssse3_back
> 
> Same thing.  __bitmap_empty() is likely the cpumask_empty() called from
> flush_tlb_others_ipi(), so ~70% of the time is spent in this busy loop.
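For reference, the sender side of flush_tlb_others_ipi() looks roughly
like this (condensed from memory of the 3.x-era arch/x86/mm/tlb.c, so
treat the names as approximate):

	if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
		/* Ask the target CPUs to flush their TLBs... */
		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
				    INVALIDATE_TLB_VECTOR_START + sender);

		/*
		 * ...then spin until every target has acked by clearing
		 * its bit.  If the host deschedules a target vCPU, the
		 * sender burns cycles here for that vCPU's entire
		 * scheduled-out stretch; this loop is the
		 * __bitmap_empty/flush_tlb_others_ipi time in the
		 * profile above.
		 */
		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
			cpu_relax();
	}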
> 
> Xen works around this particular busy loop by having a hypercall for
> flushing the tlb, but this is very fragile (and broken wrt
> get_user_pages_fast() IIRC).
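For comparison, a condensed sketch from memory of the Xen PV hook
(xen_flush_tlb_others() in arch/x86/xen/mmu.c, details approximate): the
guest hands the whole flush to the hypervisor in one mmuext_op and never
busy-waits, since Xen can skip or defer the flush for vCPUs that are not
currently running:

	static void xen_flush_tlb_others(const struct cpumask *cpus,
					 struct mm_struct *mm, unsigned long va)
	{
		struct {
			struct mmuext_op op;
			DECLARE_BITMAP(mask, NR_CPUS);
		} *args;
		struct multicall_space mcs;

		mcs = xen_mc_entry(sizeof(*args));
		args = mcs.args;
		args->op.arg2.vcpumask = to_cpumask(args->mask);
		cpumask_copy(to_cpumask(args->mask), cpus);

		/* Flush everything, or just the one page. */
		if (va == TLB_FLUSH_ALL) {
			args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
		} else {
			args->op.cmd = MMUEXT_INVLPG_MULTI;
			args->op.arg1.linear_addr = va;
		}

		MULTI_mmuext_op(mcs.mc, &args->op, 1, NULL, DOMID_SELF);
		xen_mc_issue(PARAVIRT_LAZY_MMU);
	}

The get_user_pages_fast() fragility, as I understand it: gup_fast walks
page tables with IRQs disabled and relies on the flush IPI being blocked
to serialize against page table teardown, and a pure hypercall flush
never delivers that IPI.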
> 
> >
> >     dbench 8vm (degraded -30%)
> >     +------------+--------------------+--------------------+----------+
> >     |                               Dbench                            |
> >     +------------+--------------------+--------------------+----------+
> >     | Parameter  | GangBase           |   Gang V2          | % imprv  |
> >     +------------+--------------------+--------------------+----------+
> >     |      dbench|               2.01 |               1.38 |      -30 |
> >     |     BwUsage|    100408068913.00 |    176095548113.00 |       75 |
> >     |    HostIdle|              82.00 |              74.00 |        9 |
> >     |      IOWait|              25.00 |              23.00 |        8 |
> >     |    IdleTime|              74.00 |              71.00 |       -4 |
> >     |         TPS|              13.00 |              13.00 |        0 |
> >     | CacheMisses|       137351386.00 |       267116184.00 |      -94 |
> >     |   CacheRefs|      4347880250.00 |      5830408064.00 |       34 |
> >     |BranchMisses|       602120546.00 |      1110592466.00 |      -84 |
> >     |    Branches|     22275747114.00 |     39163309805.00 |       75 |
> >     |Instructions|    107942079625.00 |    195313721170.00 |      -80 |
> >     |      Cycles|    271014283494.00 |    481886203993.00 |      -77 |
> >     |     PageFlt|           44373.00 |           47679.00 |       -7 |
> >     |   ContextSW|         3318033.00 |        11598234.00 |     -249 |
> >     |   CPUMigrat|           82475.00 |          423066.00 |     -412 |
> >     +------------+--------------------+--------------------+----------+
> >
> 
> Rik, what's going on?  ContextSW is relatively low in the base load,
> looks like PLE is asleep at the wheel.
> 
Avi, the above dbench result is from a non-PLE machine, so PLE will not
come into the picture here.
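
For completeness, the PLE mechanism Avi is referring to: on PLE-capable
hardware a PAUSE-loop VM exit lands in kvm_vcpu_on_spin(), which attempts
a directed yield to a runnable sibling vCPU, so working PLE should show
up as a much higher ContextSW count.  A heavily simplified sketch from
memory of virt/kvm/kvm_main.c; the real code also remembers
last_boosted_vcpu, and kvm_vcpu_yield_to() below stands in for the task
lookup plus yield_to(task, 1) that it actually performs:

	void kvm_vcpu_on_spin(struct kvm_vcpu *me)
	{
		struct kvm_vcpu *vcpu;
		int i;

		kvm_for_each_vcpu(i, vcpu, me->kvm) {
			if (vcpu == me)
				continue;
			/* A vCPU sleeping in the host is blocked,
			 * not spinning on a lock; skip it. */
			if (waitqueue_active(&vcpu->wq))
				continue;
			/* Directed yield toward a likely lock holder. */
			if (kvm_vcpu_yield_to(vcpu) > 0)
				break;
		}
	}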
