Date:	Mon, 02 Jan 2012 09:50:30 +0530
From:	Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...e.hu>, Avi Kivity <avi@...hat.com>
Cc:	peterz@...radead.org, linux-kernel@...r.kernel.org,
	vatsa@...ux.vnet.ibm.com, bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS

On Sat, 31 Dec 2011 07:51:15 +0530, Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com> wrote:
> On Fri, 30 Dec 2011 15:40:06 +0530, Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com> wrote:
> > On Fri, 30 Dec 2011 10:51:47 +0100, Ingo Molnar <mingo@...e.hu> wrote:
> > > 
> > > * Avi Kivity <avi@...hat.com> wrote:
> > > 
> > > > [...]
> > > > 
> > > > The first part appears to be unrelated to ebizzy itself - it's 
> > > > the kunmap_atomic() flushing ptes.  It could be eliminated by 
> > > > switching to a non-highmem kernel, or by allocating more PTEs 
> > > > for kmap_atomic() and batching the flush.
> > > 
> > > Nikunj, please only run pure 64-bit/64-bit combinations - by the 
> > > time any fix goes upstream and trickles down to distros 32-bit 
> > > guests will be even less relevant than they are today.
> > > 
> > Sure, Ingo. I got a 64-bit guest working yesterday and am in the process
> > of collecting benchmark numbers for it.
> > 
> Here are the results collected from the 64-bit VM runs.
> 
[...]
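
(On the kunmap_atomic() point Avi raised above, here is a minimal sketch of
the access pattern in question; the helper name is hypothetical and not from
this thread:)

    /*
     * Classic 32-bit highmem access pattern. On such kernels each
     * kunmap_atomic() may flush the temporary PTE/TLB entry right
     * away; batching those flushes, or running a non-highmem 64-bit
     * kernel, avoids that cost.
     */
    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_from_highmem_page(struct page *page, void *dst, size_t len)
    {
            void *vaddr = kmap_atomic(page); /* map page into a fixmap slot */

            memcpy(dst, vaddr, len);
            kunmap_atomic(vaddr);            /* unmap; may flush the PTE */
    }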

PLE worst case:

>      
>     dbench 8vm (degraded -8%)
>     |   benchmark|           Baseline |            Gang V2 | % change |
>     |      dbench|               2.27 |               2.09 |       -8 |
[...]
>     dbench needs some more love; I will get the perf top callers for
>     that.
>

    Baseline:
    75.18%         init  [kernel.kallsyms]  [k] native_safe_halt
    23.32%      swapper  [kernel.kallsyms]  [k] native_safe_halt

    Gang V2:
    73.21%         init  [kernel.kallsyms]       [k] native_safe_halt
    25.74%      swapper  [kernel.kallsyms]       [k] native_safe_halt

That does not give much of a clue :(
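One thing I can try next (assuming call-graph recording works in this setup)
is capturing callers instead of flat samples, e.g.:

    # system-wide sampling with call graphs for 10s,
    # to see what is driving us into native_safe_halt
    perf record -a -g -- sleep 10
    perf report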
Comments?

>     non-PLE test setup:
> 
>     dbench 8vm (degraded -30%)
>     |   benchmark|           Baseline |            Gang V2 | % change |
>     |      dbench|               2.01 |               1.38 |      -30 |


    Baseline:
    57.75%         init  [kernel.kallsyms]  [k] native_safe_halt
    40.88%      swapper  [kernel.kallsyms]  [k] native_safe_halt

    Gang V2:
    56.25%         init  [kernel.kallsyms]  [k] native_safe_halt
    42.84%      swapper  [kernel.kallsyms]  [k] native_safe_halt

The picture is similar here: in both runs almost all samples land in
native_safe_halt.

Regards
Nikunj
