Message-ID: <87mxa3zqm1.fsf@abhimanyu.in.ibm.com>
Date:	Wed, 04 Jan 2012 16:22:54 +0530
From:	Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
To:	Avi Kivity <avi@...hat.com>, Rik van Riel <riel@...hat.com>
Cc:	Ingo Molnar <mingo@...e.hu>, peterz@...radead.org,
	linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
	bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS

On Mon, 02 Jan 2012 11:37:22 +0200, Avi Kivity <avi@...hat.com> wrote:
> On 12/31/2011 04:21 AM, Nikunj A Dadhania wrote:
> >
> >     GangV2:
> >     27.45%       ebizzy  libc-2.12.so            [.] __memcpy_ssse3_back
> >     12.12%       ebizzy  [kernel.kallsyms]       [k] clear_page
> >      9.22%       ebizzy  [kernel.kallsyms]       [k] __do_page_fault
> >      6.91%       ebizzy  [kernel.kallsyms]       [k] flush_tlb_others_ipi
> >      4.06%       ebizzy  [kernel.kallsyms]       [k] get_page_from_freelist
> >      4.04%       ebizzy  [kernel.kallsyms]       [k] ____pagevec_lru_add
> >
> >     GangBase:
> >     45.08%       ebizzy  [kernel.kallsyms]       [k] flush_tlb_others_ipi
> >     15.38%       ebizzy  libc-2.12.so            [.] __memcpy_ssse3_back
> >      7.00%       ebizzy  [kernel.kallsyms]       [k] clear_page
> >      4.88%       ebizzy  [kernel.kallsyms]       [k] __do_page_fault
> 
> Looping in flush_tlb_others().  Rik, what trace can we run to find out
> why PLE directed yield isn't working as expected?
> 
I tried some experiments by adding a pause_loop_exits stat to
kvm_vcpu_stat.

Here are some observations for the Baseline-only (8 VM) case:

              | ple_gap=128 | ple_gap=64 | ple_gap=256 | ple_window=2048
--------------+-------------+------------+-------------+----------------
EbzyRecords/s |    2247.50  |    2132.75 |    2086.25  |      1835.62
PauseExits    | 7928154.00  | 6696342.00 | 7365999.00  |  50319582.00

With ple_window = 2048, PauseExits is more than 6 times that of the
default case.

-----

    From: Nikunj A. Dadhania <nikunj@...ux.vnet.ibm.com>

    Add Pause-loop-exit stats to kvm_vcpu_stat

    Signed-off-by: Nikunj A. Dadhania <nikunj@...ux.vnet.ibm.com>


diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b4973f4..be2e7f2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -539,6 +539,7 @@ struct kvm_vcpu_stat {
        u32 hypercalls;
        u32 irq_injections;
        u32 nmi_injections;
+       u32 pause_loop_exits;
 };
 
 struct x86_instruction_info;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 579a0b5..29e90b7 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4897,6 +4897,8 @@ out:
 static int handle_pause(struct kvm_vcpu *vcpu)
 {
        skip_emulated_instruction(vcpu);
+       ++vcpu->stat.pause_loop_exits;
        kvm_vcpu_on_spin(vcpu);
 
        return 1;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c38efd7..87433a8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -149,6 +149,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
        { "mmu_unsync", VM_STAT(mmu_unsync) },
        { "remote_tlb_flush", VM_STAT(remote_tlb_flush) },
        { "largepages", VM_STAT(lpages) },
+       { "pause_loop_exits", VCPU_STAT(pause_loop_exits) },
        { NULL }
 };

