Message-ID: <56E90943.3060100@redhat.com>
Date:	Wed, 16 Mar 2016 08:20:35 +0100
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>,
	rkrcmar@...hat.com, joro@...tes.org, bp@...en8.de, gleb@...nel.org,
	alex.williamson@...hat.com
Cc:	kvm@...r.kernel.org, linux-kernel@...r.kernel.org, wei@...hat.com,
	sherry.hurwitz@....com
Subject: Re: [PART1 RFC v2 05/10] KVM: x86: Detect and Initialize AVIC support



On 16/03/2016 07:22, Suravee Suthikulpanit wrote:
> This is mainly causing a large number of VMEXIT due to NPF.

Got it, it's here in the manual: "System software is responsible for
setting up a translation in the nested page table granting guest read
and write permissions for accesses to the vAPIC Backing Page in SPA
space. AVIC hardware walks the nested page table to check permissions,
but does not use the SPA address specified in the leaf page table entry.
Instead, AVIC hardware finds this address in the AVIC_BACKING_PAGE
pointer field of the VMCB".

Strictly speaking the address in the 0xFEE00000 translation is unused
and could be all zeroes, but I suggest that you set up an APIC access
page like Intel does (4k only), using the special memslot.  The AVIC
backing page can then point to lapic->regs.
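For reference, a rough sketch of what that could look like on the SVM
side.  This is illustrative only, not patch code: the function names
(avic_init_access_page/avic_init_backing_page) and the
apic_access_page_done flag are placeholders, and error handling is
minimal.

	/* Sketch: reserve the 4k APIC access page through the special
	 * memslot, as Intel's code does, then back the vAPIC page with
	 * the in-kernel lapic register page. */
	static int avic_init_access_page(struct kvm_vcpu *vcpu)
	{
		struct kvm *kvm = vcpu->kvm;
		int ret;

		if (kvm->arch.apic_access_page_done)
			return 0;

		/* One 4k translation at 0xFEE00000.  Only the NPT
		 * permissions matter here: AVIC hardware walks the NPT
		 * for access checks but takes the actual backing-page
		 * address from the VMCB, not from the leaf PTE. */
		ret = x86_set_memory_region(kvm,
					    APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
					    APIC_DEFAULT_PHYS_BASE,
					    PAGE_SIZE);
		if (ret)
			return ret;

		kvm->arch.apic_access_page_done = true;
		return 0;
	}

	static int avic_init_backing_page(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);
		int ret;

		ret = avic_init_access_page(vcpu);
		if (ret)
			return ret;

		/* Point the AVIC backing page at lapic->regs so KVM and
		 * the AVIC hardware share the same APIC register state. */
		svm->avic_backing_page = virt_to_page(vcpu->arch.apic->regs);
		return 0;
	}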

Thanks for the explanation!

Paolo

> CASE1: Using x86_set_memory_region() for AVIC backing page
> 
> # ./perf-vmexit.sh 10
> [ perf record: Woken up 1 times to write data ]
> [ perf record: Captured and wrote 2.813 MB perf.data.guest (30356 samples) ]
> 
> 
> Analyze events for all VMs, all VCPUs:
> 
>              VM-EXIT    Samples  Samples%     Time%    Min Time    Max Time    Avg time
> 
>            interrupt      10042    66.30%    81.33%      0.43us    202.50us      7.43us ( +-   1.20% )
>                  msr       5004    33.04%    15.76%      0.73us     12.21us      2.89us ( +-   0.43% )
>                pause         58     0.38%     0.18%      0.56us      5.88us      2.92us ( +-   6.43% )
>                  npf         35     0.23%     2.01%      6.41us    207.78us     52.70us ( +-  23.67% )
>                  nmi          4     0.03%     0.02%      2.31us      4.67us      3.49us ( +-  14.26% )
>                   io          3     0.02%     0.70%     82.75us    360.90us    214.28us ( +-  37.64% )
>      avic_incomp_ipi          1     0.01%     0.00%      2.17us      2.17us      2.17us ( +-   0.00% )
> 
> Total Samples:15147, Total events handled time:91715.78us.
> 
> 
> CASE2: Using the lapic regs page for AVIC backing page.
> 
> # ./perf-vmexit.sh 10
> [ perf record: Woken up 255 times to write data ]
> [ perf record: Captured and wrote 509.202 MB perf.data.guest (5718856 samples) ]
> 
> 
> Analyze events for all VMs, all VCPUs:
> 
>              VM-EXIT    Samples  Samples%     Time%    Min Time    Max Time    Avg time
> 
>                  npf    1897710    99.33%    98.08%      1.09us    243.22us      1.67us ( +-   0.04% )
>            interrupt       7818     0.41%     1.44%      0.44us    216.55us      5.97us ( +-   1.92% )
>                  msr       5001     0.26%     0.45%      0.68us     12.58us      2.89us ( +-   0.50% )
>                pause         25     0.00%     0.00%      0.71us      4.23us      2.03us ( +-  10.76% )
>                   io          4     0.00%     0.03%     73.91us    337.29us    206.74us ( +-  26.38% )
>                  nmi          1     0.00%     0.00%      5.92us      5.92us      5.92us ( +-   0.00% )
> 
> Total Samples:1910559, Total events handled time:3229214.64us.
> 
> Thanks,
> Suravee
