Message-ID: <87a7i6ljr0.fsf@vitty.brq.redhat.com>
Date:   Thu, 07 Mar 2019 17:41:39 +0100
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        kvm@...r.kernel.org, Junaid Shahid <junaids@...gle.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 RFC] x86/kvm/mmu: make mmu->prev_roots cache work for NPT case

Sean Christopherson <sean.j.christopherson@...el.com> writes:

> On Thu, Mar 07, 2019 at 03:07:05PM +0100, Vitaly Kuznetsov wrote:
>> Vitaly Kuznetsov <vkuznets@...hat.com> writes:
>> 
>> > Alternative patch: remove the filtering from kvm_mmu_get_page() and check
>> > for direct on call sites. cr4_pae setting in kvm_calc_mmu_role_common()
>> > can be preserved for consistency.
>> >
>> > Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>> > ---
>> >  arch/x86/kvm/mmu.c | 6 ++----
>> >  1 file changed, 2 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> > index f2d1d230d5b8..7fb8118f2af6 100644
>> > --- a/arch/x86/kvm/mmu.c
>> > +++ b/arch/x86/kvm/mmu.c
>> > @@ -2420,8 +2420,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>> >  	role = vcpu->arch.mmu->mmu_role.base;
>> >  	role.level = level;
>> >  	role.direct = direct;
>> > -	if (role.direct)
>> > -		role.cr4_pae = 0;
>> >  	role.access = access;
>> >  	if (!vcpu->arch.mmu->direct_map
>> >  	    && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
>> > @@ -5176,7 +5174,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
>> >  		 gpa, bytes, sp->role.word);
>> >  
>> >  	offset = offset_in_page(gpa);
>> > -	pte_size = sp->role.cr4_pae ? 8 : 4;
>> > +	pte_size = (sp->role.cr4_pae && !sp->role.direct) ? 8 : 4;
>> >  
>> >  	/*
>> >  	 * Sometimes, the OS only writes the last one bytes to update status
>> > @@ -5200,7 +5198,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
>> >  	page_offset = offset_in_page(gpa);
>> >  	level = sp->role.level;
>> >  	*nspte = 1;
>> > -	if (!sp->role.cr4_pae) {
>> > +	if (!sp->role.cr4_pae || sp->role.direct) {
>> >  		page_offset <<= 1;	/* 32->64 */
>> >  		/*
>> >  		 * A 32-bit pde maps 4MB while the shadow pdes map
>> 
>> While I personally prefer this approach to not setting role.cr4_pae in
>> kvm_calc_mmu_role_common() I'd like to get maintainers opinion (I did
>> test the patch with ept=off but I have to admit I don't know much about
>> shadow page tables which actually use detect_write_misaligned()/
>> get_written_sptes())
>> 
>> Paolo, Radim, (anyone else) - any thoughts?
>
> The changes to detect_write_misaligned() and get_written_sptes() are
> wrong/unnecessary as those functions should only be called for indirect
> shadow PTEs, e.g.:
>
> 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
> 		if (detect_write_misaligned(sp, gpa, bytes) ||
> 		      detect_write_flooding(sp)) {
> 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
> 			++vcpu->kvm->stat.mmu_flooded;
> 			continue;
> 		}
>
> 		spte = get_written_sptes(sp, gpa, &npte);
> 		if (!spte)
> 			continue;
>
> 		...
> 	}
>
> If anything, they could WARN_ON(sp->role.direct), but IMO that's overkill
> since they're static helpers with a single call site (above).
>
> I'm missing the background for this patch, why does clearing role.cr4_pae
> for direct SPTEs cause problems with the prev_roots cache?

Oh, thank you for taking a look! You've probably missed my original v1
patch where I tried to explain the issue:

"I noticed that fast_cr3_switch() always fails when we switch back from
L2 to L1 as it is not able to find a cached root. This is odd: host's
CR3 usually stays the same, we expect to always follow the fast
path. Turns out the problem is that page role is always mismatched
because kvm_mmu_get_page() filters out cr4_pae when direct, the value is
stored in page header and later compared with new_role in
cached_root_available(). As cr4_pae is always set in long mode
prev_roots cache is dysfunctional.

The problem appeared after we introduced kvm_calc_mmu_role_common():
previously, we were only setting this bit for the shadow MMU root, but
now we set it for everything. Restore the original behavior.

Fixes: 7dcd57552008 ("x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed")
"

and the v1 patch just removed the role.cr4_pae setting from
kvm_calc_mmu_role_common() and moved it to
kvm_calc_shadow_mmu_root_page_role(). This is an alternative approach:
always set cr4_pae, but rely on kvm_mmu_get_page() filtering out cr4_pae
when direct is set.
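
(Illustration only: the mismatch boils down to something like the toy
model below. The role layout and helpers are made up and far simpler
than the real kvm_mmu_page_role / kvm_mmu_get_page() code; the point is
just why the role.word comparison in cached_root_available() can never
succeed for direct roots, and how either fix direction restores the
match.)

#include <stdbool.h>
#include <stdio.h>

/* Made-up, simplified stand-in for kvm_mmu_page_role. */
union toy_role {
	unsigned int word;
	struct {
		unsigned int level:4;
		unsigned int cr4_pae:1;
		unsigned int direct:1;
	};
};

/*
 * Stand-in for kvm_calc_mmu_role_common() after commit 7dcd57552008:
 * cr4_pae is set unconditionally in long mode, even for direct (NPT) roots.
 */
static union toy_role calc_common_role(bool direct)
{
	union toy_role role = { .word = 0 };

	role.level = 4;
	role.cr4_pae = 1;
	role.direct = direct;
	return role;
}

/* Stand-in for kvm_mmu_get_page(): filters out cr4_pae when direct. */
static union toy_role header_role(union toy_role role)
{
	if (role.direct)
		role.cr4_pae = 0;
	return role;
}

int main(void)
{
	/* new_role computed when switching back from L2 to L1 (direct root). */
	union toy_role new_role = calc_common_role(true);
	/* role.word that kvm_mmu_get_page() stored in the page header. */
	union toy_role stored = header_role(new_role);

	/* cached_root_available() compares the raw words: always a miss. */
	printf("cache hit: %d\n", new_role.word == stored.word);	/* 0 */

	/*
	 * Fix direction 1 (v1): don't set cr4_pae in the common role for
	 * direct roots, i.e. calc_common_role() would leave cr4_pae clear
	 * here, matching what the page header stores.
	 *
	 * Fix direction 2 (the quoted alternative diff): drop the filtering
	 * from kvm_mmu_get_page() and guard the cr4_pae users on
	 * !sp->role.direct instead, i.e. header_role() becomes the identity.
	 */
	union toy_role stored_fixed = new_role;	/* no filtering */
	printf("cache hit after fix: %d\n",
	       new_role.word == stored_fixed.word);	/* 1 */
	return 0;
}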

-- 
Vitaly
