Message-ID: <4C3DB671.1090802@redhat.com>
Date: Wed, 14 Jul 2010 16:06:57 +0300
From: Avi Kivity <avi@...hat.com>
To: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
CC: LKML <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
Marcelo Tosatti <mtosatti@...hat.com>
Subject: Re: [PATCH 1/4] KVM: MMU: fix forgotten reserved bits check in speculative path
On 07/13/2010 12:42 PM, Xiao Guangrong wrote:
> In the speculative path, we should check the guest pte's reserved bits just as
> the real processor does.
>
> Reported-by: Marcelo Tosatti <mtosatti@...hat.com>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
> ---
>  arch/x86/kvm/mmu.c         |    8 ++++++++
>  arch/x86/kvm/paging_tmpl.h |    5 +++--
> 2 files changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index b93b94f..9fc1524 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2783,6 +2783,9 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>                 break;
>         }
>
> +       if (is_rsvd_bits_set(vcpu, gentry, PT_PAGE_TABLE_LEVEL))
> +               gentry = 0;
> +
>         mmu_guess_page_from_pte_write(vcpu, gpa, gentry);
>         spin_lock(&vcpu->kvm->mmu_lock);
>         if (atomic_read(&vcpu->kvm->arch.invlpg_counter) != invlpg_counter)
> @@ -2851,6 +2854,11 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>                 while (npte--) {
>                         entry = *spte;
>                         mmu_pte_write_zap_pte(vcpu, sp, spte);
> +
> +                       if (!!is_pae(vcpu) != sp->role.cr4_pae ||
> +                           is_nx(vcpu) != sp->role.nxe)
> +                               continue;
> +
>
Do we also need to check cr0.wp? I think so.
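Something along these lines, maybe (sketch only; I'm assuming
is_write_protection() and role.cr0_wp are the right helper/bit pair here,
and note that is_nx() returns the raw EFER bit, so it wants a !! as well):

        /* skip ptes whose page was shadowed under a different paging mode */
        if (!!is_pae(vcpu) != sp->role.cr4_pae ||
            !!is_nx(vcpu) != sp->role.nxe ||
            !!is_write_protection(vcpu) != sp->role.cr0_wp)
                continue;
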
>                         if (gentry)
>                                 mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
>
Please move the checks to mmu_pte_write_new_pte(); it's a more logical
place. It means the reserved bits check happens multiple times, but
that's OK.
Also, you can use arch.mmu.base_role to compare:

        static const union kvm_mmu_page_role mask = { .level = -1U,
                .cr4_pae = 1, ... };

        if ((sp->role.word ^ base_role.word) & mask.word)
                return;
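
Spelled out, something like this (a sketch; exactly which role bits belong
in the mask is the question - I'd guess nxe and cr0_wp in addition to
level and cr4_pae):

        static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
                                          struct kvm_mmu_page *sp, u64 *spte,
                                          const void *new)
        {
                /* role bits that change how a guest pte is interpreted */
                static const union kvm_mmu_page_role mask = {
                        .level = -1U, .cr4_pae = 1, .nxe = 1, .cr0_wp = 1,
                };

                /* ignore the write if sp was shadowed under a different role */
                if ((sp->role.word ^ vcpu->arch.mmu.base_role.word) & mask.word)
                        return;

                /* ... existing body unchanged ... */
        }

That way one comparison covers cr4.pae, efer.nxe and cr0.wp at once, and a
future role bit only needs a mask update rather than another check.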
> @@ -640,8 +640,9 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>                         return -EINVAL;
>
>                 gfn = gpte_to_gfn(gpte);
> -               if (gfn != sp->gfns[i] ||
> -                   !is_present_gpte(gpte) || !(gpte & PT_ACCESSED_MASK)) {
> +               if (is_rsvd_bits_set(vcpu, gpte, PT_PAGE_TABLE_LEVEL) ||
> +                   gfn != sp->gfns[i] || !is_present_gpte(gpte) ||
> +                   !(gpte & PT_ACCESSED_MASK)) {
>                         u64 nonpresent;
>
>                         if (is_present_gpte(gpte) || !clear_unsync)
>
Eventually we have to reduce the number of paths. But let's fix things
first.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.