Message-ID: <4CEA436D.8050202@cn.fujitsu.com>
Date: Mon, 22 Nov 2010 18:18:21 +0800
From: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To: Avi Kivity <avi@...hat.com>
CC: Marcelo Tosatti <mtosatti@...hat.com>, KVM <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 5/6] KVM: MMU: abstract invalid guest pte mapping
On 11/22/2010 05:28 PM, Avi Kivity wrote:
>> +static bool FNAME(map_invalid_gpte)(struct kvm_vcpu *vcpu,
>> + struct kvm_mmu_page *sp, u64 *spte,
>> + pt_element_t gpte)
>
> It's really only for speculative maps, the name should reflect that.
>
OK, I'll use speculative_map_invalid_gpte or speculative_map_gpte
instead.
> Why restrict to invalid gptes? Won't it work for valid gptes as well?
> Maybe you'll need an extra code path for update_pte() which already
> knows the pfn.
>
Um, I did it that way in the previous version, but it needs a callback to
get the pfn, since getting the pfn is quite different on the update_pte /
prefetch_pte / sync_page paths, and the code seemed more complicated.
Maybe we can get the pfn first and call FNAME(map_valid_gpte) later, but
that would add a little overhead on the prefetch_pte path.
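To make the trade-off concrete, here is a toy sketch of the two shapes
being discussed. This is not the real KVM code; all type and function
names here (pfn_t, get_pfn_fn, map_gpte_cb, map_gpte_pfn) are
hypothetical stand-ins.

```c
#include <stdint.h>

typedef uint64_t pfn_t;
typedef uint64_t gpte_t;

/* Option A: pass a callback, since update_pte / prefetch_pte /
 * sync_page each resolve the pfn in a different way. */
typedef pfn_t (*get_pfn_fn)(gpte_t gpte, void *ctx);

static pfn_t map_gpte_cb(gpte_t gpte, get_pfn_fn get_pfn, void *ctx)
{
	/* ... gpte validity checks would go here ... */
	return get_pfn(gpte, ctx);
}

/* Option B: the caller resolves the pfn up front and passes it in;
 * the signature is simpler, but prefetch_pte would pay the lookup
 * cost even when the mapping ends up being skipped. */
static pfn_t map_gpte_pfn(gpte_t gpte, pfn_t pfn)
{
	(void)gpte; /* validity checks elided in this sketch */
	return pfn;
}

/* Trivial resolver for demonstration: pfn = gpte >> 12. */
static pfn_t demo_get_pfn(gpte_t gpte, void *ctx)
{
	(void)ctx;
	return gpte >> 12;
}
```

Both shapes map the same gpte; the question is only where the pfn
lookup happens and whether it can be skipped on the fast paths.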
>> +{
>> + u64 nonpresent = shadow_trap_nonpresent_pte;
>> +
>> + if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
>> + goto no_present;
>> +
>> + if (!is_present_gpte(gpte)) {
>> + if (!sp->unsync)
>> + nonpresent = shadow_notrap_nonpresent_pte;
>> + goto no_present;
>> + }
>
> I think the order is reversed. If !is_present_gpte(), it doesn't matter
> if reserved bits are set or not.
>
If !is_present_gpte() && is_rsvd_bits_set(), then we may mark the spte
notrap, so the guest would detect the #PF with PFEC.P=PFEC.RSVD=0, but
isn't the appropriate PFEC PFEC.P=0 && PFEC.RSVD=1?
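The ordering question above can be sketched with a toy model. This is
not the patch itself: GPTE_PRESENT/GPTE_RSVD are invented bit layouts
and may_be_notrap() is a hypothetical condensation of the quoted hunk,
keeping the reserved-bits check first so a gpte with both !present and
reserved bits set stays "trap" and the guest re-executes the access to
receive the PFEC from hardware.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical gpte bits for illustration only. */
#define GPTE_PRESENT	(UINT64_C(1) << 0)
#define GPTE_RSVD	(UINT64_C(1) << 51)	/* pretend bit 51 is reserved */

static bool is_present(uint64_t gpte) { return gpte & GPTE_PRESENT; }
static bool rsvd_set(uint64_t gpte)   { return gpte & GPTE_RSVD; }

/* Returns true when the spte may be made notrap (i.e. the guest can
 * safely take the #PF without vmexit, because PFEC.P=0, PFEC.RSVD=0
 * is the architecturally correct error code for it). */
static bool may_be_notrap(uint64_t gpte, bool sp_unsync)
{
	if (rsvd_set(gpte))
		return false;		/* must trap: hardware should report RSVD */

	if (!is_present(gpte))
		return !sp_unsync;	/* safe: guest sees P=0, RSVD=0 */

	return false;			/* present gpte: handled elsewhere */
}
```

With the checks reversed, a gpte that is both not-present and has
reserved bits set would become notrap, which is exactly the case the
question above is about.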