Date:	Wed, 08 Jun 2011 16:58:51 +0800
From:	Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To:	Alexander Graf <agraf@...e.de>
CC:	Avi Kivity <avi@...hat.com>, Marcelo Tosatti <mtosatti@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 04/15] KVM: MMU: cache mmio info on page fault path

On 06/08/2011 04:22 PM, Alexander Graf wrote:

>> +static int vcpu_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
>> +			   gpa_t *gpa, struct x86_exception *exception,
>> +			   bool write)
>> +{
>> +	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
>> +
>> +	if (vcpu_match_mmio_gva(vcpu, gva) &&
>> +	      check_write_user_access(vcpu, write, access,
>> +	      vcpu->arch.access)) {
>> +		*gpa = vcpu->arch.mmio_gfn << PAGE_SHIFT |
>> +					(gva & (PAGE_SIZE - 1));
>> +		return 1;
> 

Hi Alexander,

Thanks for your review!

> Hrm. Let me try to understand what you're doing.
> 
> Whenever a guest issues an MMIO, it triggers an #NPF or #PF and then we walk either the NPT or the guest PT to resolve the GPA to the fault and send off an MMIO.
> Within that path, you remember the GVA->GPA mapping for the last MMIO request. If the next MMIO request is on the same GVA and kernel/user permissions still apply, you simply bypass the resolution. So far so good.
> 

This patch also introduces vcpu_clear_mmio_info(), which clears the mmio cache info on the vcpu;
it is called whenever the guest flushes its TLB (CR3 reload or INVLPG).
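For reference, roughly what that helper looks like (paraphrased from memory, so take it as a
sketch rather than the exact hunk from the patch):

static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
{
	/*
	 * gva == ~0ul means "clear unconditionally" (full TLB flush);
	 * otherwise only drop the cached entry if it covers this page
	 * (the INVLPG case).
	 */
	if (gva != ~0ul && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
		return;

	vcpu->arch.mmio_gva = 0;
}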

> Now, what happens when the GVA is not identical to the GVA it was before? It's probably a purely theoretic case, but imagine the following:
> 
>   1) guest issues MMIO on GVA 0x1000 (GPA 0x1000)
>   2) guest remaps page 0x1000 to GPA 0x2000
>   3) guest issues MMIO on GVA 0x1000
> 

If the guest modifies its page tables, then per the x86 TLB rules it must flush the TLB to ensure
the CPU uses the new mapping.

So when the guest remaps GVA 0x1000 to GPA 0x2000, it has to flush the TLB; that flush clears the
mmio cache info, and the later access is handled correctly.
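In other words, step 2) in your example cannot happen without a TLB flush in between. A hypothetical
sketch of the INVLPG side (the exact call site may differ from what the patch does):

static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
{
	/*
	 * Drop the cached mmio gva->gpa translation for this page before
	 * invalidating the shadow/TLB state, so the next access to GVA
	 * 0x1000 goes through a fresh walk and picks up the new GPA
	 * (0x2000 in your example).
	 */
	vcpu_clear_mmio_info(vcpu, gva);

	/* ... existing invlpg handling ... */
}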

> That would break with your current implementation, right? It sounds pretty theoretic, but imagine the following:
> 
>   1) guest user space 1 maps MMIO region A to 0x1000
>   2) guest user space 2 maps MMIO region B to 0x1000
>   3) guest user space 1 issues MMIO on 0x1000
>   4) context switch; going to user space 2
>   5) user space 2 issues MMIO on 0x1000
> 

Also, on a context switch CR3 is reloaded, so the mmio cache info is cleared there as well, right? :-)
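
I.e. the CR3-reload path ends up doing a full flush, something along these lines (the intermediate
function names are approximate, just to illustrate the chain):

	kvm_set_cr3(vcpu, cr3);                   /* guest reloads CR3 on context switch */
	  -> kvm_mmu_flush_tlb(vcpu);             /* full TLB flush on the KVM side      */
	    -> vcpu_clear_mmio_info(vcpu, ~0ul);  /* whole mmio cache is dropped         */

so user space 2's access at step 5) cannot hit user space 1's cached translation.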
