Date:	Thu, 29 Jul 2010 10:19:49 +0800
From:	Lai Jiangshan <laijs@...fujitsu.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
CC:	Gleb Natapov <gleb@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
	Avi Kivity <avi@...hat.com>, Nick Piggin <npiggin@...e.de>
Subject: Re: [PATCH 5/6] kvm, x86: use ro page and don't copy shared page

On 07/17/2010 07:26 AM, Marcelo Tosatti wrote:
> On Fri, Jul 16, 2010 at 10:19:36AM +0300, Gleb Natapov wrote:
>> On Fri, Jul 16, 2010 at 10:13:07AM +0800, Lai Jiangshan wrote:
>>> On a page fault, we always call get_user_pages(write=1).
>>>
>>> Actually, we do not need to do this when it is not a write fault:
>>> get_user_pages(write=1) causes a shared (KSM) page to be copied.
>>> If the page is never modified afterwards, the copy operation and the
>>> copied page are simply wasted. KSM may then scan and re-merge them,
>>> which can cause thrashing.
>>>
>> But if the page is written to afterwards, we will get another page fault.
>>
>>> In this patch, if the page is read-only in the host VMM and the fault is
>>> not a write fault in the guest, we use a read-only page; otherwise we use
>>> a writable page.
>>>
>> Currently, pages allocated for guest memory are required to be RW, so after
>> your series the behaviour will remain exactly the same as before.
> 
> Except KSM pages.
> 
>>> Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
>>> ---
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 8ba9b0d..6382140 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -1832,6 +1832,45 @@ static void kvm_unsync_pages(struct kvm_vcpu *vcpu,  gfn_t gfn)
>>>  	}
>>>  }
>>>  
>>> +/* get a current mapped page fast, and test whether the page is writable. */
>>> +static struct page *get_user_page_and_protection(unsigned long addr,
>>> +	int *writable)
>>> +{
>>> +	struct page *page[1];
>>> +
>>> +	if (__get_user_pages_fast(addr, 1, 1, page) == 1) {
>>> +		*writable = 1;
>>> +		return page[0];
>>> +	}
>>> +	if (__get_user_pages_fast(addr, 1, 0, page) == 1) {
>>> +		*writable = 0;
>>> +		return page[0];
>>> +	}
>>> +	return NULL;
>>> +}
>>> +
>>> +static pfn_t kvm_get_pfn_for_page_fault(struct kvm *kvm, gfn_t gfn,
>>> +		int write_fault, int *host_writable)
>>> +{
>>> +	unsigned long addr;
>>> +	struct page *page;
>>> +
>>> +	if (!write_fault) {
>>> +		addr = gfn_to_hva(kvm, gfn);
>>> +		if (kvm_is_error_hva(addr)) {
>>> +			get_page(bad_page);
>>> +			return page_to_pfn(bad_page);
>>> +		}
>>> +
>>> +		page = get_user_page_and_protection(addr, host_writable);
>>> +		if (page)
>>> +			return page_to_pfn(page);
>>> +	}
>>> +
>>> +	*host_writable = 1;
>>> +	return kvm_get_pfn_for_gfn(kvm, gfn);
>>> +}
>>> +
>> kvm_get_pfn_for_gfn() returns fault_page if page is mapped RO, so caller
>> of kvm_get_pfn_for_page_fault() and kvm_get_pfn_for_gfn() will get
>> different results when called on the same page. Not good.
>> kvm_get_pfn_for_page_fault() logic should be folded into
>> kvm_get_pfn_for_gfn().
> 
> Agreed. Please keep gfn_to_pfn related code in virt/kvm/kvm_main.c.
> 
> 

Pass a write_fault parameter to kvm_get_pfn_for_gfn()?
But only x86 uses this parameter currently, so I think it is OK to
keep this code in arch/x86/kvm/mmu.c.
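
For what it's worth, folding the two could look roughly like this. This is a
non-compilable sketch (effectively pseudocode) based only on the quoted patch,
not the final API: the merged signature and the hva_to_pfn() slow-path helper
are assumptions for illustration.

```c
/*
 * Sketch: kvm_get_pfn_for_gfn() with the page-fault logic folded in,
 * as the reviewers suggest. Helper names and signatures are assumed.
 */
static pfn_t kvm_get_pfn_for_gfn(struct kvm *kvm, gfn_t gfn,
				 bool write_fault, int *host_writable)
{
	unsigned long addr = gfn_to_hva(kvm, gfn);
	struct page *page[1];

	if (kvm_is_error_hva(addr)) {
		get_page(bad_page);
		return page_to_pfn(bad_page);
	}

	/* Fast path: try a writable mapping first. */
	if (__get_user_pages_fast(addr, 1, 1, page) == 1) {
		*host_writable = 1;
		return page_to_pfn(page[0]);
	}

	/*
	 * For a read fault, a read-only mapping (e.g. a shared KSM page)
	 * is sufficient; this avoids breaking COW on the shared page.
	 */
	if (!write_fault && __get_user_pages_fast(addr, 1, 0, page) == 1) {
		*host_writable = 0;
		return page_to_pfn(page[0]);
	}

	/* Slow path: fall back to the existing write=1 lookup. */
	*host_writable = 1;
	return hva_to_pfn(kvm, addr);	/* assumed existing slow-path helper */
}
```

With this shape, callers that pass write_fault=0 and callers of the old
interface see consistent results for the same page, which addresses the
objection above.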
