Date:   Thu, 12 Dec 2019 14:33:36 +0200
From:   Liran Alon <liran.alon@...cle.com>
To:     Barret Rhoden <brho@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Dan Williams <dan.j.williams@...el.com>,
        David Hildenbrand <david@...hat.com>,
        Dave Jiang <dave.jiang@...el.com>,
        Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
        linux-nvdimm@...ts.01.org, x86@...nel.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, jason.zeng@...el.com
Subject: Re: [PATCH v4 2/2] kvm: Use huge pages for DAX-backed files



> On 11 Dec 2019, at 23:32, Barret Rhoden <brho@...gle.com> wrote:
> 
> This change allows KVM to map DAX-backed files made of huge pages with
> huge mappings in the EPT/TDP.
> 
> DAX pages are not PageTransCompound.  The existing check is trying to
> determine if the mapping for the pfn is a huge mapping or not.  For
> non-DAX maps, e.g. hugetlbfs, that means checking PageTransCompound.
> For DAX, we can check the page table itself.

For hugetlbfs pages, tdp_page_fault() -> mapping_level() -> host_mapping_level() -> kvm_host_page_size() -> vma_kernel_pagesize()
will return the page-size of the hugetlbfs mapping without any need to parse the page-tables.
See the vma->vm_ops->pagesize() callback implementation at hugetlb_vm_ops->pagesize() == hugetlb_vm_op_pagesize().
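
For reference, the dispatch looks roughly like this (paraphrased from mm/hugetlb.c around v5.4; see the actual tree for the exact code):

	/* Generic helper: ask the vma for its page-size via the
	 * vm_ops->pagesize() callback; no page-table walk involved. */
	unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
	{
		if (vma->vm_ops && vma->vm_ops->pagesize)
			return vma->vm_ops->pagesize(vma);
		return PAGE_SIZE;
	}

	/* hugetlbfs: report the hstate size, e.g. 2MB or 1GB. */
	static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
	{
		return huge_page_size(hstate_vma(vma));
	}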

Only for pages that were originally mapped as small-pages and later merged into larger pages by THP is there a need to check PageTransCompound(), again instead of parsing the page-tables.

Therefore, it seems more logical to me that:
(a) If DAX-backed files are mapped as large-pages to userspace, that should be reflected in vma->vm_ops->pagesize() of that mapping, causing kvm_host_page_size() to return the right size without the need to parse the page-tables. (See the hypothetical sketch below.)
(b) If small-pages of DAX-backed files can later be merged into large-pages by THP, then the “struct page” of these pages should be modified as usual so that PageTransCompound() returns true for them. I’m not highly familiar with this mechanism, but I would expect THP to be able to merge DAX-backed files’ small-pages into large-pages in case DAX provides a “struct page” for the DAX pages.
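
As a minimal, hypothetical sketch of (a) for a device-dax style mapping (dax_vm_pagesize and reusing the region alignment are assumptions for illustration, not existing code):

	/* Hypothetical: report the DAX mapping’s fault granularity via
	 * vm_ops->pagesize(), so vma_kernel_pagesize() and therefore
	 * kvm_host_page_size() see it without a page-table walk. */
	static unsigned long dax_vm_pagesize(struct vm_area_struct *vma)
	{
		struct dev_dax *dev_dax = vma->vm_file->private_data;

		return dev_dax->region->align;	/* e.g. PMD_SIZE */
	}

	static const struct vm_operations_struct dax_vm_ops = {
		.fault		= dev_dax_fault,
		.pagesize	= dax_vm_pagesize,
	};

Note that device-dax already wires up a similar .pagesize() callback (dev_dax_pagesize() in drivers/dax/device.c); for fs-dax files the mapping size can differ per-fault, which may be why no such callback exists there.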

> 
> Note that KVM already faulted in the page (or huge page) in the host's
> page table, and we hold the KVM mmu spinlock.  We grabbed that lock in
> kvm_mmu_notifier_invalidate_range_end, before checking the mmu seq.
> 
> Signed-off-by: Barret Rhoden <brho@...gle.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 36 ++++++++++++++++++++++++++++++++----
> 1 file changed, 32 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f92b40d798c..cd07bc4e595f 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3384,6 +3384,35 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
> 	return -EFAULT;
> }
> 
> +static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
> +{
> +	struct page *page = pfn_to_page(pfn);
> +	unsigned long hva;
> +
> +	if (!is_zone_device_page(page))
> +		return PageTransCompoundMap(page);
> +
> +	/*
> +	 * DAX pages do not use compound pages.  The page should have already
> +	 * been mapped into the host-side page table during try_async_pf(), so
> +	 * we can check the page tables directly.
> +	 */
> +	hva = gfn_to_hva(kvm, gfn);
> +	if (kvm_is_error_hva(hva))
> +		return false;
> +
> +	/*
> +	 * Our caller grabbed the KVM mmu_lock with a successful
> +	 * mmu_notifier_retry, so we're safe to walk the page table.
> +	 */
> +	switch (dev_pagemap_mapping_shift(hva, current->mm)) {

Doesn’t dev_pagemap_mapping_shift() take “struct page” as its first parameter?
Was this changed by a commit I missed?
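
(For reference, the signature in mm/memory-failure.c, approximately as of v5.4, is:

	static unsigned int dev_pagemap_mapping_shift(struct page *page,
						      struct vm_area_struct *vma);

so calling it as dev_pagemap_mapping_shift(hva, current->mm) would only build if an earlier patch reworked it to take an address and an mm.)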

-Liran

> +	case PMD_SHIFT:
> +	case PUD_SHIFT:
> +		return true;
> +	}
> +	return false;
> +}
> +
> static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
> 					gfn_t gfn, kvm_pfn_t *pfnp,
> 					int *levelp)
> @@ -3398,8 +3427,8 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
> 	 * here.
> 	 */
> 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
> -	    !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
> -	    PageTransCompoundMap(pfn_to_page(pfn)) &&
> +	    level == PT_PAGE_TABLE_LEVEL &&
> +	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn) &&
> 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
> 		unsigned long mask;
> 		/*
> @@ -6015,8 +6044,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
> 		 * mapping if the indirect sp has level = 1.
> 		 */
> 		if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
> -		    !kvm_is_zone_device_pfn(pfn) &&
> -		    PageTransCompoundMap(pfn_to_page(pfn))) {
> +		    pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
> 			pte_list_remove(rmap_head, sptep);
> 
> 			if (kvm_available_flush_tlb_with_range())
> -- 
> 2.24.0.525.g8f36a354ae-goog
> 
