Message-ID: <20191213175031.GC31552@linux.intel.com>
Date:   Fri, 13 Dec 2019 09:50:31 -0800
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Liran Alon <liran.alon@...cle.com>
Cc:     Barret Rhoden <brho@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Dan Williams <dan.j.williams@...el.com>,
        David Hildenbrand <david@...hat.com>,
        Dave Jiang <dave.jiang@...el.com>,
        Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
        linux-nvdimm@...ts.01.org, x86@...nel.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, jason.zeng@...el.com
Subject: Re: [PATCH v5 2/2] kvm: Use huge pages for DAX-backed files

On Fri, Dec 13, 2019 at 07:31:55PM +0200, Liran Alon wrote:
> 
> > On 13 Dec 2019, at 19:19, Sean Christopherson <sean.j.christopherson@...el.com> wrote:
> > 
> > Then allowed_hugepage_adjust() would look something like:
> > 
> > static void allowed_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
> > 				    kvm_pfn_t *pfnp, int *levelp, int max_level)
> > {
> > 	kvm_pfn_t pfn = *pfnp;
> > 	int level = *levelp;	
> > 	unsigned long mask;
> > 
> > 	if (is_error_noslot_pfn(pfn) || !kvm_is_reserved_pfn(pfn) ||
> > 	    level == PT_PAGE_TABLE_LEVEL)
> > 		return;
> > 
> > 	/*
> > 	 * mmu_notifier_retry() was successful and mmu_lock is held, so
> > 	 * the pmd/pud can't be split from under us.
> > 	 */
> > 	level = host_pfn_mapping_level(vcpu->kvm, gfn, pfn);
> > 
> > 	*levelp = level = min(level, max_level);
> > 	mask = KVM_PAGES_PER_HPAGE(level) - 1;
> > 	VM_BUG_ON((gfn & mask) != (pfn & mask));
> > 	*pfnp = pfn & ~mask;
> 
> Why don’t you still need to kvm_release_pfn_clean() the original pfn and
> kvm_get_pfn() the new huge-page start pfn?

That code is gone in kvm/queue.  thp_adjust() is now called from
__direct_map() and FNAME(fetch), and so its pfn adjustment doesn't bleed
back to the page fault handlers.  The only reason the put/get pfn code
existed was that the page fault handlers called kvm_release_pfn_clean()
on the pfn, i.e. they would have put the wrong pfn.
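
FWIW, the alignment math above boils down to something like the toy
program below.  It's a userspace sketch, not kernel code: PAGES_PER_2M
stands in for KVM_PAGES_PER_HPAGE() at the 2M level, and the sample
gfn/pfn values are made up.

	#include <assert.h>
	#include <stdio.h>

	typedef unsigned long pfn_t;
	typedef unsigned long gfn_t;

	/* Stand-in for KVM_PAGES_PER_HPAGE() at the 2M level: 512 4K pages. */
	#define PAGES_PER_2M	512UL

	int main(void)
	{
		gfn_t gfn = 0x12345;	/* made-up guest frame number */
		pfn_t pfn = 0xabd45;	/* made-up host frame number */
		unsigned long mask = PAGES_PER_2M - 1;

		/*
		 * Moral equivalent of the VM_BUG_ON(): a 2M mapping is only
		 * legal if gfn and pfn sit at the same offset within the
		 * huge page.
		 */
		assert((gfn & mask) == (pfn & mask));

		/* Snap the pfn back to the first 4K frame of the huge page. */
		pfn_t base = pfn & ~mask;
		printf("pfn 0x%lx -> huge page base pfn 0x%lx\n", pfn, base);

		/*
		 * The old hazard: the page fault handler took a reference on
		 * 'pfn' but, with the adjustment visible to it, would have
		 * released 'base' instead, i.e. put the wrong page.  Keeping
		 * the adjustment local to __direct_map()/FNAME(fetch) means
		 * the handler releases the pfn it originally got.
		 */
		return 0;
	}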
