Message-ID: <Yrtar+i2X0YjmD/F@xz-m1.local>
Date:   Tue, 28 Jun 2022 15:46:55 -0400
From:   Peter Xu <peterx@...hat.com>
To:     John Hubbard <jhubbard@...dia.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Linux MM Mailing List <linux-mm@...ck.org>,
        Sean Christopherson <seanjc@...gle.com>
Subject: Re: [PATCH 2/4] kvm: Merge "atomic" and "write" in
 __gfn_to_pfn_memslot()

On Mon, Jun 27, 2022 at 07:17:09PM -0700, John Hubbard wrote:
> On 6/22/22 14:36, Peter Xu wrote:
> > Merge the two boolean parameters into a bitmask flag called kvm_gtp_flag_t
> > for __gfn_to_pfn_memslot().  This cleans up the parameter list, and also
> > prepares for a new boolean to be added to __gfn_to_pfn_memslot().
> > 
> > Signed-off-by: Peter Xu <peterx@...hat.com>
> > ---
> >   arch/arm64/kvm/mmu.c                   |  5 ++--
> >   arch/powerpc/kvm/book3s_64_mmu_hv.c    |  5 ++--
> >   arch/powerpc/kvm/book3s_64_mmu_radix.c |  5 ++--
> >   arch/x86/kvm/mmu/mmu.c                 | 10 +++----
> >   include/linux/kvm_host.h               |  9 ++++++-
> >   virt/kvm/kvm_main.c                    | 37 +++++++++++++++-----------
> >   virt/kvm/kvm_mm.h                      |  6 +++--
> >   virt/kvm/pfncache.c                    |  2 +-
> >   8 files changed, 49 insertions(+), 30 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index f5651a05b6a8..ce1edb512b4e 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1204,8 +1204,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >   	 */
> >   	smp_rmb();
> > -	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
> > -				   write_fault, &writable, NULL);
> > +	pfn = __gfn_to_pfn_memslot(memslot, gfn,
> > +				   write_fault ? KVM_GTP_WRITE : 0,
> > +				   NULL, &writable, NULL);
> >   	if (pfn == KVM_PFN_ERR_HWPOISON) {
> >   		kvm_send_hwpoison_signal(hva, vma_shift);
> >   		return 0;
> > diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> > index 514fd45c1994..e2769d58dd87 100644
> > --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> > +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> > @@ -598,8 +598,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
> >   		write_ok = true;
> >   	} else {
> >   		/* Call KVM generic code to do the slow-path check */
> > -		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
> > -					   writing, &write_ok, NULL);
> > +		pfn = __gfn_to_pfn_memslot(memslot, gfn,
> > +					   writing ? KVM_GTP_WRITE : 0,
> > +					   NULL, &write_ok, NULL);
> >   		if (is_error_noslot_pfn(pfn))
> >   			return -EFAULT;
> >   		page = NULL;
> > diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > index 42851c32ff3b..232b17c75b83 100644
> > --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > @@ -845,8 +845,9 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
> >   		unsigned long pfn;
> >   		/* Call KVM generic code to do the slow-path check */
> > -		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
> > -					   writing, upgrade_p, NULL);
> > +		pfn = __gfn_to_pfn_memslot(memslot, gfn,
> > +					   writing ? KVM_GTP_WRITE : 0,
> > +					   NULL, upgrade_p, NULL);
> >   		if (is_error_noslot_pfn(pfn))
> >   			return -EFAULT;
> >   		page = NULL;
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index f4653688fa6d..e92f1ab63d6a 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -3968,6 +3968,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
> >   static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >   {
> > +	kvm_gtp_flag_t flags = fault->write ? KVM_GTP_WRITE : 0;
> >   	struct kvm_memory_slot *slot = fault->slot;
> >   	bool async;
> > @@ -3999,8 +4000,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >   	}
> >   	async = false;
> > -	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
> > -					  fault->write, &fault->map_writable,
> > +	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags,
> > +					  &async, &fault->map_writable,
> >   					  &fault->hva);
> >   	if (!async)
> >   		return RET_PF_CONTINUE; /* *pfn has correct page already */
> > @@ -4016,9 +4017,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >   		}
> >   	}
> > -	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, NULL,
> > -					  fault->write, &fault->map_writable,
> > -					  &fault->hva);
> > +	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags, NULL,
> > +					  &fault->map_writable, &fault->hva);
> 
> The flags arg does improve the situation, yes.

Thanks for supporting the flag's existence. :)

I'd say ultimately it could be a matter of personal preference once the
conversion to a struct argument comes.

> 
> >   	return RET_PF_CONTINUE;
> >   }
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index c20f2d55840c..b646b6fcaec6 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1146,8 +1146,15 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
> >   		      bool *writable);
> >   kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
> >   kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn);
> > +
> > +/* gfn_to_pfn (gtp) flags */
> > +typedef unsigned int __bitwise kvm_gtp_flag_t;
> 
> A minor naming problem: GTP and especially gtp_flags is way too close
> to gfp_flags. It will make people either wonder if it's a typo, or
> worse, *assume* that it's a typo. :)

I'd try to argue with "I prefixed it with kvm_", but oh well.. yes, they're
a bit close :)
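(For readers less familiar with the idiom: the quoted typedef uses the sparse `__bitwise` annotation, so the checker rejects silently mixing kvm_gtp_flag_t with plain integers, just as it does for gfp_t. Below is a minimal user-space sketch of that pattern; KVM_GTP_WRITE is from the patch, KVM_GTP_ATOMIC is assumed from the subject line, and the fallback macros mirror what include/linux/types.h does when sparse is not running.)

```c
/* User-space sketch of the kernel's __bitwise flag-type pattern.
 * KVM_GTP_ATOMIC is an assumption based on the series subject; the
 * __CHECKER__ fallbacks mirror include/linux/types.h. */
#include <assert.h>

#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

#define BIT(n)		(1U << (n))

/* gfn_to_pfn (gtp) flags, as in the quoted hunk */
typedef unsigned int __bitwise kvm_gtp_flag_t;

#define KVM_GTP_WRITE	((__force kvm_gtp_flag_t)BIT(0))
#define KVM_GTP_ATOMIC	((__force kvm_gtp_flag_t)BIT(1))

/* Under sparse, passing a bare int here would be flagged. */
int gtp_is_write(kvm_gtp_flag_t flags)
{
	return !!(flags & KVM_GTP_WRITE);
}
```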

> 
> Yes, "read the code", but if you can come up with a better TLA than GTP
> here, let's consider using it.

Could I ask what TLA means?  Any suggestions for the abbreviation, btw?

> 
> Overall, the change looks like an improvement, even though
> 
>     write_fault ? KVM_GTP_WRITE : 0
> 
> is not wonderful. But improving *that* leads to a big pile of diffs
> that are rather beyond the scope here.
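(The `write_fault ? KVM_GTP_WRITE : 0` conversion John is referring to can be sketched in isolation as below. This is illustrative user-space code, not the in-tree implementation: the helper name is hypothetical, KVM_GTP_ATOMIC is assumed from the subject line, and the patch itself open-codes the ternary at each call site.)

```c
/* Illustrative sketch of folding the old boolean parameters into the
 * new bitmask argument.  gtp_flags() is a hypothetical helper; the
 * quoted patch open-codes "write_fault ? KVM_GTP_WRITE : 0" instead. */
#include <assert.h>
#include <stdbool.h>

typedef unsigned int kvm_gtp_flag_t;

#define KVM_GTP_WRITE	(1U << 0)
#define KVM_GTP_ATOMIC	(1U << 1)	/* assumed from the series subject */

kvm_gtp_flag_t gtp_flags(bool write_fault, bool atomic)
{
	kvm_gtp_flag_t flags = 0;

	if (write_fault)
		flags |= KVM_GTP_WRITE;
	if (atomic)
		flags |= KVM_GTP_ATOMIC;
	return flags;
}
```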

Thanks,

-- 
Peter Xu
