Message-ID: <ZeVRlzYC7huTFddO@yilunxu-OptiPlex-7050>
Date: Mon, 4 Mar 2024 12:44:07 +0800
From: Xu Yilun <yilun.xu@...ux.intel.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, seanjc@...gle.com,
	michael.roth@....com, isaku.yamahata@...el.com,
	thomas.lendacky@....com
Subject: Re: [PATCH 19/21] KVM: guest_memfd: add API to undo
 kvm_gmem_get_uninit_pfn

On Tue, Feb 27, 2024 at 06:20:58PM -0500, Paolo Bonzini wrote:
> In order to be able to redo kvm_gmem_get_uninit_pfn, a hole must be punched
> into the filemap, thus allowing FGP_CREAT_ONLY to succeed again.  This will
> be used whenever an operation that follows kvm_gmem_get_uninit_pfn fails.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
>  include/linux/kvm_host.h |  7 +++++++
>  virt/kvm/guest_memfd.c   | 28 ++++++++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 03bf616b7308..192c58116220 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2436,6 +2436,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>  		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
>  int kvm_gmem_get_uninit_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>  		            gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
> +int kvm_gmem_undo_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> +			  gfn_t gfn, int order);
>  #else
>  static inline int kvm_gmem_get_pfn(struct kvm *kvm,
>  				   struct kvm_memory_slot *slot, gfn_t gfn,
> @@ -2452,6 +2454,11 @@ static inline int kvm_gmem_get_uninit_pfn(struct kvm *kvm,
>  	KVM_BUG_ON(1, kvm);
>  	return -EIO;
>  }
> +
> +static inline int kvm_gmem_undo_get_pfn(struct kvm *kvm,
> +				        struct kvm_memory_slot *slot, gfn_t gfn,
> +				        int order)
> +{}

This needs a

	return -EIO;

or the compiler will complain about the missing return value.
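
Perhaps, matching the other !CONFIG_KVM_PRIVATE_MEM stubs above,
something like (untested):

	static inline int kvm_gmem_undo_get_pfn(struct kvm *kvm,
						struct kvm_memory_slot *slot,
						gfn_t gfn, int order)
	{
		KVM_BUG_ON(1, kvm);
		return -EIO;
	}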

>  #endif /* CONFIG_KVM_PRIVATE_MEM */
>  
>  #ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 7ec7afafc960..535ef1aa34fb 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -590,3 +590,31 @@ int kvm_gmem_get_uninit_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	return __kvm_gmem_get_pfn(kvm, slot, gfn, pfn, max_order, false);
>  }
>  EXPORT_SYMBOL_GPL(kvm_gmem_get_uninit_pfn);
> +
> +int kvm_gmem_undo_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> +		          gfn_t gfn, int order)

I haven't seen the caller yet, but do we need to ensure the gfn is
aligned to the page order? e.g.

	WARN_ON(gfn & ((1UL << order) - 1));
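
Or, if a misaligned gfn should fail the call rather than only warn,
perhaps something like (just a sketch):

	if (WARN_ON_ONCE(gfn & ((1UL << order) - 1)))
		return -EINVAL;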

> +{
> +	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
> +	struct kvm_gmem *gmem;
> +	struct file *file;
> +	int r;
> +
> +	file = kvm_gmem_get_file(slot);
> +	if (!file)
> +		return -EFAULT;
> +
> +	gmem = file->private_data;
> +
> +	if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
> +		r = -EIO;
> +		goto out_fput;
> +	}
> +
> +	r = kvm_gmem_punch_hole(file_inode(file), index << PAGE_SHIFT, PAGE_SHIFT << order);
The length argument should be PAGE_SIZE << order, not PAGE_SHIFT << order.
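
i.e. something like:

	r = kvm_gmem_punch_hole(file_inode(file), index << PAGE_SHIFT,
				PAGE_SIZE << order);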

Thanks,
Yilun

> +
> +out_fput:
> +	fput(file);
> +
> +	return r;
> +}
> +EXPORT_SYMBOL_GPL(kvm_gmem_undo_get_pfn);
> -- 
> 2.39.0
> 
