Date:   Wed, 13 Sep 2023 09:28:55 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     isaku.yamahata@...el.com
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        isaku.yamahata@...il.com, Michael Roth <michael.roth@....com>,
        Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
        Sagi Shahar <sagis@...gle.com>,
        David Matlack <dmatlack@...gle.com>,
        Kai Huang <kai.huang@...el.com>,
        Zhi Wang <zhi.wang.linux@...il.com>, chen.bo@...el.com,
        linux-coco@...ts.linux.dev,
        Chao Peng <chao.p.peng@...ux.intel.com>,
        Ackerley Tng <ackerleytng@...gle.com>,
        Vishal Annapurve <vannapurve@...gle.com>,
        Yuan Yao <yuan.yao@...ux.intel.com>,
        Jarkko Sakkinen <jarkko@...nel.org>,
        Xu Yilun <yilun.xu@...el.com>,
        Quentin Perret <qperret@...gle.com>, wei.w.wang@...el.com,
        Fuad Tabba <tabba@...gle.com>
Subject: Re: [RFC PATCH 2/6] KVM: guestmem_fd: Make error_remove_page callback
 to unmap guest memory

On Wed, Sep 13, 2023, isaku.yamahata@...el.com wrote:
> @@ -316,26 +316,43 @@ static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
>  	end = start + thp_nr_pages(page);
>  
>  	list_for_each_entry(gmem, gmem_list, entry) {
> +		struct kvm *kvm = gmem->kvm;
> +
> +		KVM_MMU_LOCK(kvm);
> +		kvm_mmu_invalidate_begin(kvm);
> +		KVM_MMU_UNLOCK(kvm);
> +
> +		flush = false;
>  		xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
> -			for (gfn = start; gfn < end; gfn++) {
> -				if (WARN_ON_ONCE(gfn < slot->base_gfn ||
> -						gfn >= slot->base_gfn + slot->npages))
> -					continue;
> -
> -				/*
> -				 * FIXME: Tell userspace that the *private*
> -				 * memory encountered an error.
> -				 */
> -				send_sig_mceerr(BUS_MCEERR_AR,
> -						(void __user *)gfn_to_hva_memslot(slot, gfn),
> -						PAGE_SHIFT, current);
> -			}
> +			pgoff_t pgoff;
> +
> +			if (WARN_ON_ONCE(end < slot->base_gfn ||
> +					 start >= slot->base_gfn + slot->npages))
> +				continue;
> +
> +			pgoff = slot->gmem.pgoff;
> +			struct kvm_gfn_range gfn_range = {
> +				.slot = slot,
> +				.start = slot->base_gfn + max(pgoff, start) - pgoff,
> +				.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
> +				.arg.page = page,
> +				.may_block = true,
> +				.memory_error = true,

Why pass arg.page and memory_error?  There's no usage in this mini-series, and no
explanation of what arch code would do with the information.  And I can't think of why
arch would need to do anything but zap the SPTEs.  If the memory error is directly
related to the current instruction, the vCPU will fault on the zapped SPTE, see
-EHWPOISON, and exit to userspace.  If the memory is unrelated, then the delayed
notification is less than ideal, but not fundamentally broken, e.g. it's no worse
than TDX's behavior of not signaling #MC until a poisoned cache line is actually
accessed.
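
Roughly, the refault flow I'm thinking of (sketch only; the exact plumbing,
e.g. how the fault path reaches the gmem file, depends on the rest of the
guest_memfd series):

	/* In the arch page fault handler, when faulting in private memory. */
	r = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &max_order);
	if (r == -EHWPOISON) {
		/*
		 * The poisoned page was zapped out of the SPTEs, so the
		 * refault lands here.  Propagate the error so KVM exits
		 * to userspace, which decides whether to terminate the
		 * VM or attempt recovery.
		 */
		return r;
	}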

I don't get arg.page in particular, because having the gfn should be enough for
arch code to take action beyond zapping SPTEs.

And _if_ we want to communicate the error to arch code, it would be much better
to add a dedicated arch hook instead of piggybacking kvm_mmu_unmap_gfn_range()
with a "memory_error" flag. 
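
E.g. the hook could look something like this (name and signature invented
here purely to illustrate the shape, not taken from any posted series):

	/* Dedicated, opt-in arch hook; no flag on the common zap path. */
	void kvm_arch_gmem_error(struct kvm *kvm, struct kvm_memory_slot *slot,
				 gfn_t start, gfn_t end);

That keeps kvm_mmu_unmap_gfn_range() purely about unmapping, and arches that
don't care about the error never see a "memory_error" flag.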

If we just zap SPTEs, then can't this simply be?

  static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
  {
	struct list_head *gmem_list = &mapping->private_list;
	struct kvm_gmem *gmem;
	pgoff_t start, end;

	filemap_invalidate_lock_shared(mapping);

	start = page->index;
	end = start + thp_nr_pages(page);

	list_for_each_entry(gmem, gmem_list, entry)
		kvm_gmem_invalidate_begin(gmem, start, end);

	/*
	 * Do not truncate the range, what action is taken in response to the
	 * error is userspace's decision (assuming the architecture supports
	 * gracefully handling memory errors).  If/when the guest attempts to
	 * access a poisoned page, kvm_gmem_get_pfn() will return -EHWPOISON,
	 * at which point KVM can either terminate the VM or propagate the
	 * error to userspace.
	 */

	list_for_each_entry(gmem, gmem_list, entry)
		kvm_gmem_invalidate_end(gmem, start, end);

	filemap_invalidate_unlock_shared(mapping);

	return MF_DELAYED;
  }
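
For completeness, the -EHWPOISON side of the above comment would be a check
in kvm_gmem_get_pfn() along these lines (sketch; kvm_gmem_get_folio() and
the exact folio cleanup are assumptions about the rest of the series):

	/* The folio is returned locked; drop it if it has been poisoned. */
	folio = kvm_gmem_get_folio(file_inode(file), index);
	if (!folio)
		return -ENOMEM;

	if (folio_test_hwpoison(folio)) {
		folio_unlock(folio);
		folio_put(folio);
		return -EHWPOISON;
	}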
