Message-ID: <5d309f95-84d0-462e-a463-16d303629907@linux.microsoft.com>
Date: Wed, 3 Dec 2025 12:36:07 -0800
From: Nuno Das Neves <nunodasneves@...ux.microsoft.com>
To: Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>,
 kys@...rosoft.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
 decui@...rosoft.com
Cc: linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 6/6] Drivers: hv: Add support for movable memory
 regions

On 12/3/2025 10:24 AM, Stanislav Kinsburskii wrote:
> Introduce support for movable memory regions in the Hyper-V root partition
> driver to improve memory management flexibility and enable advanced use
> cases such as dynamic memory remapping.
> 
> Mirror the address space between the Linux root partition and guest VMs
> using HMM. The root partition owns the memory, while guest VMs act as
> devices with page tables managed via hypercalls. MSHV handles VP intercepts
> by invoking hmm_range_fault() and updating SLAT entries. When memory is
> reclaimed, HMM invalidates the relevant regions, prompting MSHV to clear
> SLAT entries; guest VMs will fault again on access.
> 
> Integrate mmu_interval_notifier for movable regions, implement handlers for
> HMM faults and memory invalidation, and update memory region mapping logic
> to support movable regions.
> 
> While MMU notifiers are commonly used in virtualization drivers, this
> implementation uses HMM (Heterogeneous Memory Management) because it
> provides a ready-made framework for mirroring, invalidation, and fault
> handling, reducing boilerplate and improving maintainability compared to
> open-coding generic MMU notifiers.
> 
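(Side note for readers less familiar with HMM mirroring: the reclaim path
described above follows the usual mmu_interval_notifier pattern, roughly
like the sketch below. This is a generic illustration rather than the exact
handler the patch adds; the names mshv_region_invalidate_cb and
mshv_region_mni_ops are made up here, and only the mni/mutex members of
struct mshv_mem_region are taken from the patch.)

	/* Needs <linux/mmu_notifier.h>; the region type comes from mshv_root.h. */
	static bool mshv_region_invalidate_cb(struct mmu_interval_notifier *mni,
					      const struct mmu_notifier_range *range,
					      unsigned long cur_seq)
	{
		struct mshv_mem_region *region =
			container_of(mni, struct mshv_mem_region, mni);

		/* Non-blockable invalidations must not sleep on the mutex. */
		if (mmu_notifier_range_blockable(range))
			mutex_lock(&region->mutex);
		else if (!mutex_trylock(&region->mutex))
			return false;

		/* Bump the interval sequence so concurrent faults retry via -EBUSY. */
		mmu_interval_set_seq(mni, cur_seq);

		/*
		 * Here the driver would clear the affected SLAT entries via
		 * hypercall and drop its cached page pointers, so the next
		 * guest access faults back through hmm_range_fault().
		 */

		mutex_unlock(&region->mutex);
		return true;
	}

	static const struct mmu_interval_notifier_ops mshv_region_mni_ops = {
		.invalidate = mshv_region_invalidate_cb,
	};
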
> Signed-off-by: Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>
> ---
>  drivers/hv/Kconfig          |    2 
>  drivers/hv/mshv_regions.c   |  215 ++++++++++++++++++++++++++++++++++++++++++-
>  drivers/hv/mshv_root.h      |   17 +++
>  drivers/hv/mshv_root_main.c |  139 +++++++++++++++++++++++-----
>  4 files changed, 343 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
> index d4a8d349200c..7937ac0cbd0f 100644
> --- a/drivers/hv/Kconfig
> +++ b/drivers/hv/Kconfig
> @@ -76,6 +76,8 @@ config MSHV_ROOT
>  	depends on PAGE_SIZE_4KB
>  	select EVENTFD
>  	select VIRT_XFER_TO_GUEST_WORK
> +	select HMM_MIRROR
> +	select MMU_NOTIFIER
>  	default n
>  	help
>  	  Select this option to enable support for booting and running as root
> diff --git a/drivers/hv/mshv_regions.c b/drivers/hv/mshv_regions.c
> index 94f33754f545..afe03258caf0 100644
> --- a/drivers/hv/mshv_regions.c
> +++ b/drivers/hv/mshv_regions.c
> @@ -7,6 +7,8 @@
>   * Authors: Microsoft Linux virtualization team
>   */
>  
> +#include <linux/hmm.h>
> +#include <linux/hyperv.h>
>  #include <linux/kref.h>
>  #include <linux/mm.h>
>  #include <linux/vmalloc.h>
> @@ -15,6 +17,8 @@
>  
>  #include "mshv_root.h"
>  
> +#define MSHV_MAP_FAULT_IN_PAGES				PTRS_PER_PMD
> +
>  /**
>   * mshv_region_process_chunk - Processes a contiguous chunk of memory pages
>   *                             in a region.
> @@ -152,9 +156,6 @@ struct mshv_mem_region *mshv_region_create(u64 guest_pfn, u64 nr_pages,
>  	if (flags & BIT(MSHV_SET_MEM_BIT_EXECUTABLE))
>  		region->hv_map_flags |= HV_MAP_GPA_EXECUTABLE;
>  
> -	if (!is_mmio)
> -		region->flags.range_pinned = true;
> -

The is_mmio parameter is now unused in this function; it should either be
dropped from the prototype or put back to use (e.g. to select the region type).

>  	kref_init(&region->refcount);
>  
>  	return region;
> @@ -239,7 +240,7 @@ int mshv_region_map(struct mshv_mem_region *region)
>  static void mshv_region_invalidate_pages(struct mshv_mem_region *region,
>  					 u64 page_offset, u64 page_count)
>  {
> -	if (region->flags.range_pinned)
> +	if (region->type == MSHV_REGION_TYPE_MEM_PINNED)
>  		unpin_user_pages(region->pages + page_offset, page_count);
>  
>  	memset(region->pages + page_offset, 0,
> @@ -313,6 +314,9 @@ static void mshv_region_destroy(struct kref *ref)
>  	struct mshv_partition *partition = region->partition;
>  	int ret;
>  
> +	if (region->type == MSHV_REGION_TYPE_MEM_MOVABLE)
> +		mshv_region_movable_fini(region);
> +
>  	if (mshv_partition_encrypted(partition)) {
>  		ret = mshv_region_share(region);
>  		if (ret) {
> @@ -339,3 +343,206 @@ int mshv_region_get(struct mshv_mem_region *region)
>  {
>  	return kref_get_unless_zero(&region->refcount);
>  }
> +
> +/**
> + * mshv_region_hmm_fault_and_lock - Handle HMM faults and lock the memory region
> + * @region: Pointer to the memory region structure
> + * @range: Pointer to the HMM range structure
> + *
> + * This function performs the following steps:
> + * 1. Reads the notifier sequence for the HMM range.
> + * 2. Acquires a read lock on the memory map.
> + * 3. Handles HMM faults for the specified range.
> + * 4. Releases the read lock on the memory map.
> + * 5. If successful, locks the memory region mutex.
> + * 6. Verifies that the notifier sequence has not changed during the
> + *    operation. If it has, releases the mutex and returns -EBUSY,
> + *    matching the hmm_range_fault() return convention so the caller
> + *    retries.
> + *
> + * Return: 0 on success, a negative error code otherwise.
> + */
> +static int mshv_region_hmm_fault_and_lock(struct mshv_mem_region *region,
> +					  struct hmm_range *range)
> +{
> +	int ret;
> +
> +	range->notifier_seq = mmu_interval_read_begin(range->notifier);
> +	mmap_read_lock(region->mni.mm);
> +	ret = hmm_range_fault(range);
> +	mmap_read_unlock(region->mni.mm);
> +	if (ret)
> +		return ret;
> +
> +	mutex_lock(&region->mutex);
> +
> +	if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
> +		mutex_unlock(&region->mutex);
> +		cond_resched();
> +		return -EBUSY;
> +	}
> +
> +	return 0;
> +}
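
(Just to spell out the -EBUSY convention for anyone reading along: the
caller is expected to retry the whole fault-and-lock step, roughly as in
the loop below. The real loop is presumably in the snipped body of
mshv_region_range_fault(), so treat this as an illustration of the
convention, not the patch's exact code.)

	do {
		ret = mshv_region_hmm_fault_and_lock(region, &range);
	} while (ret == -EBUSY);
	if (ret)
		return ret;
	/* Success: region->mutex is held and the notifier sequence is valid. */
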
> +
> +/**
> + * mshv_region_range_fault - Handle memory range faults for a given region.
> + * @region: Pointer to the memory region structure.
> + * @page_offset: Offset of the page within the region.
> + * @page_count: Number of pages to handle.
> + *
> + * This function resolves memory faults for a specified range of pages
> + * within a memory region. It uses HMM (Heterogeneous Memory Management)
> + * to fault in the required pages and updates the region's page array.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +static int mshv_region_range_fault(struct mshv_mem_region *region,
> +				   u64 page_offset, u64 page_count)
> +{
> +	struct hmm_range range = {
> +		.notifier = &region->mni,
> +		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
> +	};
> +	unsigned long *pfns;
> +	int ret;
> +	u64 i;
> +
> +	pfns = kmalloc_array(page_count, sizeof(unsigned long), GFP_KERNEL);

nit: Prefer sizeof(*pfns) here, i.e.:
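
	pfns = kmalloc_array(page_count, sizeof(*pfns), GFP_KERNEL);

so the element size automatically follows the declared type of pfns.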

<snip>

The rest looks fine to me. With the minor issues above fixed,
Reviewed-by: Nuno Das Neves <nunodasneves@...ux.microsoft.com>
