Message-ID: <MWHPR21MB15933BC87034940AB7170552D7AC9@MWHPR21MB1593.namprd21.prod.outlook.com>
Date:   Sat, 2 Oct 2021 13:26:26 +0000
From:   Michael Kelley <mikelley@...rosoft.com>
To:     Tianyu Lan <ltykernel@...il.com>,
        KY Srinivasan <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "wei.liu@...nel.org" <wei.liu@...nel.org>,
        Dexuan Cui <decui@...rosoft.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
        "hpa@...or.com" <hpa@...or.com>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "luto@...nel.org" <luto@...nel.org>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "arnd@...db.de" <arnd@...db.de>,
        "brijesh.singh@....com" <brijesh.singh@....com>,
        "jroedel@...e.de" <jroedel@...e.de>,
        Tianyu Lan <Tianyu.Lan@...rosoft.com>,
        "thomas.lendacky@....com" <thomas.lendacky@....com>,
        "pgonda@...gle.com" <pgonda@...gle.com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "rppt@...nel.org" <rppt@...nel.org>,
        "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
        "saravanand@...com" <saravanand@...com>,
        "aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
        "rientjes@...gle.com" <rientjes@...gle.com>,
        "tj@...nel.org" <tj@...nel.org>
CC:     "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        vkuznets <vkuznets@...hat.com>,
        "konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
        "hch@....de" <hch@....de>,
        "robin.murphy@....com" <robin.murphy@....com>,
        "joro@...tes.org" <joro@...tes.org>,
        "parri.andrea@...il.com" <parri.andrea@...il.com>,
        "dave.hansen@...el.com" <dave.hansen@...el.com>
Subject: RE: [PATCH V6 7/8] Drivers: hv: vmbus: Add SNP support for VMbus channel initiate message

From: Tianyu Lan <ltykernel@...il.com> Sent: Thursday, September 30, 2021 6:06 AM
> 
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with the host in an Isolation VM, so it's necessary to use a hypercall
> to make them visible to the host. In an Isolation VM with AMD SEV-SNP,
> the access address should be in the extra address space above the
> shared GPA boundary, so remap these pages to the extra address
> (pa + shared_gpa_boundary).
> 
> Introduce monitor_pages_original[] in struct vmbus_connection to store
> the monitor page virtual addresses returned by
> hv_alloc_hyperv_zeroed_page(), and free the monitor pages via
> monitor_pages_original in vmbus_disconnect(). monitor_pages[] is used
> to access the monitor pages and is initialized to be equal to
> monitor_pages_original. In an Isolation VM, monitor_pages[] is
> overridden with the VAs in the extra address space. Introduce
> monitor_pages_pa[] to store the monitor pages' physical addresses and
> use them to populate the PAs in the initiate msg.
> 
> Signed-off-by: Tianyu Lan <Tianyu.Lan@...rosoft.com>
> ---
> Change since v5:
> 	*  Change vmbus_connection.monitor_pages_pa type from
> 	   unsigned long to phys_addr_t.
> 	*  Add ms_hyperv.shared_gpa_boundary to
> 	   vmbus_connection.monitor_pages_pa only in Isolation VMs
> 	   with AMD SEV.
> 
> Change since v4:
> 	* Introduce monitor_pages_pa[] to store the monitor pages' physical
> 	  addresses and use them to populate the PAs in the initiate msg.
> 	* Move the code that maps the monitor pages into the extra address
> 	  space into vmbus_connect().
> 
> Change since v3:
> 	* Rename monitor_pages_va to monitor_pages_original.
> 	* Free the monitor pages via monitor_pages_original;
> 	  monitor_pages is used to access the monitor pages.
> 
> Change since v1:
>         * Do not remap the monitor pages in non-SNP Isolation VMs.
> ---
>  drivers/hv/connection.c   | 90 ++++++++++++++++++++++++++++++++++++---
>  drivers/hv/hyperv_vmbus.h |  2 +
>  2 files changed, 86 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
> index 8820ae68f20f..7fac8d99541c 100644
> --- a/drivers/hv/connection.c
> +++ b/drivers/hv/connection.c
> @@ -19,6 +19,8 @@
>  #include <linux/vmalloc.h>
>  #include <linux/hyperv.h>
>  #include <linux/export.h>
> +#include <linux/io.h>
> +#include <linux/set_memory.h>
>  #include <asm/mshyperv.h>
> 
>  #include "hyperv_vmbus.h"
> @@ -102,8 +104,9 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version)
>  		vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID;
>  	}
> 
> -	msg->monitor_page1 = virt_to_phys(vmbus_connection.monitor_pages[0]);
> -	msg->monitor_page2 = virt_to_phys(vmbus_connection.monitor_pages[1]);
> +	msg->monitor_page1 = vmbus_connection.monitor_pages_pa[0];
> +	msg->monitor_page2 = vmbus_connection.monitor_pages_pa[1];
> +
>  	msg->target_vcpu = hv_cpu_number_to_vp_number(VMBUS_CONNECT_CPU);
> 
>  	/*
> @@ -216,6 +219,65 @@ int vmbus_connect(void)
>  		goto cleanup;
>  	}
> 
> +	vmbus_connection.monitor_pages_original[0]
> +		= vmbus_connection.monitor_pages[0];
> +	vmbus_connection.monitor_pages_original[1]
> +		= vmbus_connection.monitor_pages[1];
> +	vmbus_connection.monitor_pages_pa[0]
> +		= virt_to_phys(vmbus_connection.monitor_pages[0]);
> +	vmbus_connection.monitor_pages_pa[1]
> +		= virt_to_phys(vmbus_connection.monitor_pages[1]);
> +
> +	if (hv_is_isolation_supported()) {
> +		ret = set_memory_decrypted((unsigned long)
> +					   vmbus_connection.monitor_pages[0],
> +					   1);
> +		ret |= set_memory_decrypted((unsigned long)
> +					    vmbus_connection.monitor_pages[1],
> +					    1);
> +		if (ret)
> +			goto cleanup;
> +
> +		/*
> +		 * Isolation VM with AMD SNP needs to access monitor page via
> +		 * address space above shared gpa boundary.
> +		 */
> +		if (hv_isolation_type_snp()) {
> +			vmbus_connection.monitor_pages_pa[0] +=
> +				ms_hyperv.shared_gpa_boundary;
> +			vmbus_connection.monitor_pages_pa[1] +=
> +				ms_hyperv.shared_gpa_boundary;
> +
> +			vmbus_connection.monitor_pages[0]
> +				= memremap(vmbus_connection.monitor_pages_pa[0],
> +					   HV_HYP_PAGE_SIZE,
> +					   MEMREMAP_WB);
> +			if (!vmbus_connection.monitor_pages[0]) {
> +				ret = -ENOMEM;
> +				goto cleanup;
> +			}
> +
> +			vmbus_connection.monitor_pages[1]
> +				= memremap(vmbus_connection.monitor_pages_pa[1],
> +					   HV_HYP_PAGE_SIZE,
> +					   MEMREMAP_WB);
> +			if (!vmbus_connection.monitor_pages[1]) {
> +				ret = -ENOMEM;
> +				goto cleanup;
> +			}
> +		}
> +
> +		/*
> +		 * Set memory host visibility hvcall smears memory
> +		 * and so zero monitor pages here.
> +		 */
> +		memset(vmbus_connection.monitor_pages[0], 0x00,
> +		       HV_HYP_PAGE_SIZE);
> +		memset(vmbus_connection.monitor_pages[1], 0x00,
> +		       HV_HYP_PAGE_SIZE);
> +
> +	}
> +
>  	msginfo = kzalloc(sizeof(*msginfo) +
>  			  sizeof(struct vmbus_channel_initiate_contact),
>  			  GFP_KERNEL);
> @@ -303,10 +365,26 @@ void vmbus_disconnect(void)
>  		vmbus_connection.int_page = NULL;
>  	}
> 
> -	hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[0]);
> -	hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[1]);
> -	vmbus_connection.monitor_pages[0] = NULL;
> -	vmbus_connection.monitor_pages[1] = NULL;
> +	if (hv_is_isolation_supported()) {
> +		memunmap(vmbus_connection.monitor_pages[0]);
> +		memunmap(vmbus_connection.monitor_pages[1]);

The matching memremap() calls are made in vmbus_connect() only in the
SNP case.  In the non-SNP case, monitor_pages and monitor_pages_original
are the same, so you would be doing an unmap and then calling
set_memory_encrypted() and hv_free_hyperv_page() on an address that is
no longer mapped, which seems wrong.  Looking at memunmap(), it might be
a no-op in this case, but even if it is, making these calls conditional
on the SNP case would be safer and would make the code more symmetrical.
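
To illustrate, a rough sketch of the shape I have in mind (untested,
just using the names already introduced by this patch):

	if (hv_is_isolation_supported()) {
		/* memremap() was done only in the SNP case, so only unmap there */
		if (hv_isolation_type_snp()) {
			memunmap(vmbus_connection.monitor_pages[0]);
			memunmap(vmbus_connection.monitor_pages[1]);
		}

		set_memory_encrypted((unsigned long)
			vmbus_connection.monitor_pages_original[0], 1);
		set_memory_encrypted((unsigned long)
			vmbus_connection.monitor_pages_original[1], 1);
	}

That way the unmap in vmbus_disconnect() mirrors the memremap() in
vmbus_connect(), instead of relying on memunmap() quietly ignoring an
address it never mapped.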

> +
> +		set_memory_encrypted((unsigned long)
> +			vmbus_connection.monitor_pages_original[0],
> +			1);
> +		set_memory_encrypted((unsigned long)
> +			vmbus_connection.monitor_pages_original[1],
> +			1);
> +	}
> +
> +	hv_free_hyperv_page((unsigned long)
> +		vmbus_connection.monitor_pages_original[0]);
> +	hv_free_hyperv_page((unsigned long)
> +		vmbus_connection.monitor_pages_original[1]);
> +	vmbus_connection.monitor_pages_original[0] =
> +		vmbus_connection.monitor_pages[0] = NULL;
> +	vmbus_connection.monitor_pages_original[1] =
> +		vmbus_connection.monitor_pages[1] = NULL;
>  }
> 
>  /*
> diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
> index 42f3d9d123a1..d0a5232a1c3e 100644
> --- a/drivers/hv/hyperv_vmbus.h
> +++ b/drivers/hv/hyperv_vmbus.h
> @@ -240,6 +240,8 @@ struct vmbus_connection {
>  	 * is child->parent notification
>  	 */
>  	struct hv_monitor_page *monitor_pages[2];
> +	void *monitor_pages_original[2];
> +	phys_addr_t monitor_pages_pa[2];
>  	struct list_head chn_msg_list;
>  	spinlock_t channelmsg_lock;
> 
> --
> 2.25.1
