Message-Id: <20260119033435.3358-1-mhklinux@outlook.com>
Date: Sun, 18 Jan 2026 19:34:35 -0800
From: mhkelley58@...il.com
To: kys@...rosoft.com,
	haiyangz@...rosoft.com,
	wei.liu@...nel.org,
	decui@...rosoft.com,
	longli@...rosoft.com,
	linux-hyperv@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH 1/1] Drivers: hv: Use memremap()/memunmap() instead of ioremap_cache()/iounmap()

From: Michael Kelley <mhklinux@...look.com>

When running with a paravisor or in the root partition, the SynIC event and
message pages are provided by the paravisor or hypervisor respectively,
instead of being allocated by Linux. The provided pages are normal memory,
but are outside of the physical address space seen by Linux. As such they
cannot be accessed via the kernel's direct map, and must be explicitly
mapped to a kernel virtual address.

Current code uses ioremap_cache() and iounmap() to map and unmap the pages.
These functions are for use on I/O address space that may not behave as
normal memory, so they generate or expect addresses with the __iomem
attribute. For normal memory, the preferred functions are memremap() and
memunmap(), which operate similarly but without __iomem.

At the time of the original work on CoCo VMs on Hyper-V, memremap() did not
support creating a decrypted mapping, so ioremap_cache() was used instead,
since I/O address space is always mapped decrypted. memremap() has since
been enhanced to allow decrypted mappings, so replace ioremap_cache() with
memremap() when mapping the event and message pages. Similarly, replace
iounmap() with memunmap(). As a side benefit, the replacement cleans up
'sparse' warnings about __iomem mismatches.

The replacement is done to use the correct functions as a long-term
cleanup and to resolve the sparse warnings. No runtime bugs are fixed.

Reported-by: kernel test robot <lkp@...el.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202601170445.JtZQwndW-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202512150359.fMdmbddk-lkp@intel.com/
Signed-off-by: Michael Kelley <mhklinux@...look.com>
---
I've tested these changes in SEV-SNP and TDX VMs in Azure, and in an
Azure D16lds v6 VM, which has a paravisor but no encryption. Normal VMs
without a paravisor don't go down this code path.

However, I don't have a way to test in the root partition. If someone
could do a quick verification there, that would be helpful.
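
For reference, here is a minimal sketch (not part of the patch) of the
mapping pattern described above: map a hypervisor-provided page of normal
memory cached and decrypted, then tear it down. The helper names
hv_map_hyp_page()/hv_unmap_hyp_page() are hypothetical, chosen only for
illustration.

#include <linux/io.h>	/* memremap(), memunmap(), MEMREMAP_* flags */
#include <linux/mm.h>	/* PAGE_SIZE */

static void *hv_map_hyp_page(phys_addr_t gpa)
{
	/*
	 * memremap() returns a plain void *, with no __iomem attribute,
	 * because the target is normal memory rather than I/O space.
	 * MEMREMAP_WB asks for a cached mapping and MEMREMAP_DEC for a
	 * decrypted one, matching what ioremap_cache() provided implicitly.
	 */
	return memremap(gpa, PAGE_SIZE, MEMREMAP_WB | MEMREMAP_DEC);
}

static void hv_unmap_hyp_page(void *va)
{
	if (va)
		memunmap(va);	/* counterpart of memremap(), not iounmap() */
}

In the patch itself, hv_hyp_synic_enable_regs() does the equivalent
inline after masking off the vTOM bit, and hv_hyp_synic_disable_regs()
performs the memunmap().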

 drivers/hv/hv.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index c100f04b3581..ea6835638505 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -287,11 +287,11 @@ void hv_hyp_synic_enable_regs(unsigned int cpu)
 	simp.simp_enabled = 1;
 
 	if (ms_hyperv.paravisor_present || hv_root_partition()) {
-		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
+		/* Mask out vTOM bit and map as decrypted */
 		u64 base = (simp.base_simp_gpa << HV_HYP_PAGE_SHIFT) &
 				~ms_hyperv.shared_gpa_boundary;
 		hv_cpu->hyp_synic_message_page =
-			(void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
+			memremap(base, HV_HYP_PAGE_SIZE, MEMREMAP_WB | MEMREMAP_DEC);
 		if (!hv_cpu->hyp_synic_message_page)
 			pr_err("Fail to map synic message page.\n");
 	} else {
@@ -306,11 +306,11 @@ void hv_hyp_synic_enable_regs(unsigned int cpu)
 	siefp.siefp_enabled = 1;
 
 	if (ms_hyperv.paravisor_present || hv_root_partition()) {
-		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
+		/* Mask out vTOM bit and map as decrypted */
 		u64 base = (siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT) &
 				~ms_hyperv.shared_gpa_boundary;
 		hv_cpu->hyp_synic_event_page =
-			(void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
+			memremap(base, HV_HYP_PAGE_SIZE, MEMREMAP_WB | MEMREMAP_DEC);
 		if (!hv_cpu->hyp_synic_event_page)
 			pr_err("Fail to map synic event page.\n");
 	} else {
@@ -429,7 +429,7 @@ void hv_hyp_synic_disable_regs(unsigned int cpu)
 	simp.simp_enabled = 0;
 	if (ms_hyperv.paravisor_present || hv_root_partition()) {
 		if (hv_cpu->hyp_synic_message_page) {
-			iounmap(hv_cpu->hyp_synic_message_page);
+			memunmap(hv_cpu->hyp_synic_message_page);
 			hv_cpu->hyp_synic_message_page = NULL;
 		}
 	} else {
@@ -443,7 +443,7 @@ void hv_hyp_synic_disable_regs(unsigned int cpu)
 
 	if (ms_hyperv.paravisor_present || hv_root_partition()) {
 		if (hv_cpu->hyp_synic_event_page) {
-			iounmap(hv_cpu->hyp_synic_event_page);
+			memunmap(hv_cpu->hyp_synic_event_page);
 			hv_cpu->hyp_synic_event_page = NULL;
 		}
 	} else {
-- 
2.25.1

