Message-Id: <20160801161242.13277-1-rui.teng@linux.vnet.ibm.com>
Date:	Tue,  2 Aug 2016 00:12:42 +0800
From:	Rui Teng <rui.teng@...ux.vnet.ibm.com>
To:	linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Cc:	paulus@...ba.org, mpe@...erman.id.au,
	Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
	Rui Teng <rui.teng@...ux.vnet.ibm.com>
Subject: [PATCH V3] powerpc/mm: Add validation for platform reserved memory ranges

From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>

For a partition running on PHYP, there can be an adjunct partition
which shares the virtual address range with the operating system.
Virtual address ranges which can be used by the adjunct partition
are communicated through the virtual device node of the device tree
via a property named "ibm,reserved-virtual-addresses". This patch
introduces a new function named 'validate_reserved_va_range' which
is called during initialization to validate that these reserved
virtual address ranges do not overlap with the address ranges used
by the kernel for all supported memory contexts. This helps prevent
the possibility of getting return codes similar to H_RESOURCE for
H_PROTECT hcalls issued against conflicting HPTE entries.
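
As a rough, stand-alone sketch of the decode and overlap test (not the
kernel code itself; CONTEXT_BITS = 19, ESID_BITS = 18 and SID_SHIFT = 28
are assumed here only to arrive at the 65-bit figure):

  #include <stdio.h>
  #include <stdint.h>

  #define RVA_LESS_BITS   24
  #define LINUX_VA_BITS   (19 + 18 + 28)          /* assumed: 65 bits */
  #define PARTIAL_MASK    ((1ULL << (LINUX_VA_BITS - RVA_LESS_BITS)) - 1)

  int main(void)
  {
          /* hypothetical record: high/low halves and a 4K page count */
          uint32_t high = 0x00000000, low = 0x00000100, nr_pages_4k = 16;
          uint64_t abbrev = ((uint64_t)high << 32) | low;

          /* the full VA is the abbreviated VA with 24 zero bits appended */
          printf("RVA 0x%llx000000, size 0x%x bytes\n",
                 (unsigned long long)abbrev, nr_pages_4k * 4096);

          /* no bits above bit 40 => the range falls inside Linux's VA space */
          if (!(abbrev & ~PARTIAL_MASK))
                  printf("conflicts with the kernel virtual address range\n");
          return 0;
  }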

Signed-off-by: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Signed-off-by: Rui Teng <rui.teng@...ux.vnet.ibm.com>
---
- Tested on both POWER8 LE and BE platforms

Changes in V3:
- Use u32 and u64 to store the virtual address and use CPU endian mask.

Changes in V2:
- Added braces to the definition of LINUX_VA_BITS
- Adjusted tabs as spaces for the definition of PARTIAL_LINUX_VA_MASK
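
Not part of the patch: a stand-alone sketch of the 12-byte record layout
and the endian handling the V3 note refers to (all values made up; in the
kernel, of_property_read_u32_index() performs the be32-to-CPU conversion):

  #include <stdio.h>
  #include <stdint.h>

  /* Hypothetical record as it appears in the device tree blob:
   * three be32 cells = { VA high 32 bits, VA low 32 bits, nr of 4K pages } */
  static const unsigned char raw_record[12] = {
          0x00, 0x00, 0x00, 0x00,        /* abbreviated VA, high 32 bits */
          0x00, 0x00, 0x01, 0x00,        /* abbreviated VA, low 32 bits  */
          0x00, 0x00, 0x00, 0x10,        /* 0x10 consecutive 4K pages    */
  };

  /* mimic the be32 -> CPU conversion of_property_read_u32_index() does */
  static uint32_t be32_at(const unsigned char *p)
  {
          return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                 ((uint32_t)p[2] << 8)  | (uint32_t)p[3];
  }

  int main(void)
  {
          uint64_t va = ((uint64_t)be32_at(raw_record) << 32) |
                        be32_at(raw_record + 4);

          printf("VA 0x%llx000000, 0x%x pages of 4K\n",
                 (unsigned long long)va, be32_at(raw_record + 8));
          return 0;
  }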

---
 arch/powerpc/mm/hash_utils_64.c | 70 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 2971ea1..6918198 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1723,3 +1723,73 @@ void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	/* Finally limit subsequent allocations */
 	memblock_set_current_limit(ppc64_rma_size);
 }
+
+/*
+ * PAPR says that each reserved virtual address range record
+ * contains three be32 elements, for a total of 12 bytes.
+ * The first two be32 elements hold the high order 32 bits
+ * and the low order 32 bits of the abbreviated virtual
+ * address; concatenating them and appending 24 zero bits
+ * yields the full virtual address. The third be32 element
+ * gives the size of the reserved virtual address range as
+ * a number of consecutive 4K pages.
+ */
+struct reserved_va_record {
+	u32	high_addr;
+	u32	low_addr;
+	u32	nr_pages_4K;
+};
+
+/*
+ * Linux uses 65 bits (CONTEXT_BITS + ESID_BITS + SID_SHIFT)
+ * of virtual address. As the reserved virtual address comes
+ * in as an abbreviated form (64 bits) from the device tree,
+ * we use a partial address bit mask covering the remaining
+ * 65 - 24 = 41 bits to match it for simplicity.
+ */
+#define RVA_LESS_BITS		24
+#define LINUX_VA_BITS		(CONTEXT_BITS + ESID_BITS + SID_SHIFT)
+#define PARTIAL_LINUX_VA_MASK	((1ULL << (LINUX_VA_BITS - RVA_LESS_BITS)) - 1)
+
+static int __init validate_reserved_va_range(void)
+{
+	struct reserved_va_record rva;
+	struct device_node *np;
+	int records, i;
+	u64 vaddr;
+
+	np = of_find_node_by_name(NULL, "vdevice");
+	if (!np)
+		return -ENODEV;
+
+	records = of_property_count_elems_of_size(np,
+			"ibm,reserved-virtual-addresses",
+				sizeof(struct reserved_va_record));
+	if (records < 0) {
+		of_node_put(np);
+		return records;
+	}
+
+	for (i = 0; i < records; i++) {
+		of_property_read_u32_index(np,
+			"ibm,reserved-virtual-addresses",
+				3 * i, &rva.high_addr);
+		of_property_read_u32_index(np,
+			"ibm,reserved-virtual-addresses",
+				3 * i + 1, &rva.low_addr);
+		of_property_read_u32_index(np,
+			"ibm,reserved-virtual-addresses",
+				3 * i + 2, &rva.nr_pages_4K);
+
+		vaddr = rva.high_addr;
+		vaddr = (vaddr << 32) | rva.low_addr;
+		if (unlikely(!(vaddr & ~PARTIAL_LINUX_VA_MASK))) {
+			pr_err("RVA [0x%llx000000 (0x%x bytes)] overlaps the kernel VA range\n",
+					vaddr, rva.nr_pages_4K * 4096);
+			BUG();
+		}
+	}
+	of_node_put(np);
+	return 0;
+}
+device_initcall(validate_reserved_va_range);
-- 
2.7.4
