Message-Id: <1459951089-14911-1-git-send-email-toshi.kani@hpe.com>
Date:	Wed,  6 Apr 2016 07:58:09 -0600
From:	Toshi Kani <toshi.kani@....com>
To:	mingo@...nel.org, bp@...e.de, hpa@...or.com, tglx@...utronix.de
Cc:	dan.j.williams@...el.com, willy@...ux.intel.com,
	kirill.shutemov@...ux.intel.com, linux-mm@...ck.org,
	x86@...nel.org, linux-nvdimm@...ts.01.org,
	linux-kernel@...r.kernel.org, Toshi Kani <toshi.kani@....com>
Subject: [PATCH] x86 get_unmapped_area: Add PMD alignment for DAX PMD mmap

When CONFIG_FS_DAX_PMD is set, DAX supports mmap() using the PMD page
size.  This feature requires both the mmap virtual address and the FS
block data (i.e. the physical address) to be aligned to the PMD page
size.  Users can use mkfs options to have the FS align its block
allocations.  Aligning the mmap() address, however, requires changes
to the application's mmap() calls, such as:

 -  /* let the kernel assign a mmap addr */
 -  mptr = mmap(NULL, fsize, PROT_READ|PROT_WRITE, FLAGS, fd, 0);

 +  /* 1. obtain a PMD-aligned virtual address */
 +  ret = posix_memalign(&mptr, PMD_SIZE, fsize);
 +  if (!ret)
 +    free(mptr);  /* 2. release the virt addr */
 +
 +  /* 3. then pass the PMD-aligned virt addr to mmap() */
 +  mptr = mmap(mptr, fsize, PROT_READ|PROT_WRITE, FLAGS, fd, 0);
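
For reference, below is a self-contained sketch of that workaround.  The
file path (/mnt/dax/datafile), the mapping size, and MAP_SHARED (standing
in for the FLAGS placeholder above) are illustrative assumptions only,
not part of this patch:

  #include <stdio.h>
  #include <stdlib.h>
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define PMD_SIZE (2UL * 1024 * 1024)	/* x86-64 PMD page size */

  int main(void)
  {
  	size_t fsize = 16 * PMD_SIZE;	/* illustrative mapping size */
  	void *mptr = NULL;
  	/* hypothetical DAX-mounted file */
  	int fd = open("/mnt/dax/datafile", O_RDWR);

  	if (fd < 0) {
  		perror("open");
  		return 1;
  	}

  	/* 1. obtain a PMD-aligned virtual address */
  	if (posix_memalign(&mptr, PMD_SIZE, fsize))
  		return 1;
  	/* 2. release the virtual address, keeping it as a hint */
  	free(mptr);

  	/* 3. pass the PMD-aligned virtual address to mmap() */
  	mptr = mmap(mptr, fsize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  	if (mptr == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}
  	printf("mapped at %p\n", mptr);

  	munmap(mptr, fsize);
  	close(fd);
  	return 0;
  }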

These changes add an unnecessary dependency on DAX and the PMD page
size to application code.  The kernel should assign an mmap address
appropriate for the operation.

Change arch_get_unmapped_area() and arch_get_unmapped_area_topdown()
to request PMD_SIZE alignment when the request is for a DAX file and
its mapping range is large enough to use a PMD page.

Signed-off-by: Toshi Kani <toshi.kani@....com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Borislav Petkov <bp@...e.de>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Matthew Wilcox <willy@...ux.intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
---
 arch/x86/kernel/sys_x86_64.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 10e0272..a294c66 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -157,6 +157,13 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		info.align_mask = get_align_mask();
 		info.align_offset += get_align_bits();
 	}
+	if (filp && IS_ENABLED(CONFIG_FS_DAX_PMD) && IS_DAX(file_inode(filp))) {
+		unsigned long off_end = info.align_offset + len;
+		unsigned long off_pmd = round_up(info.align_offset, PMD_SIZE);
+
+		if ((off_end > off_pmd) && ((off_end - off_pmd) >= PMD_SIZE))
+			info.align_mask |= (PMD_SIZE - 1);
+	}
 	return vm_unmapped_area(&info);
 }
 
@@ -200,6 +207,13 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		info.align_mask = get_align_mask();
 		info.align_offset += get_align_bits();
 	}
+	if (filp && IS_ENABLED(CONFIG_FS_DAX_PMD) && IS_DAX(file_inode(filp))) {
+		unsigned long off_end = info.align_offset + len;
+		unsigned long off_pmd = round_up(info.align_offset, PMD_SIZE);
+
+		if ((off_end > off_pmd) && ((off_end - off_pmd) >= PMD_SIZE))
+			info.align_mask |= (PMD_SIZE - 1);
+	}
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		return addr;
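
For illustration only (not part of this patch), a minimal user-space
check of the new behavior might look like the sketch below: it lets the
kernel pick the mmap address and reports whether the result is PMD
aligned.  The DAX file path, the mapping size, and MAP_SHARED are
hypothetical assumptions:

  #include <stdio.h>
  #include <stdint.h>
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define PMD_SIZE (2UL * 1024 * 1024)	/* x86-64 PMD page size */

  int main(void)
  {
  	size_t fsize = 4 * PMD_SIZE;	/* large enough to use a PMD page */
  	/* hypothetical DAX-mounted file */
  	int fd = open("/mnt/dax/datafile", O_RDWR);

  	if (fd < 0) {
  		perror("open");
  		return 1;
  	}

  	/* Let the kernel assign the address: no alignment work in the app. */
  	void *addr = mmap(NULL, fsize, PROT_READ | PROT_WRITE,
  			  MAP_SHARED, fd, 0);
  	if (addr == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}

  	printf("mmap returned %p, PMD aligned: %s\n", addr,
  	       ((uintptr_t)addr & (PMD_SIZE - 1)) ? "no" : "yes");

  	munmap(addr, fsize);
  	close(fd);
  	return 0;
  }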
