Message-Id: <20190108110944.23591-1-rpenyaev@suse.de>
Date: Tue, 8 Jan 2019 12:09:44 +0100
From: Roman Penyaev <rpenyaev@...e.de>
To: unlisted-recipients:; (no To-header on input)
Cc: Roman Penyaev <rpenyaev@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Michal Hocko <mhocko@...e.com>,
"David S . Miller" <davem@...emloft.net>,
Peter Zijlstra <peterz@...radead.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 1/1] mm/vmalloc: Make vmalloc_32_user() align base kernel virtual address to SHMLBA
This patch repeats the original one from David S. Miller:

2dca6999eed5 ("mm, perf_event: Make vmalloc_user() align base kernel virtual address to SHMLBA")

but for the vmalloc_32_user() case, which was missed and which also
requires correct alignment of the kernel-side virtual address to avoid
D-cache aliases.  A bit of copy-paste from the original patch to recall
what this is all about:
When a vmalloc'd area is mmap'd into userspace, some kind of
co-ordination is necessary for this to work on platforms with CPU
D-caches which can have aliases.  Otherwise kernel-side writes won't
be seen properly in userspace and vice versa.

If the kernel-side mapping and the user-side one have the same
alignment, modulo SHMLBA, this can work as long as the VMA is
VM_SHARED, and for all current users this is true.  VM_SHARED will
force SHMLBA alignment of the user-side mmap on platforms where
D-cache aliasing matters.

    David S. Miller
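
For illustration only (not part of this patch), a minimal sketch of the
usual pattern that depends on this alignment: a driver allocates a
buffer with vmalloc_32_user() and exposes it to userspace via
remap_vmalloc_range() from its ->mmap handler.  The names demo_buf,
demo_init and demo_mmap are hypothetical; the APIs themselves are the
real ones affected by this change.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

#define DEMO_BUF_SIZE	(16 * PAGE_SIZE)

/* Kernel-side mapping; SHMLBA-aligned once this patch is applied. */
static void *demo_buf;

static int demo_init(void)
{
	/* 32-bit addressable, zeroed, VM_USERMAP-tagged allocation. */
	demo_buf = vmalloc_32_user(DEMO_BUF_SIZE);
	return demo_buf ? 0 : -ENOMEM;
}

static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
	/*
	 * Map the very same pages into the calling process.  For a
	 * VM_SHARED mapping, mmap picks a SHMLBA-aligned user address
	 * on D-cache-aliasing platforms; with the kernel-side address
	 * also SHMLBA-aligned, both mappings fall into the same cache
	 * colour and writes are visible on either side.
	 */
	return remap_vmalloc_range(vma, demo_buf, vma->vm_pgoff);
}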
Signed-off-by: Roman Penyaev <rpenyaev@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>
Cc: Michal Hocko <mhocko@...e.com>
Cc: David S. Miller <davem@...emloft.net>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
---
mm/vmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 50b17c745149..e83961767dc1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1971,7 +1971,7 @@ EXPORT_SYMBOL(vmalloc_32);
*/
void *vmalloc_32_user(unsigned long size)
{
- return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+ return __vmalloc_node_range(size, SHMLBA, VMALLOC_START, VMALLOC_END,
GFP_VMALLOC32 | __GFP_ZERO, PAGE_KERNEL,
VM_USERMAP, NUMA_NO_NODE,
__builtin_return_address(0));
--
2.19.1