Message-Id: <20200617233512.177519-1-zhangalex@google.com>
Date: Wed, 17 Jun 2020 16:35:12 -0700
From: Kaiyu Zhang <zhangalex@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Alex Zhang <zhangalex@...gle.com>
Subject: [PATCH] mm/memory.c: make remap_pfn_range() reject unaligned addr
From: Alex Zhang <zhangalex@...gle.com>
This function implicitly assumes that the addr passed in is page
aligned. A non-page-aligned addr can ultimately trigger a kernel bug in
remap_pte_range(), as the exit condition of its mapping loop may never
be satisfied. This patch documents the requirement and adds an explicit
check for it.
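
To see why, here is the shape of the mapping loop in remap_pte_range(),
paraphrased from mm/memory.c (locking and lazy-MMU calls omitted):

        do {
                BUG_ON(!pte_none(*pte));
                set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
                pfn++;
        } while (pte++, addr += PAGE_SIZE, addr != end);

When the range crosses a pmd boundary, the end passed down by
remap_pmd_range() comes from pmd_addr_end() and is pmd aligned; an addr
that is not page aligned advances in PAGE_SIZE steps and never compares
equal to that end, so the exit test addr != end never fires.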
Signed-off-by: Alex Zhang <zhangalex@...gle.com>
---
mm/memory.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
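
For context (not part of this patch), the typical caller already passes
a page-aligned address by construction, e.g. a driver mmap handler
mapping at vma->vm_start. The handler below is hypothetical, purely for
illustration:

        /* Hypothetical mmap handler: vma->vm_start is always page
         * aligned, so this caller cannot trip the new check. */
        static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
        {
                unsigned long size = vma->vm_end - vma->vm_start;

                return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                       size, vma->vm_page_prot);
        }

Only callers that compute an unaligned addr by hand are affected; they
now fail fast with -EINVAL instead of looping in remap_pte_range().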
diff --git a/mm/memory.c b/mm/memory.c
index dc7f3543b1fd..16422acb6da8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
/**
* remap_pfn_range - remap kernel memory to userspace
* @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
* @pfn: page frame number of kernel physical memory address
* @size: size of mapping area
* @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
unsigned long remap_pfn = pfn;
int err;
+ if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
+ return -EINVAL;
+
/*
* Physically remapped pages are special. Tell the
* rest of the world about it:
--
2.27.0.111.gc72c7da667-goog