Message-Id: <20200901161459.11772-3-sumit.semwal@linaro.org>
Date: Tue, 1 Sep 2020 21:44:58 +0530
From: Sumit Semwal <sumit.semwal@...aro.org>
To: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Alexey Dobriyan <adobriyan@...il.com>,
Jonathan Corbet <corbet@....net>
Cc: Mauro Carvalho Chehab <mchehab+huawei@...nel.org>,
Kees Cook <keescook@...omium.org>,
Michal Hocko <mhocko@...e.com>,
Colin Cross <ccross@...gle.com>,
Alexey Gladkov <gladkov.alexey@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Jason Gunthorpe <jgg@...pe.ca>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Michel Lespinasse <walken@...gle.com>,
Michal Koutný <mkoutny@...e.com>,
Song Liu <songliubraving@...com>,
Huang Ying <ying.huang@...el.com>,
Vlastimil Babka <vbabka@...e.cz>,
Yang Shi <yang.shi@...ux.alibaba.com>,
chenqiwu <chenqiwu@...omi.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
John Hubbard <jhubbard@...dia.com>,
Mike Christie <mchristi@...hat.com>,
Bart Van Assche <bvanassche@....org>,
Amit Pundir <amit.pundir@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Christian Brauner <christian.brauner@...ntu.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Adrian Reber <areber@...hat.com>,
Nicolas Viennot <Nicolas.Viennot@...sigma.com>,
Al Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org,
John Stultz <john.stultz@...aro.org>,
Sumit Semwal <sumit.semwal@...aro.org>
Subject: [PATCH v7 2/3] mm: memory: Add access_remote_vm_locked variant
This allows accessing a remote vm while the mmap_lock is already
held by the caller.
While adding support for anonymous vma naming, show_map_vma()
needs to access the remote vm to get the name of the anonymous vma.
Since show_map_vma() already holds the mmap_lock, this _locked
variant is required.
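For illustration, a minimal sketch of the kind of caller this variant targets:
a reader that already holds the mmap_lock (as show_map_vma() does) and needs to
copy a NUL-terminated string out of the target mm. The helper name, the
gup_flags choice and the buffer handling below are hypothetical and not part of
this series:
#include <linux/mm.h>
/*
 * Hypothetical caller sketch, not part of this patch: copy a user-space
 * string from @mm while mmap_lock is already held, so the _locked variant
 * must be used instead of access_remote_vm().
 */
static int read_remote_name_locked(struct mm_struct *mm, unsigned long user_addr,
				   char *buf, int buflen)
{
	int n;

	/* mmap_lock is held by the caller; do not take it again here. */
	n = access_remote_vm_locked(mm, user_addr, buf, buflen - 1, 0);
	if (n <= 0)
		return -EFAULT;

	buf[n] = '\0';
	return 0;
}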
Signed-off-by: Sumit Semwal <sumit.semwal@...aro.org>
---
include/linux/mm.h | 2 ++
mm/memory.c | 49 ++++++++++++++++++++++++++++++++++++++++------
2 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ca6e6a81576b..e9212c0bb5ac 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1708,6 +1708,8 @@ extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
void *buf, int len, unsigned int gup_flags);
extern int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
unsigned long addr, void *buf, int len, unsigned int gup_flags);
+extern int access_remote_vm_locked(struct mm_struct *mm, unsigned long addr,
+ void *buf, int len, unsigned int gup_flags);
long get_user_pages_remote(struct mm_struct *mm,
unsigned long start, unsigned long nr_pages,
diff --git a/mm/memory.c b/mm/memory.c
index 602f4283122f..207be99390e9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4726,17 +4726,17 @@ EXPORT_SYMBOL_GPL(generic_access_phys);
/*
* Access another process' address space as given in mm. If non-NULL, use the
* given task for page fault accounting.
+ * This variant assumes that the mmap_lock is already held by the caller, so
+ * it doesn't take the mmap_lock.
*/
-int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
- unsigned long addr, void *buf, int len, unsigned int gup_flags)
+int __access_remote_vm_locked(struct task_struct *tsk, struct mm_struct *mm,
+ unsigned long addr, void *buf, int len,
+ unsigned int gup_flags)
{
struct vm_area_struct *vma;
void *old_buf = buf;
int write = gup_flags & FOLL_WRITE;
- if (mmap_read_lock_killable(mm))
- return 0;
-
/* ignore errors, just check how much was successfully transferred */
while (len) {
int bytes, ret, offset;
@@ -4785,9 +4785,46 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
buf += bytes;
addr += bytes;
}
+ return buf - old_buf;
+}
+
+/*
+ * Access another process' address space as given in mm. If non-NULL, use the
+ * given task for page fault accounting.
+ */
+int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
+ unsigned long addr, void *buf, int len, unsigned int gup_flags)
+{
+ int ret;
+
+ if (mmap_read_lock_killable(mm))
+ return 0;
+
+ ret = __access_remote_vm_locked(tsk, mm, addr, buf, len, gup_flags);
mmap_read_unlock(mm);
- return buf - old_buf;
+ return ret;
+}
+
+/**
+ * access_remote_vm_locked - access another process' address space without
+ * taking the mmap_lock. This allows nested calls from callers that have
+ * already taken the lock.
+ *
+ * @mm: the mm_struct of the target address space
+ * @addr: start address to access
+ * @buf: source or destination buffer
+ * @len: number of bytes to transfer
+ * @gup_flags: flags modifying lookup behaviour
+ *
+ * The caller must hold a reference on @mm, as well as hold the mmap_lock.
+ *
+ * Return: number of bytes copied from source to destination.
+ */
+int access_remote_vm_locked(struct mm_struct *mm, unsigned long addr, void *buf,
+ int len, unsigned int gup_flags)
+{
+ return __access_remote_vm_locked(NULL, mm, addr, buf, len, gup_flags);
}
/**
--
2.28.0