Message-Id: <20161025233609.5601-1-lstoakes@gmail.com>
Date: Wed, 26 Oct 2016 00:36:09 +0100
From: Lorenzo Stoakes <lstoakes@...il.com>
To: linux-mm@...ck.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>, Jan Kara <jack@...e.cz>,
Hugh Dickins <hughd@...gle.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Rik van Riel <riel@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Lorenzo Stoakes <lstoakes@...il.com>
Subject: [PATCH] mm: remove unnecessary __get_user_pages_unlocked() calls

In hva_to_pfn_slow() we are able to replace __get_user_pages_unlocked() with
get_user_pages_unlocked() since we can now pass gup_flags.
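
For reference, a sketch of the prototypes as they stand after the earlier
gup_flags conversion (argument order inferred from the call sites below, so
treat this as illustrative rather than authoritative):

	/* Illustrative prototypes only, inferred from the call sites in this patch. */
	long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
				     struct page **pages, unsigned int gup_flags);

	long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
				       unsigned long start, unsigned long nr_pages,
				       struct page **pages, unsigned int gup_flags);

Since hva_to_pfn_slow() only ever operates on current->mm, the explicit
tsk/mm pair buys us nothing there. FOLL_TOUCH is dropped from the caller's
flags on the understanding that get_user_pages_unlocked() applies it
internally.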

In async_pf_execute() we need to pass different tsk, mm arguments, so
get_user_pages_remote() is the sane replacement here (having added manual
acquisition and release of mmap_sem).

Since we pass a NULL pages parameter, the __get_user_pages_locked() call
underlying __get_user_pages_unlocked() would have bailed out before any
VM_FAULT_RETRY handling anyway, so we do not change this behaviour by
switching to get_user_pages_remote(), which does not invoke the
VM_FAULT_RETRY logic at all.
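
To spell out the locking contract relied on above (a sketch only, mirroring
the async_pf.c hunk below rather than adding anything new): the caller of
get_user_pages_remote() must hold mmap_sem, whereas the *_unlocked()
variants acquire and release it internally.

	/* Sketch: caller-locked remote gup, as async_pf_execute() now does. */
	down_read(&mm->mmap_sem);
	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL);
	up_read(&mm->mmap_sem);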

Signed-off-by: Lorenzo Stoakes <lstoakes@...il.com>
---
 virt/kvm/async_pf.c | 7 ++++---
 virt/kvm/kvm_main.c | 5 ++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 8035cc1..e8c832c 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -82,10 +82,11 @@ static void async_pf_execute(struct work_struct *work)
 	/*
 	 * This work is run asynchromously to the task which owns
 	 * mm and might be done in another context, so we must
-	 * use FOLL_REMOTE.
+	 * access remotely.
 	 */
-	__get_user_pages_unlocked(NULL, mm, addr, 1, NULL,
-				  FOLL_WRITE | FOLL_REMOTE);
+	down_read(&mm->mmap_sem);
+	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL);
+	up_read(&mm->mmap_sem);
 
 	kvm_async_page_present_sync(vcpu, apf);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2907b7b..c45d951 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1415,13 +1415,12 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 		npages = get_user_page_nowait(addr, write_fault, page);
 		up_read(&current->mm->mmap_sem);
 	} else {
-		unsigned int flags = FOLL_TOUCH | FOLL_HWPOISON;
+		unsigned int flags = FOLL_HWPOISON;
 
 		if (write_fault)
 			flags |= FOLL_WRITE;
 
-		npages = __get_user_pages_unlocked(current, current->mm, addr, 1,
-						   page, flags);
+		npages = get_user_pages_unlocked(addr, 1, page, flags);
 	}
 	if (npages != 1)
 		return npages;
--
2.10.1