Message-ID: <20200526062207.1360225-3-jhubbard@nvidia.com>
Date: Mon, 25 May 2020 23:22:07 -0700
From: John Hubbard <jhubbard@...dia.com>
To: LKML <linux-kernel@...r.kernel.org>
CC: Souptick Joarder <jrdr.linux@...il.com>,
John Hubbard <jhubbard@...dia.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Thomas Gleixner <tglx@...utronix.de>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
"H . Peter Anvin" <hpa@...or.com>, <x86@...nel.org>,
<kvm@...r.kernel.org>
Subject: [PATCH 2/2] KVM: SVM: convert get_user_pages() --> pin_user_pages()

This code was using get_user_pages*() in a "Case 2" scenario
(DMA/RDMA), as categorized in [1]. That means it's time to convert
the get_user_pages*() + release_pages() calls to pin_user_pages*() +
unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/
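
As a reference for similar conversions, the resulting "Case 2" pattern
looks roughly like the sketch below. This is illustrative only: the
helper name and the error handling are invented for the example, and
are not part of this patch.

#include <linux/mm.h>

/* Illustrative helper, not part of this patch. */
static int example_pin_user_buffer(unsigned long uaddr, unsigned long npages,
				   bool write, struct page **pages)
{
	long npinned;

	/* Previously: get_user_pages_fast(uaddr, npages, ..., pages) */
	npinned = pin_user_pages_fast(uaddr, npages,
				      write ? FOLL_WRITE : 0, pages);
	if (npinned != npages) {
		/*
		 * Pages pinned with pin_user_pages*() must be released
		 * with unpin_user_page*(), not put_page()/release_pages().
		 */
		if (npinned > 0)
			unpin_user_pages(pages, npinned);
		return -ENOMEM;
	}

	return 0;
}

The unpin side is symmetric: unpin_user_pages() on the pinned pages,
then free the pages array.
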
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: Wanpeng Li <wanpengli@...cent.com>
Cc: Jim Mattson <jmattson@...gle.com>
Cc: Joerg Roedel <joro@...tes.org>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: x86@...nel.org
Cc: kvm@...r.kernel.org
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
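Note for reviewers: release_pages() just drops one ordinary page
reference per page, while unpin_user_pages() releases the FOLL_PIN
reference that pin_user_pages*() takes. That is why both
release_pages() call sites are converted together with the
get_user_pages_fast() call: mixing the two APIs would leave the pin
accounting unbalanced.
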
 arch/x86/kvm/svm/sev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 9693db1af57c..a83f2e73bcbb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -349,7 +349,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 		return NULL;
 
 	/* Pin the user virtual address. */
-	npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
+	npinned = pin_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
 	if (npinned != npages) {
 		pr_err("SEV: Failure locking %lu pages.\n", npages);
 		goto err;
@@ -362,7 +362,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 
 err:
 	if (npinned > 0)
-		release_pages(pages, npinned);
+		unpin_user_pages(pages, npinned);
 
 	kvfree(pages);
 	return NULL;
@@ -373,7 +373,7 @@ static void sev_unpin_memory(struct kvm *kvm, struct page **pages,
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 
-	release_pages(pages, npages);
+	unpin_user_pages(pages, npages);
 	kvfree(pages);
 	sev->pages_locked -= npages;
 }
--
2.26.2