Message-Id: <20190807013340.9706-7-jhubbard@nvidia.com>
Date: Tue, 6 Aug 2019 18:33:05 -0700
From: john.hubbard@...il.com
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Dave Chinner <david@...morbit.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ira Weiny <ira.weiny@...el.com>, Jan Kara <jack@...e.cz>,
Jason Gunthorpe <jgg@...pe.ca>,
Jérôme Glisse <jglisse@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
amd-gfx@...ts.freedesktop.org, ceph-devel@...r.kernel.org,
devel@...verdev.osuosl.org, devel@...ts.orangefs.org,
dri-devel@...ts.freedesktop.org, intel-gfx@...ts.freedesktop.org,
kvm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-block@...r.kernel.org, linux-crypto@...r.kernel.org,
linux-fbdev@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-media@...r.kernel.org, linux-mm@...ck.org,
linux-nfs@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-rpi-kernel@...ts.infradead.org, linux-xfs@...r.kernel.org,
netdev@...r.kernel.org, rds-devel@....oracle.com,
sparclinux@...r.kernel.org, x86@...nel.org,
xen-devel@...ts.xenproject.org, John Hubbard <jhubbard@...dia.com>,
Joerg Roedel <joro@...tes.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>
Subject: [PATCH v3 06/41] x86/kvm: convert put_page() to put_user_page*()
From: John Hubbard <jhubbard@...dia.com>
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().
This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").
Cc: Joerg Roedel <joro@...tes.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: x86@...nel.org
Cc: kvm@...r.kernel.org
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
 arch/x86/kvm/svm.c  | 4 ++--
 virt/kvm/kvm_main.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7eafc6907861..ff93c923ed36 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1827,7 +1827,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 
 err:
 	if (npinned > 0)
-		release_pages(pages, npinned);
+		put_user_pages(pages, npinned);
 
 	kvfree(pages);
 	return NULL;
@@ -1838,7 +1838,7 @@ static void sev_unpin_memory(struct kvm *kvm, struct page **pages,
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 
-	release_pages(pages, npages);
+	put_user_pages(pages, npages);
 	kvfree(pages);
 	sev->pages_locked -= npages;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 887f3b0c2b60..4b6a596ea8e9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1499,7 +1499,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 
 		if (__get_user_pages_fast(addr, 1, 1, &wpage) == 1) {
 			*writable = true;
-			put_page(page);
+			put_user_page(page);
 			page = wpage;
 		}
 	}
@@ -1831,7 +1831,7 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
-		put_page(pfn_to_page(pfn));
+		put_user_page(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
--
2.22.0