Message-Id: <20190924212427.7734-4-leonardo@linux.ibm.com>
Date: Tue, 24 Sep 2019 18:24:20 -0300
From: Leonardo Bras <leonardo@...ux.ibm.com>
To: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
kvm-ppc@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org
Cc: Leonardo Bras <leonardo@...ux.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Arnd Bergmann <arnd@...db.de>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Christophe Leroy <christophe.leroy@....fr>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Nicholas Piggin <npiggin@...il.com>,
Mahesh Salgaonkar <mahesh@...ux.vnet.ibm.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ganesh Goudar <ganeshgr@...ux.ibm.com>,
Allison Randal <allison@...utok.net>,
Mike Rapoport <rppt@...ux.ibm.com>,
YueHaibing <yuehaibing@...wei.com>,
Ira Weiny <ira.weiny@...el.com>,
Jason Gunthorpe <jgg@...pe.ca>,
John Hubbard <jhubbard@...dia.com>,
Keith Busch <keith.busch@...el.com>
Subject: [PATCH v3 03/11] mm/gup: Apply counting method to monitor gup_pgd_range
As described, gup_pgd_range() is a lockless pagetable walk, so in order to
monitor it against THP split/collapse with the counting method, it is
necessary to bound it with {start,end}_lockless_pgtbl_walk().

On archs that don't use this method these are dummy (no-op) functions, so
no overhead is added there.
Signed-off-by: Leonardo Bras <leonardo@...ux.ibm.com>
---
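For context, here is a minimal sketch of what the
{start,end}_lockless_pgtbl_walk() hooks can look like. The no-op stubs
correspond to the dummy functions mentioned above; the config symbol, the
per-mm counter field, and the barriers are assumptions for illustration
only, not the exact code introduced earlier in this series:

#include <linux/atomic.h>
#include <linux/mm_types.h>

/* Hypothetical config symbol; archs using the counting method select it. */
#ifdef CONFIG_LOCKLESS_PGTBL_WALK_COUNTING
/*
 * Counting version: walkers increment a per-mm counter before the walk
 * and decrement it afterwards, so THP split/collapse can tell whether a
 * lockless walker might still be traversing the page tables.
 */
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* Counter field name is an assumption for this sketch. */
	atomic_inc(&mm->lockless_pgtbl_walkers);
	smp_mb__after_atomic();	/* walk starts only after the inc is visible */
}

static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
	smp_mb__before_atomic();	/* walk finishes before the dec is visible */
	atomic_dec(&mm->lockless_pgtbl_walkers);
}
#else
/* Dummy version: empty inlines, so archs not using the method pay no cost. */
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm) { }
static inline void end_lockless_pgtbl_walk(struct mm_struct *mm) { }
#endif

The split/collapse side would then, after hiding the page table from new
walkers and before freeing it, wait for the counter to drain, e.g.:

	/* Hypothetical serialization point on the collapse side. */
	while (atomic_read(&mm->lockless_pgtbl_walkers))
		cpu_relax();
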
mm/gup.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..eabd6fd15cf8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2325,6 +2325,7 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
struct page **pages)
{
+ struct mm_struct *mm;
unsigned long len, end;
unsigned long flags;
int nr = 0;
@@ -2352,9 +2353,12 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
gup_fast_permitted(start, end)) {
+ mm = current->mm;
+ start_lockless_pgtbl_walk(mm);
local_irq_save(flags);
gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
local_irq_restore(flags);
+ end_lockless_pgtbl_walk(mm);
}
return nr;
@@ -2404,6 +2408,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
unsigned int gup_flags, struct page **pages)
{
unsigned long addr, len, end;
+ struct mm_struct *mm;
int nr = 0, ret = 0;
if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM)))
@@ -2421,9 +2426,12 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
gup_fast_permitted(start, end)) {
+ mm = current->mm;
+ start_lockless_pgtbl_walk(mm);
local_irq_disable();
gup_pgd_range(addr, end, gup_flags, pages, &nr);
local_irq_enable();
+ end_lockless_pgtbl_walk(mm);
ret = nr;
}
--
2.20.1