Message-Id: <20200925222600.6832-2-peterx@redhat.com>
Date: Fri, 25 Sep 2020 18:25:57 -0400
From: Peter Xu <peterx@...hat.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: peterx@...hat.com, Jason Gunthorpe <jgg@...pe.ca>,
John Hubbard <jhubbard@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@....de>, Yang Shi <shy828301@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Kirill Shutemov <kirill@...temov.name>,
Hugh Dickins <hughd@...gle.com>, Jann Horn <jannh@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>, Jan Kara <jack@...e.cz>,
Andrea Arcangeli <aarcange@...hat.com>,
Leon Romanovsky <leonro@...dia.com>
Subject: [PATCH v2 1/4] mm: Introduce mm_struct.has_pinned
(Commit message largely collected from Jason Gunthorpe)
Reduce the chance of false positives from page_maybe_dma_pinned() by keeping
track of whether the mm_struct has ever been used with pin_user_pages(). This
allows code paths that might drive up the page ref_count to avoid any penalty
from handling dma_pinned pages.
Future work is planned to provide a more sophisticated solution, likely
turning this into a real counter. For now, make it an atomic_t but use it only
as a boolean, for simplicity.
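To illustrate the intended use, here is a minimal sketch (not part of this
patch; mm_has_pinned_pages() and page_needs_extra_care() are hypothetical
names) of how a caller can consult the new bit before paying for the
page_maybe_dma_pinned() heuristic:

#include <linux/mm.h>		/* page_maybe_dma_pinned() */
#include <linux/mm_types.h>	/* struct mm_struct */

static inline bool mm_has_pinned_pages(struct mm_struct *mm)
{
	/*
	 * Set-once semantics: a zero read means no page in this mm
	 * has ever been pinned with pin_user_pages(), so there is
	 * nothing for page_maybe_dma_pinned() to find.
	 */
	return atomic_read(&mm->has_pinned) != 0;
}

static bool page_needs_extra_care(struct mm_struct *mm, struct page *page)
{
	if (!mm_has_pinned_pages(mm))
		return false;	/* fast path: this mm never pinned anything */

	/* May still be a false positive from a raised refcount. */
	return page_maybe_dma_pinned(page);
}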
Suggested-by: Jason Gunthorpe <jgg@...pe.ca>
Signed-off-by: Peter Xu <peterx@...hat.com>
---
include/linux/mm_types.h | 10 ++++++++++
kernel/fork.c | 1 +
mm/gup.c | 6 ++++++
3 files changed, 17 insertions(+)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 496c3ff97cce..ed028af3cb19 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -436,6 +436,16 @@ struct mm_struct {
*/
atomic_t mm_count;
+ /**
+ * @has_pinned: Whether this mm has pinned any pages. This can
+ * be either replaced in the future by @pinned_vm when it
+ * becomes stable, or grow into a counter on its own. We're
+ * aggressive on this bit now - even if the pinned pages were
+ * unpinned later on, we'll still keep this bit set for the
+ * lifecycle of this mm just for simplicity.
+ */
+ atomic_t has_pinned;
+
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* PTE page table pages */
#endif
diff --git a/kernel/fork.c b/kernel/fork.c
index 49677d668de4..e65d8192d080 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1011,6 +1011,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
mm_pgtables_bytes_init(mm);
mm->map_count = 0;
mm->locked_vm = 0;
+ atomic_set(&mm->has_pinned, 0);
atomic64_set(&mm->pinned_vm, 0);
memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
spin_lock_init(&mm->page_table_lock);
diff --git a/mm/gup.c b/mm/gup.c
index e5739a1974d5..238667445337 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1255,6 +1255,9 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
BUG_ON(*locked != 1);
}
+ if (flags & FOLL_PIN)
+ atomic_set(&current->mm->has_pinned, 1);
+
/*
* FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
* is to set FOLL_GET if the caller wants pages[] filled in (but has
@@ -2660,6 +2663,9 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
FOLL_FAST_ONLY)))
return -EINVAL;
+ if (gup_flags & FOLL_PIN)
+ atomic_set(&current->mm->has_pinned, 1);
+
if (!(gup_flags & FOLL_FAST_ONLY))
might_lock_read(&current->mm->mmap_lock);
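One possible follow-up refinement for a set-once flag like this (only a
sketch, not part of this patch): read before writing, so that repeated pins
from the same mm do not keep dirtying the mm_struct cacheline with redundant
atomic_set() calls:

	if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
		atomic_set(&current->mm->has_pinned, 1);

Because the bit only ever transitions from 0 to 1 and is never cleared, two
racing writers setting it concurrently is harmless.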
--
2.26.2