Message-Id: <1457469802-11850-25-git-send-email-jglisse@redhat.com>
Date: Tue, 8 Mar 2016 15:43:17 -0500
From: Jérôme Glisse <jglisse@...hat.com>
To: akpm@...ux-foundation.org, <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, <joro@...tes.org>,
Mel Gorman <mgorman@...e.de>, "H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Larry Woodman <lwoodman@...hat.com>,
Rik van Riel <riel@...hat.com>,
Dave Airlie <airlied@...hat.com>,
Brendan Conoboy <blc@...hat.com>,
Joe Donohue <jdonohue@...hat.com>,
Christophe Harle <charle@...dia.com>,
Duncan Poole <dpoole@...dia.com>,
Sherry Cheung <SCheung@...dia.com>,
Subhash Gutti <sgutti@...dia.com>,
John Hubbard <jhubbard@...dia.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
Lucien Dunning <ldunning@...dia.com>,
Cameron Buschardt <cabuschardt@...dia.com>,
Arvind Gopalakrishnan <arvindg@...dia.com>,
Haggai Eran <haggaie@...lanox.com>,
Shachar Raindel <raindel@...lanox.com>,
Liran Liss <liranl@...lanox.com>,
Roland Dreier <roland@...estorage.com>,
Ben Sander <ben.sander@....com>,
Greg Stoner <Greg.Stoner@....com>,
John Bridgman <John.Bridgman@....com>,
Michael Mantor <Michael.Mantor@....com>,
Paul Blinzer <Paul.Blinzer@....com>,
Leonid Shamis <Leonid.Shamis@....com>,
Laurent Morichetti <Laurent.Morichetti@....com>,
Alexander Deucher <Alexander.Deucher@....com>,
Jérôme Glisse <jglisse@...hat.com>
Subject: [PATCH v12 24/29] HMM: allow getting a pointer to the spinlock protecting a directory.

Several use cases need a pointer to the spinlock protecting a page table
directory, not just the lock/unlock helpers, so add helpers that return it.
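
A minimal, hypothetical usage sketch (the caller, its arguments and the
dma_addr_t entry type are illustrative assumptions, not part of this patch):
with the lock pointer in hand, a caller can hand the directory spinlock to
code that expects an explicit spinlock_t * instead of open-coding the
level test.

    #include <linux/types.h>	/* dma_addr_t */
    #include <linux/spinlock.h>
    #include <linux/hmm_pt.h>	/* hmm_pt_iter, hmm_pt_iter_directory_lock_ptr() */

    /*
     * Hypothetical example, not part of this patch: fetch the spinlock
     * protecting the current directory from the iterator and use it to
     * serialize an entry update.
     */
    static void example_update_entry(struct hmm_pt_iter *iter,
                                     dma_addr_t *hmm_pte,
                                     dma_addr_t new_entry)
    {
            spinlock_t *ptl = hmm_pt_iter_directory_lock_ptr(iter);

            spin_lock(ptl);
            *hmm_pte = new_entry;
            spin_unlock(ptl);
    }
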
Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
---
include/linux/hmm_pt.h | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/include/linux/hmm_pt.h b/include/linux/hmm_pt.h
index f745d6c..22100a6 100644
--- a/include/linux/hmm_pt.h
+++ b/include/linux/hmm_pt.h
@@ -255,6 +255,16 @@ static inline void hmm_pt_directory_lock(struct hmm_pt *pt,
spin_lock(&pt->lock);
}
+static inline spinlock_t *hmm_pt_directory_lock_ptr(struct hmm_pt *pt,
+ struct page *ptd,
+ unsigned level)
+{
+ if (level)
+ return &ptd->ptl;
+ else
+ return &pt->lock;
+}
+
static inline void hmm_pt_directory_unlock(struct hmm_pt *pt,
struct page *ptd,
unsigned level)
@@ -272,6 +282,13 @@ static inline void hmm_pt_directory_lock(struct hmm_pt *pt,
spin_lock(&pt->lock);
}
+static inline spinlock_t *hmm_pt_directory_lock_ptr(struct hmm_pt *pt,
+ struct page *ptd,
+ unsigned level)
+{
+ return &pt->lock;
+}
+
static inline void hmm_pt_directory_unlock(struct hmm_pt *pt,
struct page *ptd,
unsigned level)
@@ -358,6 +375,14 @@ static inline void hmm_pt_iter_directory_lock(struct hmm_pt_iter *iter)
hmm_pt_directory_lock(pt, iter->ptd[pt->llevel - 1], pt->llevel);
}
+static inline spinlock_t *hmm_pt_iter_directory_lock_ptr(struct hmm_pt_iter *i)
+{
+ struct hmm_pt *pt = i->pt;
+
+ return hmm_pt_directory_lock_ptr(pt, i->ptd[pt->llevel - 1],
+ pt->llevel);
+}
+
static inline void hmm_pt_iter_directory_unlock(struct hmm_pt_iter *iter)
{
struct hmm_pt *pt = iter->pt;
--
2.4.3