Message-ID: <20200201034029.4063170-10-jhubbard@nvidia.com>
Date: Fri, 31 Jan 2020 19:40:26 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Dave Chinner <david@...morbit.com>,
Ira Weiny <ira.weiny@...el.com>, Jan Kara <jack@...e.cz>,
Jason Gunthorpe <jgg@...pe.ca>,
Jonathan Corbet <corbet@....net>,
Jérôme Glisse <jglisse@...hat.com>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Michal Hocko <mhocko@...e.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Shuah Khan <shuah@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
<linux-doc@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<linux-kselftest@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
<linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
John Hubbard <jhubbard@...dia.com>
Subject: [PATCH v3 09/12] mm: dump_page(): better diagnostics for huge pinned pages
As part of pin_user_pages() and related API calls, pages are
"dma-pinned". For compound pages of order > 1, the per-page
accounting of dma pins is accomplished via the 3rd struct page in the
compound page. To support debugging of pin_user_pages()-related
problems, enhance dump_page() to report the pin count in that case.

Documentation/core-api/pin_user_pages.rst is also updated accordingly.
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
Documentation/core-api/pin_user_pages.rst | 7 +++++
mm/debug.c | 34 +++++++++++++++++------
2 files changed, 33 insertions(+), 8 deletions(-)
diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index 3f72b1ea1104..dd21ea140ef4 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -215,6 +215,13 @@ Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is
because there is a noticeable performance drop in unpin_user_page(), when they
are activated.
+Other diagnostics
+=================
+
+dump_page() has been enhanced slightly, to handle these new counting fields, and
+to better report on compound pages in general. Specifically, for compound pages
+with order > 1, the exact (hpage_pinned_refcount) pincount is reported.
+
References
==========
diff --git a/mm/debug.c b/mm/debug.c
index beb1c59d784b..db81b11345be 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -57,10 +57,20 @@ static void __dump_tail_page(struct page *page, int mapcount)
page, page_ref_count(page), mapcount, page->mapping,
page_to_pgoff(page));
} else {
- pr_warn("page:%px compound refcount:%d mapcount:%d mapping:%px "
- "index:%#lx compound_mapcount:%d\n",
- page, page_ref_count(head), mapcount, head->mapping,
- page_to_pgoff(head), compound_mapcount(page));
+ if (hpage_pincount_available(page))
+ pr_warn("page:%px compound refcount:%d mapcount:%d "
+ "mapping:%px index:%#lx compound_mapcount:%d "
+ "compound_pincount:%d\n",
+ page, page_ref_count(head), mapcount,
+ head->mapping, page_to_pgoff(head),
+ compound_mapcount(page),
+ compound_pincount(page));
+ else
+ pr_warn("page:%px compound refcount:%d mapcount:%d "
+ "mapping:%px index:%#lx compound_mapcount:%d\n",
+ page, page_ref_count(head), mapcount,
+ head->mapping, page_to_pgoff(head),
+ compound_mapcount(page));
}
if (page_ref_count(page) != 0)
@@ -103,10 +113,18 @@ void __dump_page(struct page *page, const char *reason)
if (PageTail(page))
__dump_tail_page(page, mapcount);
- else
- pr_warn("page:%px refcount:%d mapcount:%d mapping:%px index:%#lx\n",
- page, page_ref_count(page), mapcount,
- page->mapping, page_to_pgoff(page));
+ else {
+ if (hpage_pincount_available(page))
+ pr_warn("page:%px refcount:%d mapcount:%d mapping:%px "
+ "index:%#lx compound pincount: %d\n",
+ page, page_ref_count(page), mapcount,
+ page->mapping, page_to_pgoff(page),
+ compound_pincount(page));
+ else
+ pr_warn("page:%px refcount:%d mapcount:%d mapping:%px "
+ "index:%#lx\n", page, page_ref_count(page),
+ mapcount, page->mapping, page_to_pgoff(page));
+ }
if (PageKsm(page))
type = "ksm ";
else if (PageAnon(page))
--
2.25.0