Message-ID: <20200806134851.GN23808@casper.infradead.org>
Date: Thu, 6 Aug 2020 14:48:51 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: John Hubbard <jhubbard@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
cai@....pw, kirill@...temov.name, rppt@...ux.ibm.com,
william.kucharski@...cle.com,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v2] mm, dump_page: do not crash with bad
compound_mapcount()
On Thu, Aug 06, 2020 at 01:45:11PM +0200, Vlastimil Babka wrote:
> How about this additional patch now that we have head_mapcount()? (I wouldn't
> go for squashing as the goal and scope is too different).
I like it. It bothers me that the compiler doesn't know that
compound_head(compound_head(x)) == compound_head(x). I updated
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=32911 with a request to be
able to tell the compiler that compound_head() is idempotent.
> The bloat-o-meter difference without DEBUG_VM is the following:
>
> add/remove: 0/0 grow/shrink: 1/4 up/down: 32/-56 (-24)
> Function old new delta
> __split_huge_pmd 2867 2899 +32
> shrink_page_list 3860 3847 -13
> reuse_swap_page 762 748 -14
> page_trans_huge_mapcount 153 139 -14
> total_mapcount 187 172 -15
> Total: Before=8687306, After=8687282, chg -0.00%
That's great. I'm expecting improvements from my thp_head() macro when
that lands (currently in Andrew's tree). I have been reluctant to replace
current callers of compound_head() with thp_head(), but I suspect PF_HEAD
could use thp_head() and save a few bytes on a tinyconfig build.
> +++ b/mm/huge_memory.c
> @@ -2125,7 +2125,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> * Set PG_double_map before dropping compound_mapcount to avoid
> * false-negative page_mapped().
> */
> - if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
> + if (head_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
I'm a little nervous about this one. The page does actually come from
pmd_page(), and today that's guaranteed to be a head page. But I'm
not convinced that's going to still be true in twenty years. With the
current THP patchset, I won't allocate pages larger than PMD order, but
I can see there being interest in tracking pages in chunks larger than
2MB in the future. And then pmd_page() might well return a tail page.
So it might be a good idea to not convert this one.
> @@ -2467,7 +2467,7 @@ int total_mapcount(struct page *page)
> if (likely(!PageCompound(page)))
> return atomic_read(&page->_mapcount) + 1;
>
> - compound = compound_mapcount(page);
> + compound = head_mapcount(page);
> if (PageHuge(page))
> return compound;
> ret = compound;
Yes. This function is a little confusing because it uses PageCompound()
all the way through when it really should be using PageHead() ... because
the opening line of the function is: VM_BUG_ON_PAGE(PageTail(page), page);
> @@ -2531,7 +2531,7 @@ int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
> ret -= 1;
> _total_mapcount -= HPAGE_PMD_NR;
> }
> - mapcount = compound_mapcount(page);
> + mapcount = head_mapcount(page);
> ret += mapcount;
> _total_mapcount += mapcount;
> if (total_mapcount)
Yes, we called compound_head() earlier in the function. Safe.
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 9ee4211835c6..c5e722de38b8 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1673,7 +1673,7 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
> map_swapcount -= 1;
> _total_mapcount -= HPAGE_PMD_NR;
> }
> - mapcount = compound_mapcount(page);
> + mapcount = head_mapcount(page);
> map_swapcount += mapcount;
> _total_mapcount += mapcount;
> if (total_mapcount)
Yes. page is a head page at this point.
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a086c104a9a6..72218cdfd902 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1248,7 +1248,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
> * away. Chances are some or all of the
> * tail pages can be freed without IO.
> */
> - if (!compound_mapcount(page) &&
> + if (!head_mapcount(page) &&
> split_huge_page_to_list(page,
> page_list))
> goto activate_locked;
Yes. We don't put (can't put!) tail pages on the lists, so this must be a
head page.