Message-ID: <ZVYsiNxXGJCk0EYs@casper.infradead.org>
Date: Thu, 16 Nov 2023 14:51:52 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Mike Kravetz <mike.kravetz@...cle.com>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Lorenzo Stoakes <lstoakes@...il.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
John Hubbard <jhubbard@...dia.com>,
Mike Rapoport <rppt@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
David Hildenbrand <david@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Rik van Riel <riel@...riel.com>,
James Houghton <jthoughton@...gle.com>,
Yang Shi <shy828301@...il.com>,
Jason Gunthorpe <jgg@...dia.com>,
Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH RFC 07/12] mm/gup: Refactor record_subpages() to find 1st
small page
On Wed, Nov 15, 2023 at 08:29:03PM -0500, Peter Xu wrote:
> All the fast-gup functions take a tail page to operate on, and always
> need to do page mask calculations before feeding it into record_subpages().
>
> Merge that logic into record_subpages(), so that we always pass in a
> head page and leave the rest of the calculation to record_subpages().
This is a bit fragile. You're assuming that pmd_page() always returns
a head page, and that's only true today because I looked at the work
required vs the reward and decided to cap the large folio size at PMD
size. If we allowed 2*PMD_SIZE (e.g. 4MB on x86), pmd_page() would not
necessarily return a head page. There is a small amount of demand
for >PMD-size large folio support, so I suspect we will want to do
this eventually. I'm not particularly trying to do these conversions,
but it would be good not to add more assumptions that pmd_page()
returns a head page.
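
For illustration only, a quick untested sketch: deriving the folio
explicitly sidesteps the assumption, since page_folio() resolves to
the head no matter where the PMD points.

	struct page *page = pmd_page(orig);	/* may be a tail page one day */
	struct folio *folio = page_folio(page);	/* always the folio head */
	/* offset of the mapped page within the folio, not the PMD mapping */
	unsigned long idx = folio_page_idx(folio, page);

record_subpages() could then index from folio_page(folio, 0) rather
than trusting its caller to pass a head page.
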
> +static int record_subpages(struct page *head, unsigned long sz,
> + unsigned long addr, unsigned long end,
> + struct page **pages)
> @@ -2870,8 +2873,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
> pages, nr);
> }
>
> - page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
> - refs = record_subpages(page, addr, end, pages + *nr);
> + page = pmd_page(orig);
> + refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
>
> folio = try_grab_folio(page, refs, flags);
> if (!folio)
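
For reference, the helper body implied by the new signature would
presumably fold the old nth_page() offset math in; a sketch of what
that might look like (my reconstruction, not the unquoted patch body):

static int record_subpages(struct page *head, unsigned long sz,
			   unsigned long addr, unsigned long end,
			   struct page **pages)
{
	struct page *page;
	int nr;

	/* First small page covered by [addr, end) within this mapping */
	page = nth_page(head, (addr & (sz - 1)) >> PAGE_SHIFT);
	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
		pages[nr] = nth_page(page, nr);

	return nr;
}

Note the offset math only needs @head to be the page mapped at the
base of the sz-aligned region; treating it as a folio head is the
extra assumption at issue above.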