Message-ID: <83497a70-45c3-4386-8994-92207ab521bd@arm.com>
Date: Mon, 4 Dec 2023 12:57:05 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: David Hildenbrand <david@...hat.com>, Zi Yan <ziy@...dia.com>,
Matthew Wilcox <willy@...radead.org>
Cc: Will Deacon <will@...nel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Arnd Bergmann <arnd@...db.de>, Yu Zhao <yuzhao@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Yin Fengwei <fengwei.yin@...el.com>,
Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 4/4] mm/mmu_gather: Store and process pages in contig
ranges
On 04/12/2023 12:43, David Hildenbrand wrote:
> On 04.12.23 13:39, Ryan Roberts wrote:
>> On 04/12/2023 12:28, David Hildenbrand wrote:
>>> On 04.12.23 13:26, Ryan Roberts wrote:
>>>>>>>
>>>>>>> Also, struct page (memmap) might not always be contiguous, so using struct
>>>>>>> page pointers to represent a folio range might not give the result you want.
>>>>>>> See nth_page() and folio_page_idx() in include/linux/mm.h.
>>>>>>
>>>>>> Is that true for pages within the same folio too? Or are all pages in a folio
>>>>>> guaranteed contiguous? Perhaps I'm better off using pfn?
>>>>>
>>>>> folio_page_idx() says not all pages in a folio are guaranteed to be contiguous.
>>>>> PFN might be a better choice.
>>>>
>>>> Hi Zi, Matthew,
>>>>
>>>> Zi made this comment a couple of months back that it is incorrect to assume
>>>> that `struct page`s within a folio are (virtually) contiguous. I'm not sure
>>>> that's really the case, though? I see other sites in the source that do
>>>> page++ when iterating over a folio, e.g. smaps_account(),
>>>> splice_folio_into_pipe(), __collapse_huge_page_copy(), etc.
>>>>
>>>> Any chance someone could explain the rules?
>>>
>>> With the vmemmap, they are contiguous. Without a vmemmap, but with sparsemem, we
>>> might end up allocating one memmap chunk per memory section (e.g., 128 MiB).
>>>
>>> So, for example, a 1 GiB hugetlb page could cross multiple 128 MiB sections, and
>>> therefore, the memmap might not be virtually consecutive.
>>
>> OK, is a "memory section" always 128 MiB, or is it variable? If fixed, does
>> that mean it's impossible for a THP to cross section boundaries (because a
>> THP is always smaller than a section)?
>
> Section size is variable (see SECTION_SIZE_BITS), but IIRC, buddy allocations
> will never cross them.
>
>>
>> I'm trying to figure out why my original usage in this series was wrong, but
>> presumably the other places I mentioned are safe.
>
> If you're only dealing with buddy allocations, then *currently* they might
> always fall within a single memory section.
OK, that makes sense - thanks!
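
For anyone following along, here is a minimal sketch of the pattern being
discussed, as I understand it (the function names below are made up for
illustration; folio_page(), folio_nr_pages(), nth_page() and page_to_pfn()
are the real helpers from include/linux/mm.h):

/*
 * Sketch only: iterate the pages of a folio without assuming the memmap
 * is virtually contiguous. With CONFIG_SPARSEMEM but without
 * CONFIG_SPARSEMEM_VMEMMAP, struct pages in different sections may not
 * be adjacent in virtual memory, so use folio_page()/nth_page() (which
 * fall back to pfn arithmetic in that config) rather than raw page++.
 */
static void walk_folio_pages(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);

	for (i = 0; i < nr; i++) {
		struct page *page = folio_page(folio, i);

		/* ... operate on page ... */
	}
}

/*
 * Sketch only: pfns, unlike struct page pointers, are always linear, so
 * two pages are physically adjacent iff their pfns are consecutive,
 * regardless of how the memmap is laid out.
 */
static bool pages_physically_adjacent(struct page *a, struct page *b)
{
	return page_to_pfn(a) + 1 == page_to_pfn(b);
}

So tracking the range by pfn, as Zi suggested, sidesteps the memmap
contiguity question entirely; and per David's point above, allocations that
come straight from the buddy should currently never span a section anyway.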