Message-ID: <ee6ff4d7-8e48-487b-9685-ce5363f6b615@redhat.com>
Date: Wed, 30 Jul 2025 14:54:46 +0200
From: David Hildenbrand <david@...hat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
xen-devel@...ts.xenproject.org, linux-fsdevel@...r.kernel.org,
nvdimm@...ts.linux.dev, Andrew Morton <akpm@...ux-foundation.org>,
Juergen Gross <jgross@...e.com>, Stefano Stabellini
<sstabellini@...nel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
Dan Williams <dan.j.williams@...el.com>, Matthew Wilcox
<willy@...radead.org>, Jan Kara <jack@...e.cz>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Vlastimil Babka
<vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
Zi Yan <ziy@...dia.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Nico Pache <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
Jann Horn <jannh@...gle.com>, Pedro Falcato <pfalcato@...e.de>,
Hugh Dickins <hughd@...gle.com>, Oscar Salvador <osalvador@...e.de>,
Lance Yang <lance.yang@...ux.dev>
Subject: Re: [PATCH v2 7/9] mm/memory: factor out common code from
vm_normal_page_*()
On 18.07.25 14:43, Lorenzo Stoakes wrote:
> On Thu, Jul 17, 2025 at 10:03:44PM +0200, David Hildenbrand wrote:
>> On 17.07.25 21:55, Lorenzo Stoakes wrote:
>>> On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
>>>>> @@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>>> print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
>>>>> return NULL;
>>>>> }
>>>>> -
>>>>> - if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>>>> - if (vma->vm_flags & VM_MIXEDMAP) {
>>>>> - if (!pfn_valid(pfn))
>>>>> - return NULL;
>>>>> - goto out;
>>>>> - } else {
>>>>> - unsigned long off;
>>>>> - off = (addr - vma->vm_start) >> PAGE_SHIFT;
>>>>> - if (pfn == vma->vm_pgoff + off)
>>>>> - return NULL;
>>>>> - if (!is_cow_mapping(vma->vm_flags))
>>>>> - return NULL;
>>>>> - }
>>>>> - }
>>>>> -
>>>>> - if (is_huge_zero_pfn(pfn))
>>>>> - return NULL;
>>>>> - if (unlikely(pfn > highest_memmap_pfn)) {
>>>>> - print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
>>>>> - return NULL;
>>>>> - }
>>>>> -
>>>>> - /*
>>>>> - * NOTE! We still have PageReserved() pages in the page tables.
>>>>> - * eg. VDSO mappings can cause them to exist.
>>>>> - */
>>>>> -out:
>>>>> - return pfn_to_page(pfn);
>>>>> + return vm_normal_page_pfn(vma, addr, pfn, pmd_val(pmd));
>>>>
>>>> Hmm this seems broken, because you're now making these special on arches with
>>>> pte_special() right? But then you're invoking the not-special function?
>>>>
>>>> Also for non-pte_special() arches you're kind of implying they _maybe_ could be
>>>> special.
>>>
>>> OK sorry the diff caught me out here, you explicitly handle the pmd_special()
>>> case here, duplicatively (yuck).
>>>
>>> Maybe you fix this up in a later patch :)
>>
>> I had that, but the conditions depend on the level, meaning: unnecessary
>> checks for pte/pmd/pud level.
>>
>> I had a variant where I would pass "bool special" into vm_normal_page_pfn(),
>> but I didn't like it.
>>
>> To optimize out, I would have to provide a "level" argument, and did not
>> convince myself yet that that is a good idea at this point.
>
> Yeah fair enough. That probably isn't worth it or might end up making things
> even more ugly.
So, I decided to add the level argument, but not use it to optimize the checks,
only to forward it to the new print_bad_pte().

The new helper will be:
/**
* __vm_normal_page() - Get the "struct page" associated with a page table entry.
* @vma: The VMA mapping the page table entry.
* @addr: The address where the page table entry is mapped.
* @pfn: The PFN stored in the page table entry.
 * @special: Whether the page table entry is marked "special".
 * @entry: The page table entry value for error reporting purposes only.
 * @level: The page table level for error reporting purposes only.
...
*/
static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
unsigned long addr, unsigned long pfn, bool special,
unsigned long long entry, enum pgtable_level level)
...
And vm_normal_page() will, for example, be:
/**
* vm_normal_page() - Get the "struct page" associated with a PTE
* @vma: The VMA mapping the @pte.
* @addr: The address where the @pte is mapped.
* @pte: The PTE.
*
* Get the "struct page" associated with a PTE. See __vm_normal_page()
* for details on "normal" and "special" mappings.
*
 * Return: The "struct page" if this is a "normal" mapping, or NULL if this
 * is a "special" mapping.
*/
struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
pte_t pte)
{
return __vm_normal_page(vma, addr, pte_pfn(pte), pte_special(pte),
pte_val(pte), PGTABLE_LEVEL_PTE);
}
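
For completeness, the PMD variant would then be the obvious analogue. Sketch
only: PGTABLE_LEVEL_PMD is assumed by analogy with PGTABLE_LEVEL_PTE above,
and pmd_special() is only meaningful on configs that support special PMD
entries:

struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
		pmd_t pmd)
{
	/* Forward the PMD's pfn, special bit and raw value, tagged with
	 * the level so print_bad_page_map() can report it correctly. */
	return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
				pmd_val(pmd), PGTABLE_LEVEL_PMD);
}

That keeps the per-level wrappers trivial while all the "normal" vs.
"special" logic lives in one place.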
--
Cheers,
David / dhildenb