Message-ID: <dde6d861-daa3-49ed-ad4f-ff9dcaf1f2b8@linux.intel.com>
Date: Tue, 26 Aug 2025 10:49:29 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>, "Tian, Kevin"
 <kevin.tian@...el.com>, Jason Gunthorpe <jgg@...dia.com>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
 Robin Murphy <robin.murphy@....com>, Jann Horn <jannh@...gle.com>,
 Vasant Hegde <vasant.hegde@....com>, Alistair Popple <apopple@...dia.com>,
 Peter Zijlstra <peterz@...radead.org>, Uladzislau Rezki <urezki@...il.com>,
 Jean-Philippe Brucker <jean-philippe@...aro.org>,
 Andy Lutomirski <luto@...nel.org>, "Lai, Yi1" <yi1.lai@...el.com>,
 "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
 "security@...nel.org" <security@...nel.org>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH v3 1/1] iommu/sva: Invalidate KVA range on kernel TLB
 flush

On 8/26/25 09:25, Baolu Lu wrote:
> On 8/26/25 06:36, Dave Hansen wrote:
>> On 8/22/25 20:26, Baolu Lu wrote:
>>> +static struct {
>>> +    /* list for pagetable_dtor_free() */
>>> +    struct list_head dtor;
>>> +    /* list for __free_page() */
>>> +    struct list_head page;
>>> +    /* list for free_pages() */
>>> +    struct list_head pages;
>>> +    /* protect all the ptdesc lists */
>>> +    spinlock_t lock;
>>> +    struct work_struct work;
>>
>> Could you explain a bit why this now needs three separate lists? Seems
>> like pure overkill.
> 
> Yes, sure.
> 
> The three separate lists are needed because we're handling three
> distinct types of page deallocation. Grouping the pages this way allows
> the workqueue handler to free each type using the correct function.

Please allow me to add more details.
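
Conceptually, the work handler just drains each list with its matching
free routine, roughly like the simplified sketch below (the names, the
plain spin_lock, and the order handling are illustrative here, not the
exact patch code):

/* 'kva_free' stands for the static struct in the hunk quoted above */
static void kva_free_work_fn(struct work_struct *work)
{
	struct ptdesc *ptdesc, *tmp;
	LIST_HEAD(dtor_list);
	LIST_HEAD(page_list);
	LIST_HEAD(pages_list);

	/* detach everything under the lock, free outside of it */
	spin_lock(&kva_free.lock);
	list_splice_init(&kva_free.dtor, &dtor_list);
	list_splice_init(&kva_free.page, &page_list);
	list_splice_init(&kva_free.pages, &pages_list);
	spin_unlock(&kva_free.lock);

	/* PTE pages: ptdesc cleanup and free in one step */
	list_for_each_entry_safe(ptdesc, tmp, &dtor_list, pt_list) {
		list_del(&ptdesc->pt_list);
		pagetable_dtor_free(ptdesc);
	}

	/* single pages that need no ptdesc cleanup */
	list_for_each_entry_safe(ptdesc, tmp, &page_list, pt_list) {
		list_del(&ptdesc->pt_list);
		__free_page(ptdesc_page(ptdesc));
	}

	/* contiguous blocks; order is almost always 0, see below */
	list_for_each_entry_safe(ptdesc, tmp, &pages_list, pt_list) {
		list_del(&ptdesc->pt_list);
		free_pages((unsigned long)page_address(ptdesc_page(ptdesc)), 0);
	}
}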

> 
> - pagetable_dtor_free(): This is for freeing PTE pages, which require
>    specific cleanup of a ptdesc structure.

This is used in

static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)

and

int pud_free_pmd_page(pud_t *pud, unsigned long addr)
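
For example, the generic pte_free_kernel() boils down to (paraphrasing
the asm-generic version from memory, so the exact code in a given tree
may differ):

static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
	pagetable_dtor_free(virt_to_ptdesc(pte));
}

i.e. the ptdesc destructor has to run before the page goes back to the
allocator, which is why these pages get their own list.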

> 
>   - __free_page(): This is for freeing a single page.

This is used in

static void cpa_collapse_large_pages(struct cpa_data *cpa)
{
	... ...

	list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) {
		list_del(&ptdesc->pt_list);
		__free_page(ptdesc_page(ptdesc));
	}
}

> 
>   - free_pages(): This is for freeing a contiguous block of pages that
>     were allocated together.

This is used in

static void __meminit free_pagetable(struct page *page, int order)
{
	... ...

	free_pages((unsigned long)page_address(page), order);
}

What's strange is that order is almost always 0, except in the
remove_pmd_table() -> free_hugepage_table() path, where order can be
greater than 0. However, in that path free_hugepage_table() isn't used
to free a page table page itself; instead, it's used to free the actual
pages that a leaf PMD is pointing to.

static void __meminit
remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
                  bool direct, struct vmem_altmap *altmap)
{
         ... ...

         if (pmd_leaf(*pmd)) {
                 if (IS_ALIGNED(addr, PMD_SIZE) &&
                     IS_ALIGNED(next, PMD_SIZE)) {
                         if (!direct)
                                 free_hugepage_table(pmd_page(*pmd),
                                                     altmap);

                         spin_lock(&init_mm.page_table_lock);
                         pmd_clear(pmd);
                         spin_unlock(&init_mm.page_table_lock);
                         pages++;
                 } else if (vmemmap_pmd_is_unused(addr, next)) {
                                 free_hugepage_table(pmd_page(*pmd),
                                                     altmap);
                                 spin_lock(&init_mm.page_table_lock);
                                 pmd_clear(pmd);
                                 spin_unlock(&init_mm.page_table_lock);
                 }
                 continue;

         ... ...
}
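
For reference, free_hugepage_table() is where the non-zero order comes
from; from memory it looks roughly like this (arch/x86/mm/init_64.c),
so the exact code may differ slightly:

static void __meminit free_hugepage_table(struct page *page,
					  struct vmem_altmap *altmap)
{
	if (altmap)
		vmem_altmap_free(altmap, PMD_SIZE / PAGE_SIZE);
	else
		free_pagetable(page, get_order(PMD_SIZE));
}

So when order is greater than 0, what is actually being freed is the
huge page mapped by the leaf PMD, not a page table page.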

Is this a misuse of free_pagetable(), or am I overlooking something?

Thanks,
baolu

