Message-ID: <8cdb459f-f7d1-4ca0-a6a0-5c83d5092cd8@linux.intel.com>
Date: Thu, 16 Oct 2025 16:00:47 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>,
syzbot ci <syzbot+cid009622971eb4566@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, apopple@...dia.com, bp@...en8.de,
dave.hansen@...ux.intel.com, david@...hat.com, iommu@...ts.linux.dev,
jannh@...gle.com, jean-philippe@...aro.org, jgg@...dia.com, joro@...tes.org,
kevin.tian@...el.com, liam.howlett@...cle.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, lorenzo.stoakes@...cle.com, luto@...nel.org,
mhocko@...nel.org, mingo@...hat.com, peterz@...radead.org,
robin.murphy@....com, rppt@...nel.org, security@...nel.org,
stable@...r.kernel.org, tglx@...utronix.de, urezki@...il.com,
vasant.hegde@....com, vbabka@...e.cz, will@...nel.org, willy@...radead.org,
x86@...nel.org, yi1.lai@...el.com
Cc: syzbot@...ts.linux.dev, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot ci] Re: Fix stale IOTLB entries for kernel address space
On 10/16/25 00:25, Dave Hansen wrote:
> Here's the part that confuses me:
>
> On 10/14/25 13:59, syzbot ci wrote:
>> page last free pid 5965 tgid 5964 stack trace:
>> reset_page_owner include/linux/page_owner.h:25 [inline]
>> free_pages_prepare mm/page_alloc.c:1394 [inline]
>> __free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2906
>> pmd_free_pte_page+0xa1/0xc0 arch/x86/mm/pgtable.c:783
>> vmap_try_huge_pmd mm/vmalloc.c:158 [inline]
> ...
>
> So, vmap_try_huge_pmd() did a pmd_free_pte_page(). Yet, somehow, the PMD
> stuck around so that it *could* be used after being freed. It _looks_
> like pmd_free_pte_page() freed the page, returned 0, and made
> vmap_try_huge_pmd() return early, skipping the pmd_set_huge().
>
> But I don't know how that could possibly happen.
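For context, the call site in question looks roughly like this (a simplified
paraphrase of mm/vmalloc.c with the alignment checks condensed, not a verbatim
quote):

static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
                             phys_addr_t phys_addr, pgprot_t prot,
                             unsigned int max_page_shift)
{
        /* Only attempt a huge mapping for an aligned, PMD-sized range. */
        if (max_page_shift < PMD_SHIFT || end - addr != PMD_SIZE ||
            !IS_ALIGNED(addr, PMD_SIZE) || !IS_ALIGNED(phys_addr, PMD_SIZE))
                return 0;

        /*
         * An existing PTE page must be freed before the huge mapping can
         * be installed. If pmd_free_pte_page() fails (returns 0), we bail
         * out here and never reach pmd_set_huge().
         */
        if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
                return 0;

        return pmd_set_huge(pmd, phys_addr, prot);
}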
The reported issue is related only to this patch:
- [PATCH v6 3/7] x86/mm: Use 'ptdesc' when freeing PMD pages
It appears that the pmd_ptdesc() helper can't be used directly in this
patch. pmd_ptdesc() returns the ptdesc of the page table page that the
PMD entry itself resides in:
static inline struct page *pmd_pgtable_page(pmd_t *pmd)
{
        unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);

        return virt_to_page((void *)((unsigned long) pmd & mask));
}

static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
{
        return page_ptdesc(pmd_pgtable_page(pmd));
}
In this patch, however, we need the page descriptor of the PTE page that
the PMD entry points to, not the page the entry itself resides in.
Perhaps we should roll back to the previous approach used in v5?
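To illustrate the distinction, roughly what the free path needs is something
like the sketch below. It is untested and only meant to show where the ptdesc
should come from; the final free call is just a placeholder standing in for
whatever free path this series actually uses:

int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
        /*
         * Take the ptdesc of the PTE page this entry *points to*,
         * not pmd_ptdesc(pmd), which would be the page table page
         * that the entry itself lives in.
         */
        struct ptdesc *ptdesc = page_ptdesc(pmd_page(*pmd));

        pmd_clear(pmd);

        /* INVLPG to clear all paging-structure caches */
        flush_tlb_kernel_range(addr, addr + PAGE_SIZE - 1);

        /* Placeholder: stands in for the series' actual free path. */
        pagetable_dtor_free(ptdesc);

        return 1;
}

That way the ptdesc that gets freed matches the PTE page that was actually
unlinked, rather than the PMD table it was linked from.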
I'm sorry I didn't catch this during my development testing.
Fortunately, I can now reproduce it reliably on my development machine.
Thanks,
baolu