Message-ID: <bcb4a3b0-4fcd-af3a-2a2c-fd662d9eaba9@linux.alibaba.com>
Date: Sat, 30 Apr 2022 11:22:33 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>
Cc: akpm@...ux-foundation.org, mike.kravetz@...cle.com,
catalin.marinas@....com, will@...nel.org,
tsbogend@...ha.franken.de, James.Bottomley@...senPartnership.com,
deller@....de, mpe@...erman.id.au, benh@...nel.crashing.org,
paulus@...ba.org, hca@...ux.ibm.com, gor@...ux.ibm.com,
agordeev@...ux.ibm.com, borntraeger@...ux.ibm.com,
svens@...ux.ibm.com, ysato@...rs.sourceforge.jp, dalias@...c.org,
davem@...emloft.net, arnd@...db.de,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-ia64@...r.kernel.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when
unmapping
On 4/30/2022 4:02 AM, Gerald Schaefer wrote:
> On Fri, 29 Apr 2022 16:14:43 +0800
> Baolin Wang <baolin.wang@...ux.alibaba.com> wrote:
>
>> On some architectures (like ARM64), CONT-PTE/PMD size hugetlb is
>> supported, which means not only PMD/PUD size hugetlb (2M and 1G) is
>> supported, but also CONT-PTE/PMD sizes (64K and 32M) when a 4K base
>> page size is specified.
>>
>> When unmapping a hugetlb page, we will get the relevant page table
>> entry by huge_pte_offset() only once to nuke it. This is correct
>> for PMD or PUD size hugetlb, since they always contain only one
>> pmd entry or pud entry in the page table.
>>
>> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>> since such a page spans several contiguous pte or pmd entries with the
>> same page table attributes, so we will nuke only one of the pte or pmd
>> entries for this CONT-PTE/PMD size hugetlb page.
>>
>> And currently we only use try_to_unmap() to unmap a poisoned hugetlb
>> page, which means we will unmap only one pte entry for a poisoned
>> CONT-PTE or CONT-PMD size hugetlb page, so the other subpages of the
>> poisoned hugetlb page remain accessible, which may cause serious
>> issues.
>>
>> So change to use huge_ptep_clear_flush(), which already handles
>> CONT-PTE and CONT-PMD size hugetlb, to nuke the hugetlb page table
>> entries and fix this issue.
>>
>> Note we've already used set_huge_swap_pte_at() to set a poisoned
>> swap entry for a poisoned hugetlb page.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> ---
>> mm/rmap.c | 34 +++++++++++++++++-----------------
>> 1 file changed, 17 insertions(+), 17 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 7cf2408..1e168d7 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1564,28 +1564,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>> break;
>> }
>> }
>> + pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>
> Unlike in your patch 2/3, I do not see that this (huge) pteval would later
> be used again with set_huge_pte_at() instead of set_pte_at(). Not sure if
> this (huge) pteval could end up at a set_pte_at() later, but if yes, then
> this would be broken on s390, and you'd need to use set_huge_pte_at()
> instead of set_pte_at() like in your patch 2/3.
IIUC, as I said in the commit message, we only unmap a poisoned hugetlb
page via try_to_unmap(), and the poisoned hugetlb page will be remapped
with a poisoned entry by set_huge_swap_pte_at() in try_to_unmap_one().
So I think there is no need to change set_pte_at() to set_huge_pte_at()
for the other cases, since a hugetlb page will never hit them. The
relevant hunk in try_to_unmap_one() is:
if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
        pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
        if (folio_test_hugetlb(folio)) {
                hugetlb_count_sub(folio_nr_pages(folio), mm);
                set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
                                     vma_mmu_pagesize(vma));
        } else {
                dec_mm_counter(mm, mm_counter(&folio->page));
                set_pte_at(mm, address, pvmw.pte, pteval);
        }
}
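
To make the contiguous-entry point concrete, below is a minimal,
self-contained userspace sketch (not kernel code; the slot array and the
nuke_one()/nuke_contig_range() helpers are made up for illustration). It
models a 64K CONT-PTE hugetlb page on a 4K base page size as 16
contiguous "pte" slots sharing the same attributes, and shows that
clearing only the single entry returned by a huge_pte_offset()-style
lookup leaves the other 15 subpages mapped, while clearing the whole
contiguous range does the right thing:

#include <stdint.h>
#include <stdio.h>

#define BASE_PAGE_SIZE  4096UL                  /* 4K base pages */
#define CONT_PTE_SIZE   (64UL * 1024)           /* 64K CONT-PTE hugetlb page */
#define NCONTIG         (CONT_PTE_SIZE / BASE_PAGE_SIZE)   /* 16 entries */

/* One "page table entry" per 4K subpage; 0 means not mapped. */
static uint64_t slots[NCONTIG];

/* Clear a single entry, like nuking only the entry we looked up once. */
static void nuke_one(uint64_t *slot)
{
        *slot = 0;
}

/* Clear every entry covering the contiguous range, huge_ptep_clear_flush()-style. */
static void nuke_contig_range(uint64_t *first, unsigned long ncontig)
{
        for (unsigned long i = 0; i < ncontig; i++)
                first[i] = 0;
}

static unsigned long count_mapped(void)
{
        unsigned long n = 0;

        for (unsigned long i = 0; i < NCONTIG; i++)
                if (slots[i])
                        n++;
        return n;
}

int main(void)
{
        /* Map the whole 64K page: all 16 entries share the same attributes. */
        for (unsigned long i = 0; i < NCONTIG; i++)
                slots[i] = 0x1000 * (i + 1) | 1;

        nuke_one(&slots[0]);
        printf("after clearing one entry: %lu of %lu subpages still mapped\n",
               count_mapped(), NCONTIG);

        nuke_contig_range(slots, NCONTIG);
        printf("after clearing the range: %lu of %lu subpages still mapped\n",
               count_mapped(), NCONTIG);
        return 0;
}

The same reasoning applies to the poisoned swap entry: since
set_huge_swap_pte_at() is passed the hugetlb size (vma_mmu_pagesize()),
an architecture like arm64 can write the poisoned entry into every
contiguous entry covering the page, whereas a plain set_pte_at() would
only touch one.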
>
> Please note that huge_ptep_get functions do not return valid PTEs on s390,
> and such PTEs must never be set directly with set_pte_at(), but only with
> set_huge_pte_at().
>
> Background is that, for hugetlb pages, we are of course not really dealing
> with PTEs at this level, but rather PMDs or PUDs, depending on hugetlb size.
> On s390, the layout is quite different for PTEs and PMDs / PUDs, and
> unfortunately the hugetlb code is not properly reflecting this by using
> PMD or PUD types, like the THP code does.
>
> So, as work-around, on s390, the huge_ptep_xxx functions will return
> only fake PTEs, which must be converted again to a proper PMD or PUD,
> before writing them to the page table, which is what happens in
> set_huge_pte_at(), but not in set_pte_at().
Thanks for your explanation. As I said above, I think we have already
handled the hugetlb case with set_huge_swap_pte_at() in
try_to_unmap_one().
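
As a side note on the s390 remark, here is a small userspace sketch of
the "fake PTE" idea (the bit layouts below are invented for illustration
and are not the real s390 encodings): a huge_ptep_get()-style helper
returns a fake pte value, and only a set_huge_pte_at()-style store
converts it back to the PMD/PUD layout before writing, so storing the
fake value verbatim, as set_pte_at() would, puts the flag bits in the
wrong place:

#include <stdint.h>
#include <stdio.h>

/*
 * Invented, simplified layouts (NOT the real s390 formats):
 *   "pmd":      segment address in bits 63..20, Present at bit 8, Write at bit 9
 *   "fake pte": address bits kept as-is, Present at bit 0, Write at bit 1
 */
typedef uint64_t pmd_val_t;
typedef uint64_t pte_val_t;

/* huge_ptep_get()-style: expose the pmd as a fake pte. */
static pte_val_t pmd_to_fake_pte(pmd_val_t pmd)
{
        pte_val_t pte = pmd & ~((1UL << 20) - 1);       /* keep the address bits */

        if (pmd & (1UL << 8))
                pte |= 1UL << 0;                        /* present */
        if (pmd & (1UL << 9))
                pte |= 1UL << 1;                        /* writable */
        return pte;
}

/* Convert the fake pte back to the pmd layout. */
static pmd_val_t fake_pte_to_pmd(pte_val_t pte)
{
        pmd_val_t pmd = pte & ~((1UL << 20) - 1);

        if (pte & (1UL << 0))
                pmd |= 1UL << 8;
        if (pte & (1UL << 1))
                pmd |= 1UL << 9;
        return pmd;
}

/* set_huge_pte_at()-style store: converts before writing. */
static void set_huge_entry(pmd_val_t *slot, pte_val_t pte)
{
        *slot = fake_pte_to_pmd(pte);
}

/* set_pte_at()-style store: writes the value as-is, wrong for a fake pte. */
static void set_entry_raw(pmd_val_t *slot, pte_val_t pte)
{
        *slot = pte;
}

int main(void)
{
        pmd_val_t pmd = 0x80000000UL | (1UL << 8) | (1UL << 9);
        pte_val_t fake = pmd_to_fake_pte(pmd);
        pmd_val_t good, bad;

        set_huge_entry(&good, fake);
        set_entry_raw(&bad, fake);

        printf("original pmd:        %#lx\n", (unsigned long)pmd);
        printf("via set_huge_entry:  %#lx (round-trip preserves the flags)\n",
               (unsigned long)good);
        printf("via set_entry_raw:   %#lx (flag bits land in the wrong place)\n",
               (unsigned long)bad);
        return 0;
}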