Message-ID: <918c0479-4d1a-3f3c-346c-051de4b26d30@oracle.com>
Date: Mon, 9 May 2022 15:25:57 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>,
akpm@...ux-foundation.org, catalin.marinas@....com, will@...nel.org
Cc: tsbogend@...ha.franken.de, James.Bottomley@...senPartnership.com,
deller@....de, mpe@...erman.id.au, benh@...nel.crashing.org,
paulus@...ba.org, hca@...ux.ibm.com, gor@...ux.ibm.com,
agordeev@...ux.ibm.com, borntraeger@...ux.ibm.com,
svens@...ux.ibm.com, ysato@...rs.sourceforge.jp, dalias@...c.org,
davem@...emloft.net, arnd@...db.de,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-ia64@...r.kernel.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v2 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when
unmapping
On 5/8/22 02:36, Baolin Wang wrote:
> On some architectures (like ARM64), CONT-PTE/PMD size hugetlb pages
> are supported, which means they can support not only PMD/PUD size
> hugetlb pages (2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M)
> when a 4K base page size is used.
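
Just to spell out the arithmetic for anyone following along: on arm64
with a 4K granule, a CONT-PTE hugetlb page covers 16 contiguous ptes
(16 * 4K = 64K) and a CONT-PMD hugetlb page covers 16 contiguous pmds
(16 * 2M = 32M), assuming I am remembering the arm64 CONT_PTES and
CONT_PMDS values correctly.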
>
> When unmapping a hugetlb page, we get the relevant page table entry
> by huge_pte_offset() only once in order to nuke it. This is correct
> for PMD or PUD size hugetlb pages, since they always correspond to
> exactly one pmd or pud entry in the page table.
>
> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb
> pages, since they span several contiguous pte or pmd entries that
> share the same page table attributes, so we end up nuking only one
> of the entries for such a CONT-PTE/PMD size hugetlb page.
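
To make the failure mode concrete, the pre-patch unmap path does
roughly the following (a sketch from memory, not the literal upstream
code):

	/* Sketch of the pre-patch behaviour in try_to_unmap_one(). */
	pteval = ptep_clear_flush(vma, address, pvmw.pte);
	/*
	 * For a 64K CONT-PTE hugetlb page this clears and flushes only
	 * one of the 16 contiguous ptes; the other 15 still map the
	 * poisoned page.
	 */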
>
> At present, try_to_unmap() is only passed a hugetlb page when that
> page is poisoned. This means we unmap only one pte or pmd entry of a
> poisoned CONT-PTE or CONT-PMD size hugetlb page, so the remaining
> subpages stay mapped and accessible, which can cause serious issues.
>
> So switch to huge_ptep_clear_flush() to nuke the hugetlb page table
> entries, which already handles CONT-PTE and CONT-PMD size hugetlb
> pages, to fix this issue.
>
> We already use set_huge_swap_pte_at() to set a poisoned swap entry
> for a poisoned hugetlb page. In addition, add a VM_BUG_ON() in
> try_to_unmap() to make sure the hugetlb page passed in is poisoned.
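
For readers skimming the thread, the shape of the change described
above is roughly as follows. This is only a sketch based on the
wording here, assuming huge_ptep_clear_flush() returns the cleared
pte value in this path; names are approximate, not the literal hunks:

	if (folio_test_hugetlb(folio))
		/* Nukes the whole CONT-PTE/PMD range, not one entry. */
		pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
	else
		pteval = ptep_clear_flush(vma, address, pvmw.pte);

	/* ... later, when a poisoned subpage is being unmapped ... */

	if (PageHWPoison(subpage)) {
		pte_t swp_pte = swp_entry_to_pte(make_hwpoison_entry(subpage));

		if (folio_test_hugetlb(folio))
			/* Poisoned swap entry at hugetlb granularity. */
			set_huge_swap_pte_at(mm, address, pvmw.pte, swp_pte,
					     vma_mmu_pagesize(vma));
		else
			set_pte_at(mm, address, pvmw.pte, swp_pte);
	}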
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/rmap.c | 39 ++++++++++++++++++++++-----------------
> 1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 7cf2408..37c8fd2 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1530,6 +1530,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  
>  		if (folio_test_hugetlb(folio)) {
>  			/*
> +			 * The try_to_unmap() is only passed a hugetlb page
> +			 * in the case where the hugetlb page is poisoned.
> +			 */
> +			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
> +			/*
It is unfortunate that this could not easily be added to the first
if (folio_test_hugetlb(folio)) block in this routine. However, it
is fine to add it here.
Looks good. Thanks for all these changes,
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
--
Mike Kravetz