Message-ID: <971cfb54-f5a6-921c-b0c5-195a5daed0fb@linux.alibaba.com>
Date: Sat, 7 May 2022 09:32:46 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Mike Kravetz <mike.kravetz@...cle.com>, akpm@...ux-foundation.org,
catalin.marinas@....com, will@...nel.org
Cc: tsbogend@...ha.franken.de, James.Bottomley@...senPartnership.com,
deller@....de, mpe@...erman.id.au, benh@...nel.crashing.org,
paulus@...ba.org, hca@...ux.ibm.com, gor@...ux.ibm.com,
agordeev@...ux.ibm.com, borntraeger@...ux.ibm.com,
svens@...ux.ibm.com, ysato@...rs.sourceforge.jp, dalias@...c.org,
davem@...emloft.net, arnd@...db.de,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-ia64@...r.kernel.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when
unmapping
On 5/7/2022 2:55 AM, Mike Kravetz wrote:
> On 4/29/22 01:14, Baolin Wang wrote:
>> Some architectures (like ARM64) can support CONT-PTE/PMD size
>> hugetlb, which means they support not only PMD/PUD size hugetlb
>> (2M and 1G), but also CONT-PTE/PMD size (64K and 32M) when a 4K
>> page size is used.
>>
>> When unmapping a hugetlb page, we look up the relevant page table
>> entry with huge_pte_offset() only once and nuke it. This is correct
>> for PMD or PUD size hugetlb, since they always contain only one
>> pmd or pud entry in the page table.
>>
>> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>> since they can contain several contiguous pte or pmd entries with
>> the same page table attributes, so we will nuke only one pte or pmd
>> entry for such a CONT-PTE/PMD size hugetlb page.
>>
>> And now we only use try_to_unmap() to unmap a poisoned hugetlb page,
>
> Since try_to_unmap can be called for non-hugetlb pages, perhaps the following
> is more accurate?
>
> try_to_unmap is only passed a hugetlb page in the case where the
> hugetlb page is poisoned.
Yes, will update in next version.
> It does concern me that this assumption is built into the code, as
> pointed out in your discussion with Gerald. Should we perhaps add
> a VM_BUG_ON() to make sure the passed huge page is poisoned? This
> would be in the same 'if block' where we call
> adjust_range_if_pmd_sharing_possible.
Good point. Will do in next version. Thanks.
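
For reference, what I have in mind is roughly the below in
try_to_unmap_one() (just a sketch; whether folio_test_hwpoison() is the
right predicate and whether this is the best spot for the assertion
still needs to be double checked):

	if (folio_test_hugetlb(folio)) {
		/*
		 * Sketch only: try_to_unmap() is only passed a hugetlb
		 * page in the case where the hugetlb page is poisoned,
		 * so assert that assumption before relying on it.
		 */
		VM_BUG_ON_FOLIO(!folio_test_hwpoison(folio), folio);

		/*
		 * If sharing is possible, start and end will be
		 * adjusted accordingly.
		 */
		adjust_range_if_pmd_sharing_possible(vma, &range.start,
						     &range.end);
	}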