Message-ID: <24074aed-e9dc-bfc6-2f67-2c24b11ee60f@linux.ibm.com>
Date: Wed, 2 Sep 2020 13:45:18 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: Christophe Leroy <christophe.leroy@...roup.eu>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] powerpc: Fix random segfault when freeing hugetlb range
On 9/2/20 1:41 PM, Christophe Leroy wrote:
>
>
> On 02/09/2020 at 05:23, Aneesh Kumar K.V wrote:
>> Christophe Leroy <christophe.leroy@...roup.eu> writes:
>>
>>> The following random segfault is observed from time to time with
>>> map_hugetlb selftest:
>>>
>>> root@...alhost:~# ./map_hugetlb 1 19
>>> 524288 kB hugepages
>>> Mapping 1 Mbytes
>>> Segmentation fault
>>>
>>> [ 31.219972] map_hugetlb[365]: segfault (11) at 117 nip 77974f8c lr 779a6834 code 1 in ld-2.23.so[77966000+21000]
>>> [ 31.220192] map_hugetlb[365]: code: 9421ffc0 480318d1 93410028 90010044 9361002c 93810030 93a10034 93c10038
>>> [ 31.220307] map_hugetlb[365]: code: 93e1003c 93210024 8123007c 81430038 <80e90004> 814a0004 7f443a14 813a0004
>>> [ 31.221911] BUG: Bad rss-counter state mm:(ptrval) type:MM_FILEPAGES val:33
>>> [ 31.229362] BUG: Bad rss-counter state mm:(ptrval) type:MM_ANONPAGES val:5
>>>
>>> This fault is due to hugetlb_free_pgd_range() freeing page tables
>>> that are also used by regular pages.
>>>
>>> As explained in the comment at the beginning of
>>> hugetlb_free_pgd_range(), the verification done in free_pgd_range()
>>> on floor and ceiling is not done here, which means
>>> hugetlb_free_pte_range() can free outside the expected range.
>>>
>>> As the verification cannot be done in hugetlb_free_pgd_range(), it
>>> must be done in hugetlb_free_pte_range().
>>>
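[To make the failure concrete: one PTE page covers a whole PMD-sized window,
so it can map both the hugetlb VMA being torn down and unrelated neighbouring
mappings (ld.so's pages in the log above). A small userspace sketch, with a
made-up layout and an assumed 4M PMD_MASK, shows the floor check catching
that sharing:

#include <stdio.h>

/* Assumption for illustration only: one PTE page maps 4M of address space. */
#define PMD_MASK 0xffc00000UL

int main(void)
{
	/* Hypothetical layout: hugetlb VMA at [5M, 6M), regular pages
	 * elsewhere in the same 4M window covered by one PTE page. */
	unsigned long addr = 0x00500000UL;	/* start of range being freed */
	unsigned long floor = 0x00500000UL;	/* caller only owns from here */
	unsigned long start = addr & PMD_MASK;	/* 0x00400000: window start */

	if (start < floor)
		printf("PTE page also maps below floor -> must not free it\n");
	return 0;
}
]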
>>
>> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
>>
>>> Fixes: b250c8c08c79 ("powerpc/8xx: Manage 512k huge pages as standard pages.")
>>> Cc: stable@...r.kernel.org
>>> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
>>> ---
>>> arch/powerpc/mm/hugetlbpage.c | 18 ++++++++++++++++--
>>> 1 file changed, 16 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>>> index 26292544630f..e7ae2a2c4545 100644
>>> --- a/arch/powerpc/mm/hugetlbpage.c
>>> +++ b/arch/powerpc/mm/hugetlbpage.c
>>> @@ -330,10 +330,24 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif
>>>  			  get_hugepd_cache_index(pdshift - shift));
>>>  }
>>> -static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
>>> +static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
>>> +				   unsigned long addr, unsigned long end,
>>> +				   unsigned long floor, unsigned long ceiling)
>>>  {
>>> +	unsigned long start = addr;
>>>  	pgtable_t token = pmd_pgtable(*pmd);
>>>
>>> +	start &= PMD_MASK;
>>> +	if (start < floor)
>>> +		return;
>>> +	if (ceiling) {
>>> +		ceiling &= PMD_MASK;
>>> +		if (!ceiling)
>>> +			return;
>>> +	}
>>> +	if (end - 1 > ceiling - 1)
>>> +		return;
>>> +
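[The checks added above mirror the floor/ceiling clamping that
free_pgd_range() performs before freeing a page-table page. Restated as a
standalone predicate (the function name is illustrative and PMD_MASK is
assumed from kernel context; the logic is taken from the hunk):

#include <stdbool.h>

/* Sketch: may the PTE page covering [addr, end) be freed, given that the
 * caller only owns [floor, ceiling)?  A ceiling of 0 means "no limit". */
static bool pte_page_freeable(unsigned long addr, unsigned long end,
			      unsigned long floor, unsigned long ceiling)
{
	unsigned long start = addr & PMD_MASK;

	if (start < floor)		/* page also maps below floor */
		return false;
	if (ceiling) {
		ceiling &= PMD_MASK;
		if (!ceiling)		/* clamping wrapped ceiling to 0 */
			return false;
	}
	if (end - 1 > ceiling - 1)	/* page extends past ceiling */
		return false;
	return true;
}
]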
>>
>> We repeat that check in the pte/pmd/pud hugetlb free_range variants. Can we
>> consolidate it, with a comment explaining that we are checking whether the
>> pgtable entry maps outside the expected range?
>
> I was thinking about refactoring that into a helper and adding all the
> necessary comments to explain what it does.
>
> Will do that in a followup series if you are OK with that. This patch is a
> bug fix and also has to go through stable.
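[One possible shape for such a helper, parameterised by the mask of the
range a page-table page covers at each level (PMD_MASK, PUD_MASK,
PGDIR_MASK); the name and signature are only a sketch, not the eventual
followup patch:

/*
 * Sketch: return true when the page-table page covering [addr, end)
 * also maps addresses outside [floor, ceiling), i.e. when it is shared
 * with a neighbouring mapping and must not be freed.
 */
static bool pgtable_outside_limits(unsigned long addr, unsigned long end,
				   unsigned long floor, unsigned long ceiling,
				   unsigned long mask)
{
	if ((addr & mask) < floor)
		return true;
	if (ceiling) {
		ceiling &= mask;
		if (!ceiling)
			return true;
	}
	return end - 1 > ceiling - 1;
}

Each hugetlb_free_{pte,pmd,pud}_range() would then bail out early when this
returns true for the matching mask.]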
>
Agreed.
Thanks.
-aneesh