Date:   Thu, 7 May 2020 10:15:59 +1000
From:   Gavin Shan <gshan@...hat.com>
To:     Will Deacon <will@...nel.org>,
        Anshuman Khandual <anshuman.khandual@....com>
Cc:     linux-arm-kernel@...ts.infradead.org, mark.rutland@....com,
        catalin.marinas@....com, linux-kernel@...r.kernel.org,
        shan.gavin@...il.com
Subject: Re: [PATCH] arm64/mm: Remove add_huge_page_size()

On 5/6/20 5:19 PM, Will Deacon wrote:
> On Wed, May 06, 2020 at 12:36:43PM +0530, Anshuman Khandual wrote:
>>
>>
>> On 05/06/2020 12:16 PM, Gavin Shan wrote:
>>> The function add_huge_page_size(), a wrapper of hugetlb_add_hstate(),
>>> avoids registering duplicate huge page states for the same size.
>>> However, the same check is already included in hugetlb_add_hstate(),
>>> so add_huge_page_size() is unnecessary. This patch removes it.
>>
>> Makes sense.
>>
>>>
>>> Signed-off-by: Gavin Shan <gshan@...hat.com>
>>> ---
>>>   arch/arm64/mm/hugetlbpage.c | 18 +++++-------------
>>>   1 file changed, 5 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>>> index bbeb6a5a6ba6..ed7530413941 100644
>>> --- a/arch/arm64/mm/hugetlbpage.c
>>> +++ b/arch/arm64/mm/hugetlbpage.c
>>> @@ -441,22 +441,14 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
>>>   	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
>>>   }
>>>   
>>> -static void __init add_huge_page_size(unsigned long size)
>>> -{
>>> -	if (size_to_hstate(size))
>>> -		return;
>>> -
>>> -	hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
>>> -}
>>> -
>>>   static int __init hugetlbpage_init(void)
>>>   {
>>>   #ifdef CONFIG_ARM64_4K_PAGES
>>> -	add_huge_page_size(PUD_SIZE);
>>> +	hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
>>>   #endif
>>> -	add_huge_page_size(CONT_PMD_SIZE);
>>> -	add_huge_page_size(PMD_SIZE);
>>> -	add_huge_page_size(CONT_PTE_SIZE);
>>> +	hugetlb_add_hstate(CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT);
>>> +	hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
>>> +	hugetlb_add_hstate(CONT_PTE_SHIFT);
> 
> Something similar has already been done in linux-next.
> 

Thanks, Will. I didn't check linux-next before posting this patch.
Please ignore it then :)

>> Should these page order values be converted into macros instead? Also,
>> we should probably keep (CONT_PTE_SHIFT + PAGE_SHIFT - PAGE_SHIFT) as
>> is to make things clearer.
> 
> I think the real confusion stems from us not being consistent with the
> *_SHIFT definitions on arm64. It's madness for CONT_PTE_SHIFT to be
> smaller than PAGE_SHIFT imo, but it's just cosmetic I guess.
> 

Yeah. Do you want me to post a patch to fix it?

> Will
> 

Thanks,
Gavin
