Date:   Tue, 30 Jun 2020 14:44:00 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Wei Yang <richard.weiyang@...ux.alibaba.com>,
        dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        akpm@...ux-foundation.org
Cc:     x86@...nel.org, linux-kernel@...r.kernel.org,
        kasan-dev@...glegroups.com, linux-mm@...ck.org
Subject: Re: [PATCH] mm: define pte_addr_end for consistency

On 30.06.20 05:18, Wei Yang wrote:
> When walking page tables, we define several helpers to get the address of
> the next boundary. But we don't have one for pte level.
> 
> Let's define it and consolidate the code in several places.
> 
> Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
> ---
>  arch/x86/mm/init_64.c   | 6 ++----
>  include/linux/pgtable.h | 7 +++++++
>  mm/kasan/init.c         | 4 +---
>  3 files changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index dbae185511cd..f902fbd17f27 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -973,9 +973,7 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
>  
>  	pte = pte_start + pte_index(addr);
>  	for (; addr < end; addr = next, pte++) {
> -		next = (addr + PAGE_SIZE) & PAGE_MASK;
> -		if (next > end)
> -			next = end;
> +		next = pte_addr_end(addr, end);
>  
>  		if (!pte_present(*pte))
>  			continue;
> @@ -1558,7 +1556,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
>  		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
>  
>  		if (!boot_cpu_has(X86_FEATURE_PSE)) {
> -			next = (addr + PAGE_SIZE) & PAGE_MASK;
> +			next = pte_addr_end(addr, end);
>  			pmd = pmd_offset(pud, addr);
>  			if (pmd_none(*pmd))
>  				continue;
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 32b6c52d41b9..0de09c6c89d2 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -706,6 +706,13 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
>  })
>  #endif
>  
> +#ifndef pte_addr_end
> +#define pte_addr_end(addr, end)						\
> +({	unsigned long __boundary = ((addr) + PAGE_SIZE) & PAGE_MASK;	\
> +	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
> +})
> +#endif
> +
>  /*
>   * When walking page tables, we usually want to skip any p?d_none entries;
>   * and any p?d_bad entries - reporting the error before resetting to none.
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index fe6be0be1f76..89f748601f74 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -349,9 +349,7 @@ static void kasan_remove_pte_table(pte_t *pte, unsigned long addr,
>  	unsigned long next;
>  
>  	for (; addr < end; addr = next, pte++) {
> -		next = (addr + PAGE_SIZE) & PAGE_MASK;
> -		if (next > end)
> -			next = end;
> +		next = pte_addr_end(addr, end);
>  
>  		if (!pte_present(*pte))
>  			continue;
> 

I'm not really a fan of this, I have to say. We're simply iterating
over single pages, there's not much magic ....
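
For reference, the existing pmd-level helper the commit message refers
to is defined in include/linux/pgtable.h roughly as follows (quoted
from memory, exact whitespace may differ):

#ifndef pmd_addr_end
#define pmd_addr_end(addr, end)						\
({	unsigned long __boundary = ((addr) + PMD_SIZE) & PMD_MASK;	\
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})
#endif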

What would definitely make sense is replacing (addr + PAGE_SIZE) &
PAGE_MASK with PAGE_ALIGN() ...
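
Something like this in remove_pte_table(), as an untested sketch
(PAGE_ALIGN(addr + 1) computes the same value as
(addr + PAGE_SIZE) & PAGE_MASK, and the clamp to 'end' stays explicit):

	pte = pte_start + pte_index(addr);
	for (; addr < end; addr = next, pte++) {
		/* advance to the next page boundary, but never past 'end' */
		next = PAGE_ALIGN(addr + 1);
		if (next > end)
			next = end;

		if (!pte_present(*pte))
			continue;
		...
	}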

-- 
Thanks,

David / dhildenb
