Date:   Mon, 21 Nov 2016 09:40:06 -0800
From:   Laura Abbott <labbott@...hat.com>
To:     Mark Rutland <mark.rutland@....com>, lorenzo.pieralisi@....com
Cc:     Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Will Deacon <will.deacon@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols

On 11/18/2016 06:35 AM, Mark Rutland wrote:
> Hi Laura,
> 
> On Thu, Nov 17, 2016 at 05:16:55PM -0800, Laura Abbott wrote:
>>
>> __pa_symbol is technically the macro that should be used for kernel
>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
>> will do bounds checking.
>>
>> Signed-off-by: Laura Abbott <labbott@...hat.com>
>> ---
>> v3: Conversion of more sites besides just _end. Addition of __lm_sym_addr
>> macro to take care of the __va(__pa_symbol(..)) idiom.
>>
>> Note that a copy of __pa_symbol was added to avoid a mess of headers
>> since the #ifndef __pa_symbol case is defined in linux/mm.h
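
(Aside, regarding the bounds checking mentioned above: a rough sketch of
the kind of check DEBUG_VIRTUAL enables; the helper name and the
simplified translation below are assumptions, not the actual arm64 code:)

	/* sketch: with DEBUG_VIRTUAL, translating a virtual address can
	 * assert that it really is a linear-map address; kernel-image
	 * symbols are expected to go through __pa_symbol() instead. */
	static inline phys_addr_t check_and_translate(unsigned long x)
	{
		VIRTUAL_BUG_ON(x < PAGE_OFFSET);	/* not a linear-map address */
		return (phys_addr_t)(x - PAGE_OFFSET) + PHYS_OFFSET;
	}
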
> 
> I think we also need to fix up virt_to_phys(__cpu_soft_restart) in
> arch/arm64/kernel/cpu-reset.h. Otherwise, this looks complete for uses
> falling under arch/arm64/.
> 
> I also think it's worth mentioning in the commit message that this patch
> adds __lm_sym_addr() and uses it in some places so that low-level
> helpers can use virt_to_phys() or __pa() consistently.
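
(For reference, a sketch of that helper based on the changelog note
about the __va(__pa_symbol(..)) idiom; the definition in the actual
patch may differ:)

	/* hypothetical shape: linear-map virtual address of a kernel symbol */
	#define __lm_sym_addr(x)	__va(__pa_symbol(x))
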
> 
> The PSCI change doesn't conflict with patches [1] that'll go via
> arm-soc, so I'm happy for that PSCI change to go via the arm64 tree,
> though it may be worth splitting into its own patch just in case
> something unexpected crops up.
> 
> With those fixed up:
> 
> Reviewed-by: Mark Rutland <mark.rutland@....com>
> 
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/466522.html
> 
> Otherwise, I just have a few nits below.
> 
>> @@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
>>  	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
>>  }
>>  
>> -#define kvm_virt_to_phys(x)		__virt_to_phys((unsigned long)(x))
>> +#define kvm_virt_to_phys(x)		__pa_symbol((unsigned long)(x))
> 
> Nit: we can drop the unsigned long cast given __pa_symbol() contains
> one.
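
(For context, the generic fallback referred to in the changelog, the
#ifndef __pa_symbol case in linux/mm.h, already carries that cast,
roughly:)

	/* generic fallback, paraphrased from include/linux/mm.h */
	#ifndef __pa_symbol
	#define __pa_symbol(x)	__pa(RELOC_HIDE((unsigned long)(x), 0))
	#endif
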
> 
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index ffbb9a5..c2041a3 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
>>   * for zero-mapped memory areas etc..
>>   */
>>  extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>> -#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
>> +#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))
> 
> Nit: I think we can also simplify this to:
> 
> 	phys_to_page(__pa_symbol(empty_zero_page))
> 
> ... since phys_to_page(p) is (pfn_to_page(__phys_to_pfn(p)))
> ... and __phys_to_pfn(p) is PHYS_PFN(p)
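
(Substituting those definitions, the two forms spell out to the same
thing, so the shorter one is a pure simplification:)

	phys_to_page(__pa_symbol(empty_zero_page))
	  == pfn_to_page(__phys_to_pfn(__pa_symbol(empty_zero_page)))
	  == pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))
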
> 
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 6f2ac4f..af8967a 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
>>  	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
>>  		page = vmalloc_to_page(addr);
>>  	else if (!module)
>> -		page = pfn_to_page(PHYS_PFN(__pa(addr)));
>> +		page = pfn_to_page(PHYS_PFN(__pa_symbol(addr)));
> 
> Nit: likewise, we can use phys_to_page() here.
> 
>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>> index a2c2478..791e87a 100644
>> --- a/arch/arm64/kernel/vdso.c
>> +++ b/arch/arm64/kernel/vdso.c
>> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>>  		return -ENOMEM;
>>  
>>  	/* Grab the vDSO data page. */
>> -	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
>> +	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa_symbol(vdso_data)));
> 
> Nit: phys_to_page() again.
> 
>>  
>>  	/* Grab the vDSO code pages. */
>>  	for (i = 0; i < vdso_pages; i++)
>> -		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
>> +		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
> 
> Nit: phys_to_page() again.

I think it makes sense to keep this one as is. It's offsetting by
pfn, and trying to force phys_to_page() here would make it more
difficult to read.
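
To illustrate (just a sketch): the loop offsets by a page index in pfn
units, so the phys_to_page() form would have to scale by PAGE_SIZE
instead:

	/* current form: base pfn of vdso_start plus page index */
	vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);

	/* phys_to_page() equivalent needs an explicit byte offset */
	vdso_pagelist[i + 1] = phys_to_page(__pa_symbol(&vdso_start) + i * PAGE_SIZE);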

> 
> Thanks,
> Mark.
> 

Thanks,
Laura
