Open Source and information security mailing list archives
 
Date:   Wed, 14 Mar 2018 17:02:33 +0530
From:   Chintan Pandya <cpandya@...eaurora.org>
To:     Marc Zyngier <marc.zyngier@....com>, catalin.marinas@....com,
        will.deacon@....com, arnd@...db.de
Cc:     mark.rutland@....com, ard.biesheuvel@...aro.org,
        james.morse@....com, kristina.martsenko@....com,
        takahiro.akashi@...aro.org, gregkh@...uxfoundation.org,
        tglx@...utronix.de, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
        akpm@...ux-foundation.org, toshi.kani@....com
Subject: Re: [PATCH v1 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP
 mappings"



On 3/14/2018 4:16 PM, Marc Zyngier wrote:
> On 14/03/18 08:48, Chintan Pandya wrote:
>> Commit 15122ee2c515a ("arm64: Enforce BBM for huge
>> IO/VMAP mappings") was a temporary work-around until the
>> issues with CONFIG_HAVE_ARCH_HUGE_VMAP were fixed.
>>
>> Revert it now that fixes for those issues are in place.
>>
>> Signed-off-by: Chintan Pandya <cpandya@...eaurora.org>
>> ---
>>   arch/arm64/mm/mmu.c | 8 --------
>>   1 file changed, 8 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index c0df264..19116c6 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -935,10 +935,6 @@ int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
>>   	pgprot_t sect_prot = __pgprot(PUD_TYPE_SECT |
>>   					pgprot_val(mk_sect_prot(prot)));
>>   
>> -	/* ioremap_page_range doesn't honour BBM */
>> -	if (pud_present(READ_ONCE(*pudp)))
>> -		return 0;
>> -
>>   	BUG_ON(phys & ~PUD_MASK);
>>   	if (pud_val(*pudp) && !pud_huge(*pudp))
>>   		free_page((unsigned long)__va(pud_val(*pudp)));
>> @@ -952,10 +948,6 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
>>   	pgprot_t sect_prot = __pgprot(PMD_TYPE_SECT |
>>   					pgprot_val(mk_sect_prot(prot)));
>>   
>> -	/* ioremap_page_range doesn't honour BBM */
>> -	if (pmd_present(READ_ONCE(*pmdp)))
>> -		return 0;
>> -
>>   	BUG_ON(phys & ~PMD_MASK);
>>   	if (pmd_val(*pmdp) && !pmd_huge(*pmdp))
>>   		free_page((unsigned long)__va(pmd_val(*pmdp)));
>>
> 
> But you're still not doing a BBM, right? What prevents a speculative
> access from using the (now freed) entry? The TLB invalidation you've
> introduced just narrows the window where bad things can happen.
Valid point. I will rework these patches.

Thanks Marc.

> 
> My gut feeling is that this series introduces more bugs than it fixes...
> If you're going to fix it, please fix it by correctly implementing BBM
> for huge mappings.
> 
> Or am I missing something terribly obvious?
> 
> 	M.
> 

Chintan
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center,
Inc. is a member of the Code Aurora Forum, a Linux Foundation
Collaborative Project
