Message-ID: <f1f41e1e-d9f9-4a3f-8ada-8d4b612996f3@arm.com>
Date: Fri, 26 Jul 2024 16:56:59 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Ryan Roberts <ryan.roberts@....com>, linux-arm-kernel@...ts.infradead.org
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64/mm: Avoid directly referencing page table entries
 in map_range()


On 7/25/24 16:06, Ryan Roberts wrote:
> On 25/07/2024 10:10, Anshuman Khandual wrote:
>> As elsewhere on the arm64 platform, use WRITE_ONCE() in map_range() when
>> creating page table entries. This avoids referencing page table entries
>> directly.
> 
> I could be wrong, but I don't think this code is ever operating on live

map_range() is called on these page tables, but only sequentially during boot:

primary_entry()
	create_init_idmap()
		map_range(...init_idmap_pg_dir...)

primary_switch()
	early_map_kernel()
		map_fdt()
			map_range(...init_idmap_pg_dir...)

		remap_idmap_for_lpa2()
			create_init_idmap()
				map_range(...init_pg_dir...)
			create_init_idmap()
				map_range(...init_idmap_pg_dir...)

		map_kernel()
			map_segment()
				map_range(...init_pg_dir...)

paging_init()
	create_idmap()
		__pi_map_range(...idmap_pg_dir...)


> pgtables? So there is never a potential to race with the HW walker and therefore
> no need to guarantee copy atomicity? As long as the correct barriers are placed

So unless there is a possibility of a concurrent HW walk through these page
tables, a WRITE_ONCE()-based atomic store is not required here?
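
For reference, what WRITE_ONCE() buys is that the compiler must emit a
single access and cannot tear, merge or defer the store. A minimal
userspace sketch of the idea (write_once() here is illustrative, not
the kernel's implementation):

	#include <stdint.h>

	/*
	 * Roughly what the kernel's WRITE_ONCE() boils down to: a
	 * volatile store, which the compiler must emit as one access
	 * and may not split into smaller stores.
	 */
	#define write_once(x, val)	(*(volatile typeof(x) *)&(x) = (val))

	static uint64_t pte;

	void set_entry(uint64_t val)
	{
		pte = val;		/* plain store: may be torn or deferred */
		write_once(pte, val);	/* volatile store: one 64-bit access */
	}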

That said, I thought the arm64 platform had decided some time back (though
I don't remember exactly when) to use READ_ONCE()/WRITE_ONCE() for all
page table entry accesses, rather than direct dereferences for reads or
writes - possibly for some added safety?
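
That convention is what the page table accessors follow. A sketch from
memory of the shape of the arm64 helpers (simplified, not quoted
verbatim from the current tree):

	/*
	 * arm64 page table accessors go through READ_ONCE()/WRITE_ONCE()
	 * rather than plain dereferences, so every entry is read and
	 * written with a single-copy-atomic access.
	 */
	static inline void __set_pte(pte_t *ptep, pte_t pte)
	{
		WRITE_ONCE(*ptep, pte);
	}

	static inline pte_t __ptep_get(pte_t *ptep)
	{
		return READ_ONCE(*ptep);
	}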

> at the point where you load the pgdir into the TTBRx there should be no problem?

Those barriers are already placed as required.
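
For completeness, this is the kind of sequence being relied on when the
pgdir is installed (an illustrative sketch, not the exact code in the
tree):

	/*
	 * Make the page table writes visible to the walker before
	 * pointing TTBR at them, then synchronize the context change.
	 */
	static inline void install_pgdir(phys_addr_t pgd_phys)
	{
		dsb(ishst);			/* publish PTE writes */
		write_sysreg(pgd_phys, ttbr0_el1);
		isb();				/* resync context */
	}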

> 
> If my assertion is correct, I don't think there is any need for this change.
> 
> Thanks,
> Ryan
> 
>>
>> Cc: Catalin Marinas <catalin.marinas@....com>
>> Cc: Will Deacon <will@...nel.org>
>> Cc: Ryan Roberts <ryan.roberts@....com>
>> Cc: linux-arm-kernel@...ts.infradead.org
>> Cc: linux-kernel@...r.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
>> ---
>>  arch/arm64/kernel/pi/map_range.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c
>> index 5410b2cac590..b93b70cdfb62 100644
>> --- a/arch/arm64/kernel/pi/map_range.c
>> +++ b/arch/arm64/kernel/pi/map_range.c
>> @@ -56,8 +56,8 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
>>  			 * table mapping if necessary and recurse.
>>  			 */
>>  			if (pte_none(*tbl)) {
>> -				*tbl = __pte(__phys_to_pte_val(*pte) |
>> -					     PMD_TYPE_TABLE | PMD_TABLE_UXN);
>> +				WRITE_ONCE(*tbl, __pte(__phys_to_pte_val(*pte) |
>> +					   PMD_TYPE_TABLE | PMD_TABLE_UXN));
>>  				*pte += PTRS_PER_PTE * sizeof(pte_t);
>>  			}
>>  			map_range(pte, start, next, pa, prot, level + 1,
>> @@ -79,7 +79,7 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
>>  				protval &= ~PTE_CONT;
>>  
>>  			/* Put down a block or page mapping */
>> -			*tbl = __pte(__phys_to_pte_val(pa) | protval);
>> +			WRITE_ONCE(*tbl, __pte(__phys_to_pte_val(pa) | protval));
>>  		}
>>  		pa += next - start;
>>  		start = next;
> 
