lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <05cca7de-2dc1-4dae-abd5-da757dd9eaea@arm.com>
Date: Wed, 27 Mar 2024 16:11:07 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Mark Rutland <mark.rutland@....com>, David Hildenbrand <david@...hat.com>,
 Donald Dutile <ddutile@...hat.com>, Eric Chanudet <echanude@...hat.com>,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 0/3] Speed up boot with faster linear map creation

On 27/03/2024 15:57, Ard Biesheuvel wrote:
> On Wed, 27 Mar 2024 at 17:01, Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> On 27/03/2024 13:36, Ard Biesheuvel wrote:
>>> On Wed, 27 Mar 2024 at 12:43, Ryan Roberts <ryan.roberts@....com> wrote:
>>>>
>>>> On 27/03/2024 10:09, Ard Biesheuvel wrote:
> ...
>>>
>>> I think a mix of the fixmap approach with a 1:1 map could work here:
>>> - use TTBR0 to create a temp 1:1 map of DRAM
>>> - map page tables lazily as they are allocated but using a coarse mapping
>>> - avoid all TLB maintenance except at the end when tearing down the 1:1 mapping.
>>
>> Yes that could work I think. So to make sure I've understood:
>>
>>  - create a 1:1 map for all of DRAM using block and cont mappings where possible
>>      - use memblock_phys_alloc_*() to allocate pgtable memory
>>      - access via fixmap (should be minimal due to block mappings)
> 
> Yes but you'd only need the fixmap for pages that are not in the 1:1
> map yet, so after an initial ramp up you wouldn't need it at all,
> assuming locality of memblock allocations and the use of PMD mappings.
> The only tricky thing here is ensuring that we are not mapping memory
> that we shouldn't be touching.

That sounds a bit nasty though. I think it would be simpler to just reuse the
machinery we have, doing the 1:1 map using blocks and fixmap; It should be a
factor of 512 better than what we have, so probably not a problem at that point.
That way, we can rely on memblock to tell us what to map. If its still
problematic I can add a layer to support 1G mappings too.

> 
>>  - install it in TTBR0
>>  - create all the swapper mappings as normal (no block or cont mappings)
>>      - use memblock_phys_alloc_*() to alloc pgtable memory
>>      - phys address is also virtual address due to installed 1:1 map
>>  - Remove 1:1 map from TTBR0
>>  - memblock_phys_free() all the memory associated with 1:1 map
>>
> 
> Indeed.

One question on the state of TTBR0 at entrance to paging_init(); what is it? I
need to know so I can set it back after.

Currently I'm thinking I can do:

cpu_install_ttbr0(my_dram_idmap, TCR_T0SZ(vabits_actual));
<create swapper>
cpu_set_reserved_ttbr0();
local_flush_tlb_all();

But is it ok to leave the reserved pdg in ttbr0, or is it expecting something else?

> 
>> That sounds doable on top of the first 2 patches in this series - I'll have a
>> crack. The only missing piece is depth-first 1:1 map traversal to free the
>> tables. I'm guessing something already exists that I can repurpose?
>>
> 
> Not that I am aware of, but that doesn't sound too complicated.


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ