Message-ID: <7f69758c-b849-48ca-b279-569469183e91@arm.com>
Date: Wed, 27 Mar 2024 15:01:14 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Mark Rutland <mark.rutland@....com>, David Hildenbrand <david@...hat.com>,
 Donald Dutile <ddutile@...hat.com>, Eric Chanudet <echanude@...hat.com>,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 0/3] Speed up boot with faster linear map creation

On 27/03/2024 13:36, Ard Biesheuvel wrote:
> On Wed, 27 Mar 2024 at 12:43, Ryan Roberts <ryan.roberts@....com> wrote:
>>
>> On 27/03/2024 10:09, Ard Biesheuvel wrote:
>>> Hi Ryan,
>>>
>>> On Tue, 26 Mar 2024 at 12:15, Ryan Roberts <ryan.roberts@....com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> It turns out that creating the linear map can take a significant proportion of
>>>> the total boot time, especially when rodata=full. And a large portion of the
>>>> time it takes to create the linear map is spent issuing TLBIs. This series
>>>> reworks the kernel pgtable generation code to significantly reduce the number
>>>> of TLBIs. See each patch for details.
>>>>
>>>> The table below shows the execution time of map_mem() across a couple of
>>>> different systems with different RAM configurations. We measure after applying
>>>> each patch and show the improvement relative to base (v6.9-rc1):
>>>>
>>>>                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
>>>>                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
>>>> ---------------|-------------|-------------|-------------|-------------
>>>>                |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
>>>> ---------------|-------------|-------------|-------------|-------------
>>>> base           |  151   (0%) | 2191   (0%) | 8990   (0%) | 17443   (0%)
>>>> no-cont-remap  |   77 (-49%) |  429 (-80%) | 1753 (-80%) |  3796 (-78%)
>>>> no-alloc-remap |   77 (-49%) |  375 (-83%) | 1532 (-83%) |  3366 (-81%)
>>>> lazy-unmap     |   63 (-58%) |  330 (-85%) | 1312 (-85%) |  2929 (-83%)
>>>>
>>>> This series applies on top of v6.9-rc1. All mm selftests pass. I haven't yet
>>>> tested all VA size configs (although I don't anticipate any issues); I'll do
>>>> this as part of a follow-up.
>>>>
>>>
>>> These are very nice results!
>>>
>>> Before digging into the details: do we still have a strong case for
>>> supporting contiguous PTEs and PMDs in these routines?
>>
>> We are currently using contptes and pmds for the linear map when rodata=[on|off]
>> IIRC?
> 
> In principle, yes. In practice?
> 
>> I don't see a need to remove the capability personally.
>>
> 
> Since we are making changes here, it is a relevant question to ask imho.
> 
>> Also I was talking with Mark R yesterday and he suggested that an even better
>> solution might be to create a temp pgtable that maps the linear map with pmds,
>> switch to it, then create the real pgtable that maps the linear map with ptes,
>> then switch to that. The benefit is that we can avoid the fixmap entirely when
>> creating the second pgtable - we think this would likely be significantly
>> faster still.
>>
> 
> If this is going to be a temporary mapping for the duration of the
> initial population of the linear map page tables, we might just as
> well use a 1:1 TTBR0 mapping here, which would be completely disjoint
> from swapper. And we'd only need to map memory that is being used for
> page tables, so on those large systems we'd need to map only a small
> slice. Maybe it's time to bring back the memblock alloc limit so we
> can manage this more easily?
> 
>> My second patch adds the infrastructure to make this possible. But your changes
>> for LPA2 make it significantly more effort; since that change, we now populate
>> the linear map directly into the swapper pgtable - the kernel is already mapped
>> and that is no longer done in paging_init(). So I'm not quite sure how we can
>> easily make that work at the moment.
>>
> 
> I think a mix of the fixmap approach with a 1:1 map could work here:
> - use TTBR0 to create a temp 1:1 map of DRAM
> - map page tables lazily as they are allocated but using a coarse mapping
> - avoid all TLB maintenance except at the end when tearing down the 1:1 mapping.

Yes that could work I think. So to make sure I've understood:

 - create a 1:1 map for all of DRAM using block and cont mappings where possible
     - use memblock_phys_alloc_*() to allocate pgtable memory
     - access via fixmap (should be minimal due to block mappings)
 - install it in TTBR0
 - create all the swapper mappings as normal (no block or cont mappings)
     - use memblock_phys_alloc_*() to alloc pgtable memory
     - phys address is also virtual address due to installed 1:1 map
 - remove the 1:1 map from TTBR0
 - memblock_phys_free() all the memory associated with the 1:1 map

That sounds doable on top of the first 2 patches in this series - I'll have a
crack at it. The only missing piece is a depth-first traversal of the 1:1 map to
free the tables afterwards. I'm guessing something already exists that I can
repurpose?
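
To make the proposed flow concrete, here is a very rough sketch (every helper
name below is a hypothetical placeholder, not an existing API, and error
handling is omitted):

static void __init create_linear_map(void)
{
	pgd_t *dram_idmap;

	/*
	 * 1. Build a temporary 1:1 map of DRAM using block/cont mappings
	 *    where possible. Pgtable pages come from memblock and are
	 *    written via the fixmap, which should be cheap because the
	 *    map is coarse.
	 */
	dram_idmap = create_dram_idmap();		/* hypothetical */

	/*
	 * 2. Install it in TTBR0 so that, from here on, pgtable pages can
	 *    be written through their physical (== virtual) addresses.
	 */
	install_ttbr0_idmap(dram_idmap);		/* hypothetical */

	/*
	 * 3. Populate the real linear map in swapper as today (no block or
	 *    cont mappings), with no fixmap and no per-entry TLB
	 *    maintenance.
	 */
	create_swapper_linear_map();			/* hypothetical */

	/*
	 * 4. Tear down: remove the temp map from TTBR0, do a single TLB
	 *    flush, then walk the temp tables depth-first and hand the
	 *    pgtable pages back with memblock_phys_free().
	 */
	uninstall_ttbr0_idmap();			/* hypothetical */
	free_dram_idmap(dram_idmap);			/* hypothetical */
}

The teardown in step 4 is where I'd need the depth-first walk mentioned above.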

Thanks,
Ryan

