Message-ID: <lgyiy6z7rss6t3ytmcg2ofcfzpxg5nw6mbbc2evnshhw7xls4i@vfnbqiyabmi2>
Date: Thu, 28 Mar 2024 19:08:47 -0400
From: Eric Chanudet <echanude@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Catalin Marinas <catalin.marinas@....com>, 
	Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>, 
	Ard Biesheuvel <ardb@...nel.org>, David Hildenbrand <david@...hat.com>, 
	Donald Dutile <ddutile@...hat.com>, linux-arm-kernel@...ts.infradead.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 0/3] Speed up boot with faster linear map creation

On Wed, Mar 27, 2024 at 07:12:06PM +0000, Ryan Roberts wrote:
> On 26/03/2024 10:14, Ryan Roberts wrote:
> > Hi All,
> > 
> > It turns out that creating the linear map can take a significant proportion of
> > the total boot time, especially when rodata=full. And a large portion of the
> > time it takes to create the linear map is issuing TLBIs. This series reworks the
> > kernel pgtable generation code to significantly reduce the number of TLBIs. See
> > each patch for details.
> > 
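[The win described above comes from hoisting TLB maintenance out of the
per-entry path. A toy cost model of the two strategies — illustrative
only; the counter, `tlb_flush_one`/`tlb_flush_range`, and the loop
structure are invented stand-ins, not the kernel implementation:]

```c
#include <stddef.h>

/* Counter standing in for expensive TLBI operations. */
static size_t flush_count;

static void tlb_flush_one(void)   { flush_count++; } /* per-entry TLBI  */
static void tlb_flush_range(void) { flush_count++; } /* one ranged TLBI */

/* Strawman: invalidate after every entry is written. */
static void map_pages_per_entry(size_t npages)
{
    for (size_t i = 0; i < npages; i++) {
        /* write_pte(i); */
        tlb_flush_one();
    }
}

/* Batched: write all entries, then issue one ranged invalidation. */
static void map_pages_batched(size_t npages)
{
    for (size_t i = 0; i < npages; i++) {
        /* write_pte(i); */
    }
    tlb_flush_range();
}
```

[With 512 entries per table the first scheme issues 512 invalidations
per table and the second issues one, which is the shape of the
reductions in the table below.]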
> > The below shows the execution time of map_mem() across a couple of different
> > systems with different RAM configurations. We measure after applying each patch
> > and show the improvement relative to base (v6.9-rc1):
> > 
> >                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
> >                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
> > ---------------|-------------|-------------|-------------|-------------
> >                |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
> > ---------------|-------------|-------------|-------------|-------------
> > base           |  151   (0%) | 2191   (0%) | 8990   (0%) | 17443   (0%)
> > no-cont-remap  |   77 (-49%) |  429 (-80%) | 1753 (-80%) |  3796 (-78%)
> > no-alloc-remap |   77 (-49%) |  375 (-83%) | 1532 (-83%) |  3366 (-81%)
> > lazy-unmap     |   63 (-58%) |  330 (-85%) | 1312 (-85%) |  2929 (-83%)
> 
> I've just appended an additional patch to this series. This takes us to a ~95%
> reduction overall:
> 
>                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
>                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
> ---------------|-------------|-------------|-------------|-------------
>                |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
> ---------------|-------------|-------------|-------------|-------------
> base           |  151   (0%) | 2191   (0%) | 8990   (0%) | 17443   (0%)
> no-cont-remap  |   77 (-49%) |  429 (-80%) | 1753 (-80%) |  3796 (-78%)
> no-alloc-remap |   77 (-49%) |  375 (-83%) | 1532 (-83%) |  3366 (-81%)
> lazy-unmap     |   63 (-58%) |  330 (-85%) | 1312 (-85%) |  2929 (-83%)
> batch-barriers |   11 (-93%) |   61 (-97%) |  261 (-97%) |   837 (-95%)
> 
> I don't believe the intermediate block-based pgtable idea will be
> necessary now, so I don't intend to pursue it. It might be that we
> choose to drop the middle two patches; I'm keen to hear opinions.
> 
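[The batch-barriers step follows the same pattern as the TLBI batching,
applied to store barriers: publish a whole table with one barrier rather
than one per PTE write. A toy accounting sketch — `dsb_ishst` here is a
counting stand-in I've invented, not the real arm64 barrier:]

```c
#include <stddef.h>

static size_t barrier_count;

/* Stand-in for the real store barrier; just counts invocations. */
static void dsb_ishst(void) { barrier_count++; }

/* Unbatched: a store barrier after every PTE write. */
static void fill_table_per_pte(size_t nptes)
{
    for (size_t i = 0; i < nptes; i++) {
        /* set_pte(...); */
        dsb_ishst();
    }
}

/* Batched: write the whole table, then publish it with one barrier. */
static void fill_table_batched(size_t nptes)
{
    for (size_t i = 0; i < nptes; i++) {
        /* set_pte(...); */
    }
    dsb_ishst();
}
```

[One barrier per 512-entry table instead of 512 is consistent with the
extra ~10x drop the batch-barriers row shows over lazy-unmap.]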

Applied to v6.9-rc1, I see a much shorter base timing on a similar
machine (Ampere HR350A). no-alloc-remap didn't show much of a difference
here either.

               | SA8775p-ride | Ampere HR350A|
               | VM, 36G      | Metal, 256G  |
---------------|--------------|--------------|
               |   ms     (%) |   ms     (%) |
---------------|--------------|--------------|
base           |  358    (0%) | 2213    (0%) |
no-cont-remap  |  232  (-35%) | 1283  (-42%) |
no-alloc-remap |  228  (-36%) | 1282  (-42%) |
lazy-unmap     |  231  (-35%) | 1248  (-44%) |
batch-barriers |   25  (-93%) |  204  (-91%) |

Tested-by: Eric Chanudet <echanude@...hat.com>


> > This series applies on top of v6.9-rc1. All mm selftests pass. I haven't yet
> > tested all VA size configs (although I don't anticipate any issues); I'll do
> > this as part of followup.
> > 
> > Thanks,
> > Ryan
> > 
> > 
> > Ryan Roberts (3):
> >   arm64: mm: Don't remap pgtables per- cont(pte|pmd) block
> >   arm64: mm: Don't remap pgtables for allocate vs populate
> >   arm64: mm: Lazily clear pte table mappings from fixmap
> > 
> >  arch/arm64/include/asm/fixmap.h  |   5 +-
> >  arch/arm64/include/asm/mmu.h     |   8 +
> >  arch/arm64/include/asm/pgtable.h |   4 -
> >  arch/arm64/kernel/cpufeature.c   |  10 +-
> >  arch/arm64/mm/fixmap.c           |  11 +
> >  arch/arm64/mm/mmu.c              | 364 +++++++++++++++++++++++--------
> >  include/linux/pgtable.h          |   8 +
> >  7 files changed, 307 insertions(+), 103 deletions(-)
> > 
> > --
> > 2.25.1
> > 
> 

-- 
Eric Chanudet
