Message-ID: <533adb77-8c2b-40db-84cb-88de77ab92bb@arm.com>
Date: Mon, 8 Apr 2024 08:30:38 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Itaru Kitayama <itaru.kitayama@...ux.dev>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Mark Rutland <mark.rutland@....com>, Ard Biesheuvel <ardb@...nel.org>,
 David Hildenbrand <david@...hat.com>, Donald Dutile <ddutile@...hat.com>,
 Eric Chanudet <echanude@...hat.com>, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/4] Speed up boot with faster linear map creation

On 06/04/2024 11:31, Itaru Kitayama wrote:
> Hi Ryan,
> 
> On Sat, Apr 06, 2024 at 09:32:34AM +0100, Ryan Roberts wrote:
>> Hi Itaru,
>>
>> On 05/04/2024 08:39, Itaru Kitayama wrote:
>>> On Thu, Apr 04, 2024 at 03:33:04PM +0100, Ryan Roberts wrote:
>>>> Hi All,
>>>>
>>>> It turns out that creating the linear map can take a significant proportion of
>>>> the total boot time, especially when rodata=full. Most of that time is spent
>>>> waiting on superfluous TLB invalidation and memory barriers. This series reworks
>>>> the kernel pgtable generation code to significantly reduce the number of those
>>>> TLBIs, ISBs and DSBs. See each patch for details.
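
To illustrate the batching idea at a high level: instead of pairing every pte write with its own dsb/isb, the entries are written with plain stores and a single barrier pair is issued once the whole range is populated. A minimal sketch follows; __set_pte_nosync() is the helper named in the changelog, but the function shape and the surrounding names are illustrative assumptions, not the actual patch:

/*
 * Sketch only (assumed shape): populate a run of ptes with plain stores,
 * then publish them with one dsb/isb pair for the whole batch rather
 * than one pair per entry.
 */
static void init_pte_batch(pte_t *ptep, unsigned long nr,
			   phys_addr_t phys, pgprot_t prot)
{
	unsigned long i;

	for (i = 0; i < nr; i++) {
		/* Plain pte store; no per-entry barriers. */
		__set_pte_nosync(ptep + i, pfn_pte(__phys_to_pfn(phys), prot));
		phys += PAGE_SIZE;
	}

	/* Make all the new entries visible to the table walker at once. */
	dsb(ishst);
	isb();
}
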
>>>>
>>>> The below shows the execution time of map_mem() across a couple of different
>>>> systems with different RAM configurations. We measure after applying each patch
>>>> and show the improvement relative to base (v6.9-rc2):
>>>>
>>>>                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
>>>>                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
>>>> ---------------|-------------|-------------|-------------|-------------
>>>>                |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
>>>> ---------------|-------------|-------------|-------------|-------------
>>>> base           |  153   (0%) | 2227   (0%) | 8798   (0%) | 17442   (0%)
>>>> no-cont-remap  |   77 (-49%) |  431 (-81%) | 1727 (-80%) |  3796 (-78%)
>>>> batch-barriers |   13 (-92%) |  162 (-93%) |  655 (-93%) |  1656 (-91%)
>>>> no-alloc-remap |   11 (-93%) |  109 (-95%) |  449 (-95%) |  1257 (-93%)
>>>> lazy-unmap     |    6 (-96%) |   61 (-97%) |  257 (-97%) |   838 (-95%)
>>>>
>>>> This series applies on top of v6.9-rc2. All mm selftests pass. I've
>>>> compile- and boot-tested various PAGE_SIZE and VA size configs.
>>>>
>>>> ---
>>>>
>>>> Changes since v1 [1]
>>>> ====================
>>>>
>>>>   - Added Tested-by tags (thanks to Eric and Itaru)
>>>>   - Renamed ___set_pte() -> __set_pte_nosync() (per Ard)
>>>>   - Reordered patches (biggest impact & least controversial first)
>>>>   - Reordered alloc/map/unmap functions in mmu.c to aid reader
>>>>   - pte_clear() -> __pte_clear() in clear_fixmap_nosync()
>>>>   - Reverted the generic p4d_index(), which caused an x86 build error. Replaced
>>>>     it with an unconditional p4d_index() define under arm64.
>>>>
>>>>
>>>> [1] https://lore.kernel.org/linux-arm-kernel/20240326101448.3453626-1-ryan.roberts@arm.com/
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>> Ryan Roberts (4):
>>>>   arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
>>>>   arm64: mm: Batch dsb and isb when populating pgtables
>>>>   arm64: mm: Don't remap pgtables for allocate vs populate
>>>>   arm64: mm: Lazily clear pte table mappings from fixmap
>>>>
>>>>  arch/arm64/include/asm/fixmap.h  |   5 +-
>>>>  arch/arm64/include/asm/mmu.h     |   8 +
>>>>  arch/arm64/include/asm/pgtable.h |  13 +-
>>>>  arch/arm64/kernel/cpufeature.c   |  10 +-
>>>>  arch/arm64/mm/fixmap.c           |  11 +
>>>>  arch/arm64/mm/mmu.c              | 377 +++++++++++++++++++++++--------
>>>>  6 files changed, 319 insertions(+), 105 deletions(-)
>>>>
>>>> --
>>>> 2.25.1
>>>>
>>>
>>> I've built and boot-tested v2 on FVP, with the base taken from your
>>> linux-rr repo. Running run_vmtests.sh on v2 left some gup_longterm tests reporting "not ok"; would you take a look at it? The mm kselftests used are from your linux-rr repo too.
>>
>> Thanks for taking a look at this.
>>
>> Unfortunately I can't reproduce your issue; steps as follows on an Apple M2 VM:
>>
>> Config: arm64 defconfig + the following:
>>
>> # Squashfs for snaps, xfs for large file folios.
>> ./scripts/config --enable CONFIG_SQUASHFS_LZ4
>> ./scripts/config --enable CONFIG_SQUASHFS_LZO
>> ./scripts/config --enable CONFIG_SQUASHFS_XZ
>> ./scripts/config --enable CONFIG_SQUASHFS_ZSTD
>> ./scripts/config --enable CONFIG_XFS_FS
>>
>> # For general mm debug.
>> ./scripts/config --enable CONFIG_DEBUG_VM
>> ./scripts/config --enable CONFIG_DEBUG_VM_MAPLE_TREE
>> ./scripts/config --enable CONFIG_DEBUG_VM_RB
>> ./scripts/config --enable CONFIG_DEBUG_VM_PGFLAGS
>> ./scripts/config --enable CONFIG_DEBUG_VM_PGTABLE
>> ./scripts/config --enable CONFIG_PAGE_TABLE_CHECK
>>
>> # For mm selftests.
>> ./scripts/config --enable CONFIG_USERFAULTFD
>> ./scripts/config --enable CONFIG_TEST_VMALLOC
>> ./scripts/config --enable CONFIG_GUP_TEST
>>
>> Running on a VM with 12G of memory, split across 2 (emulated) NUMA nodes (needed
>> by some mm selftests), with a kernel command line that reserves hugetlbs and
>> enables other features required by the tests:
>>
>> "
>> transparent_hugepage=madvise earlycon root=/dev/vda2 secretmem.enable
>> hugepagesz=1G hugepages=0:2,1:2 hugepagesz=32M hugepages=0:2,1:2
>> default_hugepagesz=2M hugepages=0:64,1:64 hugepagesz=64K hugepages=0:2,1:2
>> "
>>
>> Ubuntu userspace running off an XFS rootfs. I build and run the mm selftests
>> from the same git tree.
>>
>>
>> That said, I don't think any of this config should make a difference to gup_longterm.
>>
>> It looks like your errors are all "ftruncate() failed". I've seen this problem on
>> our CI system, where it is due to running the tests from an NFS filesystem. What
>> filesystem are you using? Perhaps you are sharing into the FVP using 9p? That
>> might also be problematic.
> 
> That was it. This time I booted the kernel including your series on
> QEMU on my M1 and ran the gup_longterm program without any ftruncate
> failures. When testing your kernel on FVP, I was executing the script from the
> FVP host's filesystem shared over 9p.

I'm not sure exactly what the root cause is. Perhaps there isn't enough space on
the disk? It might be worth enhancing the error log in
tools/testing/selftests/mm/gup_longterm.c to include the errno.
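
As a rough illustration (the reporting helper and surrounding error path here are assumptions, not a quote of the current gup_longterm.c), something along these lines would capture the errno:

/*
 * Hypothetical sketch: include errno in the failure message so that an
 * "ftruncate() failed" report distinguishes ENOSPC, EOPNOTSUPP, etc.
 */
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#include "../kselftest.h"

static int truncate_or_report(int fd, off_t size)
{
	if (ftruncate(fd, size)) {
		ksft_test_result_fail("ftruncate() failed (%s)\n",
				      strerror(errno));
		return -1;
	}
	return 0;
}
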

Thanks,
Ryan

> 
> Thanks,
> Itaru.
> 
>>
>> Does this problem reproduce with v6.9-rc2, without my patches? I expect it
>> probably does?
>>
>> Thanks,
>> Ryan
>>
>>>
>>> Thanks,
>>> Itaru.
>>

