Message-ID: <ed61ead5-5468-4c75-a355-1308c8cb6fb4@redhat.com>
Date: Wed, 10 Apr 2024 09:45:56 +0200
From: David Hildenbrand <david@...hat.com>
To: Itaru Kitayama <itaru.kitayama@...ux.dev>
Cc: Ryan Roberts <ryan.roberts@....com>,
Catalin Marinas <Catalin.Marinas@....com>, Will Deacon <will@...nel.org>,
Mark Rutland <Mark.Rutland@....com>, Ard Biesheuvel <ardb@...nel.org>,
Donald Dutile <ddutile@...hat.com>, Eric Chanudet <echanude@...hat.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/4] Speed up boot with faster linear map creation
On 10.04.24 09:37, Itaru Kitayama wrote:
>
>
>> On Apr 10, 2024, at 16:10, David Hildenbrand <david@...hat.com> wrote:
>>
>> On 10.04.24 08:47, Itaru Kitayama wrote:
>>>> On Apr 10, 2024, at 8:30, Itaru Kitayama <itaru.kitayama@...ux.dev> wrote:
>>>>
>>>> Hi David,
>>>>
>>>>> On Apr 9, 2024, at 23:45, David Hildenbrand <david@...hat.com> wrote:
>>>>>
>>>>> On 09.04.24 16:39, Ryan Roberts wrote:
>>>>>> On 09/04/2024 15:29, David Hildenbrand wrote:
>>>>>>> On 09.04.24 16:13, Ryan Roberts wrote:
>>>>>>>> On 09/04/2024 12:51, David Hildenbrand wrote:
>>>>>>>>> On 09.04.24 13:29, David Hildenbrand wrote:
>>>>>>>>>> On 09.04.24 13:22, David Hildenbrand wrote:
>>>>>>>>>>> On 09.04.24 12:13, Itaru Kitayama wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> On Apr 9, 2024, at 19:04, Ryan Roberts <ryan.roberts@....com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 09/04/2024 01:10, Itaru Kitayama wrote:
>>>>>>>>>>>>>> Hi Ryan,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Apr 8, 2024, at 16:30, Ryan Roberts <ryan.roberts@....com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 06/04/2024 11:31, Itaru Kitayama wrote:
>>>>>>>>>>>>>>>> Hi Ryan,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Sat, Apr 06, 2024 at 09:32:34AM +0100, Ryan Roberts wrote:
>>>>>>>>>>>>>>>>> Hi Itaru,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 05/04/2024 08:39, Itaru Kitayama wrote:
>>>>>>>>>>>>>>>>>> On Thu, Apr 04, 2024 at 03:33:04PM +0100, Ryan Roberts wrote:
>>>>>>>>>>>>>>>>>>> Hi All,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It turns out that creating the linear map can take a significant proportion of
>>>>>>>>>>>>>>>>>>> the total boot time, especially when rodata=full. And most of the time is spent
>>>>>>>>>>>>>>>>>>> waiting on superfluous tlb invalidation and memory barriers. This series reworks
>>>>>>>>>>>>>>>>>>> the kernel pgtable generation code to significantly reduce the number of those
>>>>>>>>>>>>>>>>>>> TLBIs, ISBs and DSBs. See each patch for details.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The below shows the execution time of map_mem() across a couple of different
>>>>>>>>>>>>>>>>>>> systems with different RAM configurations. We measure after applying each patch
>>>>>>>>>>>>>>>>>>> and show the improvement relative to base (v6.9-rc2):
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
>>>>>>>>>>>>>>>>>>>                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
>>>>>>>>>>>>>>>>>>> ---------------|-------------|-------------|-------------|-------------
>>>>>>>>>>>>>>>>>>>                |      ms (%) |      ms (%) |      ms (%) |      ms (%)
>>>>>>>>>>>>>>>>>>> ---------------|-------------|-------------|-------------|-------------
>>>>>>>>>>>>>>>>>>> base           |   153   (0%)|  2227   (0%)|  8798   (0%)| 17442   (0%)
>>>>>>>>>>>>>>>>>>> no-cont-remap  |    77 (-49%)|   431 (-81%)|  1727 (-80%)|  3796 (-78%)
>>>>>>>>>>>>>>>>>>> batch-barriers |    13 (-92%)|   162 (-93%)|   655 (-93%)|  1656 (-91%)
>>>>>>>>>>>>>>>>>>> no-alloc-remap |    11 (-93%)|   109 (-95%)|   449 (-95%)|  1257 (-93%)
>>>>>>>>>>>>>>>>>>> lazy-unmap     |     6 (-96%)|    61 (-97%)|   257 (-97%)|   838 (-95%)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This series applies on top of v6.9-rc2. All mm selftests pass. I've
>>>>>>>>>>>>>>>>>>> compile and boot tested various PAGE_SIZE and VA size configs.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Changes since v1 [1]
>>>>>>>>>>>>>>>>>>> ====================
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> - Added Tested-by tags (thanks to Eric and Itaru)
>>>>>>>>>>>>>>>>>>> - Renamed ___set_pte() -> __set_pte_nosync() (per Ard)
>>>>>>>>>>>>>>>>>>> - Reordered patches (biggest impact & least controversial first)
>>>>>>>>>>>>>>>>>>> - Reordered alloc/map/unmap functions in mmu.c to aid reader
>>>>>>>>>>>>>>>>>>> - pte_clear() -> __pte_clear() in clear_fixmap_nosync()
>>>>>>>>>>>>>>>>>>> - Reverted generic p4d_index() which caused x86 build error.
>>>>>>>>>>>>>>>>>>>   Replaced with unconditional p4d_index() define under arm64.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [1] https://lore.kernel.org/linux-arm-kernel/20240326101448.3453626-1-ryan.roberts@arm.com/
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>> Ryan
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Ryan Roberts (4):
>>>>>>>>>>>>>>>>>>> arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
>>>>>>>>>>>>>>>>>>> arm64: mm: Batch dsb and isb when populating pgtables
>>>>>>>>>>>>>>>>>>> arm64: mm: Don't remap pgtables for allocate vs populate
>>>>>>>>>>>>>>>>>>> arm64: mm: Lazily clear pte table mappings from fixmap
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> arch/arm64/include/asm/fixmap.h | 5 +-
>>>>>>>>>>>>>>>>>>> arch/arm64/include/asm/mmu.h | 8 +
>>>>>>>>>>>>>>>>>>> arch/arm64/include/asm/pgtable.h | 13 +-
>>>>>>>>>>>>>>>>>>> arch/arm64/kernel/cpufeature.c | 10 +-
>>>>>>>>>>>>>>>>>>> arch/arm64/mm/fixmap.c | 11 +
>>>>>>>>>>>>>>>>>>> arch/arm64/mm/mmu.c | 377 +++++++++++++++++++++++--------
>>>>>>>>>>>>>>>>>>> 6 files changed, 319 insertions(+), 105 deletions(-)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> 2.25.1
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I've built and boot tested v2 on FVP; the base is taken from your
>>>>>>>>>>>>>>>>>> linux-rr repo. Running run_vmtests.sh on v2 left some gup_longterm
>>>>>>>>>>>>>>>>>> tests not ok, would you take a look at them? The mm kselftests used
>>>>>>>>>>>>>>>>>> are from your linux-rr repo too.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks for taking a look at this.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I can't reproduce your issue unfortunately; steps as follows on Apple
>>>>>>>>>>>>>>>>> M2 VM:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Config: arm64 defconfig + the following:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> # Squashfs for snaps, xfs for large file folios.
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_SQUASHFS_LZ4
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_SQUASHFS_LZO
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_SQUASHFS_XZ
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_SQUASHFS_ZSTD
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_XFS_FS
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> # For general mm debug.
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_DEBUG_VM
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_DEBUG_VM_MAPLE_TREE
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_DEBUG_VM_RB
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_DEBUG_VM_PGFLAGS
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_DEBUG_VM_PGTABLE
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_PAGE_TABLE_CHECK
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> # For mm selftests.
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_USERFAULTFD
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_TEST_VMALLOC
>>>>>>>>>>>>>>>>> ./scripts/config --enable CONFIG_GUP_TEST
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Running on VM with 12G memory, split across 2 (emulated) NUMA nodes
>>>>>>>>>>>>>>>>> (needed by some mm selftests), with kernel command line to reserve
>>>>>>>>>>>>>>>>> hugetlbs and other features required by some mm selftests:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> "
>>>>>>>>>>>>>>>>> transparent_hugepage=madvise earlycon root=/dev/vda2 secretmem.enable
>>>>>>>>>>>>>>>>> hugepagesz=1G hugepages=0:2,1:2 hugepagesz=32M hugepages=0:2,1:2
>>>>>>>>>>>>>>>>> default_hugepagesz=2M hugepages=0:64,1:64 hugepagesz=64K
>>>>>>>>>>>>>>>>> hugepages=0:2,1:2
>>>>>>>>>>>>>>>>> "
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ubuntu userspace running off XFS rootfs. Build and run mm selftests
>>>>>>>>>>>>>>>>> from the same git tree.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Although I don't think any of this config should make a difference to
>>>>>>>>>>>>>>>>> gup_longterm.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Looks like your errors are all "ftruncate() failed". I've seen this
>>>>>>>>>>>>>>>>> problem on our CI system. There it is due to running the tests from an
>>>>>>>>>>>>>>>>> NFS filesystem. What filesystem are you using? Perhaps you are sharing
>>>>>>>>>>>>>>>>> into the FVP using 9p? That might also be problematic.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That was it. This time I booted up the kernel including your series on
>>>>>>>>>>>>>>>> QEMU on my M1 and executed the gup_longterm program without the ftruncate
>>>>>>>>>>>>>>>> failures. When testing your kernel on FVP, I was executing the script
>>>>>>>>>>>>>>>> from the FVP's host filesystem using 9p.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm not sure exactly what the root cause is. Perhaps there isn't enough
>>>>>>>>>>>>>>> space on the disk? It might be worth enhancing the error log to provide
>>>>>>>>>>>>>>> the errno in tools/testing/selftests/mm/gup_longterm.c.
>>>>>>>>>>>>>>>
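A minimal sketch of what that could look like (untested; the wrapper name below is
made up, only the printf-style ksft_test_result_fail() helper and strerror() are
assumed to be available):

#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#include "../kselftest.h"

/* Hypothetical helper: fail the test and include the decoded errno. */
static int ftruncate_or_fail(int fd, off_t size)
{
	if (ftruncate(fd, size)) {
		ksft_test_result_fail("ftruncate() failed (%s)\n",
				      strerror(errno));
		return -1;
	}
	return 0;
}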
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Attached is the strace’d gup_longterm execution log on your
>>>>>>>>>>>>>> pgtable-boot-speedup-v2 kernel.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Sorry, are you saying that it only fails with the pgtable-boot-speedup-v2
>>>>>>>>>>>>> patch set applied? I thought we previously concluded that it was
>>>>>>>>>>>>> independent of that? I was under the impression that it was filesystem
>>>>>>>>>>>>> related and not something that I was planning to investigate.
>>>>>>>>>>>>
>>>>>>>>>>>> No, irrespective of the kernel, if using 9p on FVP the test program fails.
>>>>>>>>>>>> It is indeed 9p filesystem related; once I switched to using NFS, all the
>>>>>>>>>>>> issues were gone.
>>>>>>>>>>>
>>>>>>>>>>> Did it never work on 9p? If so, we might have to SKIP that test.
>>>>>>>>>>>
>>>>>>>>>>> openat(AT_FDCWD, "gup_longterm.c_tmpfile_BLboOt", O_RDWR|O_CREAT|O_EXCL, 0600) = 3
>>>>>>>>>>> unlinkat(AT_FDCWD, "gup_longterm.c_tmpfile_BLboOt", 0) = 0
>>>>>>>>>>> fstatfs(3, 0xffffe505a840)              = -1 EOPNOTSUPP (Operation not supported)
>>>>>>>>>>> ftruncate(3, 4096)                      = -1 ENOENT (No such file or directory)
>>>>>>>>>>
>>>>>>>>>> Note: I'm wondering if the unlinkat here is the problem that makes
>>>>>>>>>> ftruncate() with 9p result in weird errors (e.g., the hypervisor unlinked
>>>>>>>>>> the file and cannot reopen it for the fstatfs/ftruncate ... which gives us
>>>>>>>>>> weird errors here).
>>>>>>>>>>
>>>>>>>>>> Then, we should look up the fs type in run_with_local_tmpfile() before the
>>>>>>>>>> unlink() and simply skip the test if it is 9p.
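A rough, untested sketch of that idea (V9FS_MAGIC comes from <linux/magic.h>; the
helper name, the skip message and the cleanup label are made up):

#include <stdbool.h>
#include <sys/vfs.h>
#include <linux/magic.h>

#include "../kselftest.h"

/* Check the filesystem type while the tmpfile still exists on disk. */
static bool fd_is_on_9p(int fd)
{
	struct statfs fs;

	if (fstatfs(fd, &fs))
		return false;	/* Cannot tell; let the test run as before. */
	return fs.f_type == V9FS_MAGIC;
}

Then, in run_with_local_tmpfile(), before the unlink():

	if (fd_is_on_9p(fd)) {
		ksft_test_result_skip("9p does not support this test\n");
		goto close;	/* assumed cleanup label */
	}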
>>>>>>>>>
>>>>>>>>> The unlink with 9p most certainly was a known issue in the past:
>>>>>>>>>
>>>>>>>>> https://gitlab.com/qemu-project/qemu/-/issues/103
>>>>>>>>>
>>>>>>>>> Maybe it's still an issue with older hypervisors (QEMU?)? Or it was never
>>>>>>>>> completely resolved?
>>>>>>>>
>>>>>>>> I believe Itaru is running on FVP (Fixed Virtual Platform - "fast model" -
>>>>>>>> Arm's architecture emulator). So QEMU won't be involved here. The FVP emulates
>>>>>>>> a 9p device, so perhaps the bug is in there.
>>>>>>>
>>>>>>> Very likely.
>>>>>>>
>>>>>>>>
>>>>>>>> Note that I see lots of "fallocate() failed" failures in gup_longterm when
>>>>>>>> running on our CI system. This is a completely different setup: real HW with
>>>>>>>> Linux running bare metal using an NFS rootfs. I'm not sure if this is related.
>>>>>>>> Logs show it failing consistently for the "tmpfile" and "local tmpfile" test
>>>>>>>> configs. I also see a couple of these fails in the cow tests.
>>>>>>>
>>>>>>> What is the fallocate() errno you are getting? strace log would help (to see if
>>>>>>> statfs also fails already)! Likely a similar NFS issue.
>>>>>> Unfortunately this is a system I don't have access to. I've requested some
>>>>>> of this triage to be done, but it's fairly low priority.
>>>>>
>>>>> To work around these BUGs (?) elsewhere, we could simply skip the test if get_fs_type() is not able to detect the FS type. Likely that's an early indicator that the unlink() messed something up.
>>>>>
>>>>> ... doesn't feel right, though.
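Very roughly, and only as a sketch (get_fs_type() is the fstatfs()-based helper in
the test; the "unknown" constant and the cleanup label are guessed names):

	/*
	 * Hypothetical: if the filesystem cannot even be identified after the
	 * unlink() (as happens with 9p), skip rather than report a failure.
	 */
	if (get_fs_type(fd) == FS_TYPE_UNKNOWN) {
		ksft_test_result_skip("filesystem type cannot be detected\n");
		goto close;	/* assumed cleanup label */
	}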
>>>>
>>>> I think it’s a good idea so that the mm kselftests results look reasonable.
>>
>> Yeah, but this will hide BUGs elsewhere. I suspect that in Ryan's NFS setup
>> there is also a BUG lurking somewhere in the NFS implementation. But that's
>> just a guess until we have more details.
>>
>
> Ok.
>
>>>> Since you’re an expert on GUP-fast (or fast-GUP?), when you update the code, could you print out errno as well, like split_huge_page_test.c does?
>>
>> While we could, I don't see much value in that for selftests. An strace log is
>> much more valuable for understanding what is actually happening (e.g., fstatfs
>> failing), and it is quite easy to obtain.
>
> Ok.
>
>>
>>>> Thanks,
>>>> Itaru.
>>> David, attached is the strace’d execution log of the gup_longterm kselftest for the NFS case.
>>> I’m running the program on FVP; let me know if you need other logs or test results.
>>
>> For your run, it all looks good:
>>
>> openat(AT_FDCWD, "/tmp", O_RDWR|O_EXCL|O_TMPFILE, 0600) = 3
>> fcntl(3, F_GETFL) = 0x424002 (flags O_RDWR|O_LARGEFILE|O_TMPFILE)
>> fstatfs(3, {f_type=TMPFS_MAGIC, f_bsize=4096, f_blocks=416015, f_bfree=415997, f_bavail=415997, f_files=416015, f_ffree=416009, f_fsid={val=[0x8e6b7ce6, 0xe1737440]}, f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
>> ftruncate(3, 4096) = 0
>> fallocate(3, 0, 0, 4096) = 0
>>
>> -> TMPFS/SHMEM, works as expected
>>
>> openat(AT_FDCWD, "gup_longterm.c_tmpfile_WMLTNf", O_RDWR|O_CREAT|O_EXCL, 0600) = 3
>> unlinkat(AT_FDCWD, "gup_longterm.c_tmpfile_WMLTNf", 0) = 0
>> fstatfs(3, {f_type=NFS_SUPER_MAGIC, f_bsize=1048576, f_blocks=112200, f_bfree=27954, f_bavail=23296, f_files=7307264, f_ffree=4724815, f_fsid={val=[0, 0]}, f_namelen=255, f_frsize=1048576, f_flags=ST_VALID|ST_RELATIME}) = 0
>> ftruncate(3, 4096) = 0
>> fallocate(3, 0, 0, 4096) = 0
>>
>> -> NFS, works as expected
>>
>> Note that you get all skips (not fails), because your kernel is not compiled with CONFIG_GUP_TEST.
>>
>> ok 1 # SKIP gup_test not available
>
> I rebuilt the v6.9-rc3 kernel with that option enabled. This time the SKIPs are due to “need more free huge pages”; I’ll check whether preparing enough huge pages is possible even on a system with limited memory.
That's expected; you have to reserve hugetlb pages before running the
test. But the important thing is that tmpfs/nfs works for you.
--
Cheers,
David / dhildenb