Message-Id: <20240131155929.169961-1-alexghiti@rivosinc.com>
Date: Wed, 31 Jan 2024 16:59:25 +0100
From: Alexandre Ghiti <alexghiti@...osinc.com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Andrew Morton <akpm@...ux-foundation.org>,
Ved Shanbhogue <ved@...osinc.com>,
Matt Evans <mev@...osinc.com>,
Dylan Jhong <dylan@...estech.com>,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-mips@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org,
linux-riscv@...ts.infradead.org,
linux-mm@...ck.org
Cc: Alexandre Ghiti <alexghiti@...osinc.com>
Subject: [PATCH RFC v2 0/4] Svvptc extension to remove preventive sfence.vma

In RISC-V, after a new mapping is established, an sfence.vma needs to be
emitted for different reasons:

- if the uarch caches invalid entries, we need to invalidate them, otherwise
  we would trap on such an invalid entry,
- if the uarch does not cache invalid entries, a reordered access could fail
  to see the new mapping and then trap (sfence.vma acts as a fence).
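
For reference, here is a minimal sketch (illustrative only, not code from
this series) of the preventive pattern in question:

    /* After writing a new page-table entry, preventively order the
     * update before any subsequent implicit translation so that
     * neither a cached invalid entry nor a reordered walk makes the
     * next access trap. */
    static inline void preventive_sfence_vma(void)
    {
            __asm__ __volatile__ ("sfence.vma" : : : "memory");
    }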

We can actually avoid emitting those (mostly) useless and costly sfence.vma
instructions by handling the traps instead:

- for new kernel mappings: only vmalloc mappings need to be taken care of;
  other new mappings are rare and already emit the required sfence.vma when
  needed. This must be achieved very early in the exception path, as
  explained in patch 3, and it also fixes our fragile way of dealing with
  vmalloc faults (see the sketch after this list).

- for new user mappings: Svvptc makes update_mmu_cache() a no-op, and no
  traps can happen since xRET instructions now act as fences.
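
To make the kernel-mapping case concrete, here is a rough C-level sketch of
the idea; the real check is done in assembly very early in entry.S (patch 3),
and this_cpu_saw_new_vmalloc() is a made-up name standing in for the per-cpu
state that flush_cache_vmap() would set:

    /* Hypothetical C rendering of the early exception-path check:
     * if the fault hit the vmalloc area and a new vmalloc mapping
     * was published since this hart last synchronized, emit the
     * deferred sfence.vma and retry the faulting access. */
    static bool handle_stale_vmalloc_fault(unsigned long addr)
    {
            if (addr < VMALLOC_START || addr >= VMALLOC_END)
                    return false;

            if (!this_cpu_saw_new_vmalloc())  /* illustrative name */
                    return false;

            local_flush_tlb_all();  /* boils down to sfence.vma */
            return true;            /* resume and retry the access */
    }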

Patches 1 and 2 introduce Svvptc extension probing.

It's still an RFC because Svvptc is not ratified yet.

On a uarch that does not cache invalid entries, running a 6.5 kernel, the
gains are measurable:
* Kernel boot: 6%
* ltp - mmapstress01: 8%
* lmbench - lat_pagefault: 20%
* lmbench - lat_mmap: 5%
Thanks to Ved and Matt Evans for triggering the discussion that led to
this patchset!

Any feedback, tests, or relevant benchmarks are welcome :)
Changes in v2:
- Rebase on top of 6.8-rc1
- Remove the patch with runtime detection of TLB caching and the debugfs patch
- Add patch that probes Svvptc
- Add patch that defines the new Svvptc dt-binding
- Leave the behaviour as-is for uarchs that cache invalid TLB entries since
I don't have any good perf numbers
- Address comments from Christoph on v1
- Fix a race condition in the new_vmalloc update:

    ld a2, 0(a0)   <= this could load something which is != -1
    not a1, a1     <= here, or in the instruction after, flush_cache_vmap()
                      could set the whole bitmap to 1
    and a1, a2, a1
    sd a1, 0(a0)   <= here we would clear bits that should not be cleared!

  Instead, replace the whole sequence with:

    amoxor.w a0, a1, (a0)
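
In C terms, the fix turns the read-modify-write into a single atomic
operation; here is a rough analogue using C11 atomics (assuming, as in the
asm above, that the caller's bit is currently set so that xor clears it):

    #include <stdatomic.h>

    /* This hart's bit is known to be set on entry, so an atomic xor
     * clears exactly that bit without losing bits that a concurrent
     * flush_cache_vmap() may set; on RISC-V this can compile down to
     * a single amoxor.w. */
    static void clear_new_vmalloc_bit(_Atomic unsigned int *word,
                                      unsigned int mask)
    {
            atomic_fetch_xor_explicit(word, mask, memory_order_relaxed);
    }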
Alexandre Ghiti (4):
riscv: Add ISA extension parsing for Svvptc
dt-bindings: riscv: Add Svvptc ISA extension description
riscv: Stop emitting preventive sfence.vma for new vmalloc mappings
riscv: Stop emitting preventive sfence.vma for new userspace mappings
with Svvptc
.../devicetree/bindings/riscv/extensions.yaml | 7 ++
arch/riscv/include/asm/cacheflush.h | 18 +++-
arch/riscv/include/asm/hwcap.h | 1 +
arch/riscv/include/asm/pgtable.h | 16 +++-
arch/riscv/include/asm/thread_info.h | 5 ++
arch/riscv/kernel/asm-offsets.c | 5 ++
arch/riscv/kernel/cpufeature.c | 1 +
arch/riscv/kernel/entry.S | 84 +++++++++++++++++++
arch/riscv/mm/init.c | 2 +
arch/riscv/mm/pgtable.c | 13 +++
10 files changed, 150 insertions(+), 2 deletions(-)
--
2.39.2