Message-ID: <87plnktt2q.wl-maz@kernel.org>
Date: Mon, 28 Oct 2024 10:33:49 +0000
From: Marc Zyngier <maz@...nel.org>
To: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org>
Cc: linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev,
Suzuki K Poulose <Suzuki.Poulose@....com>,
Steven Price <steven.price@....com>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Joey Gouly <joey.gouly@....com>,
Zenghui Yu <yuzenghui@...wei.com>
Subject: Re: [PATCH 3/4] arm64: mte: update code comments
On Mon, 28 Oct 2024 09:40:13 +0000,
"Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org> wrote:
>
> commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
> initialisation") updated the locking such that the kernel now allows
> VM_SHARED mappings with MTE. Update the code comment to reflect this.
>
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
> ---
> arch/arm64/kvm/mmu.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
This is a KVM patch. Please make sure you write the subject
accordingly, matching the existing conventions (in this case, this
should read something like: "KVM: arm64: MTE: Update...").
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index a509b63bd4dd..b5824e93cee0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
> * able to see the page's tags and therefore they must be initialised first. If
> * PG_mte_tagged is set, tags have already been initialised.
> *
> - * The race in the test/set of the PG_mte_tagged flag is handled by:
> - * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
> - * racing to santise the same page
> - * - mmap_lock protects between a VM faulting a page in and the VMM performing
> - * an mprotect() to add VM_MTE
> + * The race in the test/set of the PG_mte_tagged flag is handled by
> + * using PG_mte_lock and PG_mte_tagged together.
How? This comment is pretty content-free. To be useful, you should
elaborate on *how* these two are used together.
> */
> static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> unsigned long size)
> @@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> }
>
> if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
> - /* Check the VMM hasn't introduced a new disallowed VMA */
> + /*
> + * not a permission fault implies a translation fault which
> + * means mapping the page for the first time
How about an Access fault due to page ageing?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.