Message-ID: <X7VQua7YO4isMFPU@trantor>
Date: Wed, 18 Nov 2020 16:50:01 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Steven Price <steven.price@....com>
Cc: Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Dave Martin <Dave.Martin@....com>,
Mark Rutland <mark.rutland@....com>,
Thomas Gleixner <tglx@...utronix.de>, qemu-devel@...gnu.org,
Juan Quintela <quintela@...hat.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
Richard Henderson <richard.henderson@...aro.org>,
Peter Maydell <peter.maydell@...aro.org>,
Haibo Xu <Haibo.Xu@....com>, Andrew Jones <drjones@...hat.com>
Subject: Re: [PATCH v4 2/2] arm64: kvm: Introduce MTE VCPU feature
On Wed, Nov 18, 2020 at 04:01:20PM +0000, Steven Price wrote:
> On 17/11/2020 16:07, Catalin Marinas wrote:
> > On Mon, Oct 26, 2020 at 03:57:27PM +0000, Steven Price wrote:
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index 19aacc7d64de..38fe25310ca1 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -862,6 +862,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >  	if (vma_pagesize == PAGE_SIZE && !force_pte)
> > >  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
> > >  							   &pfn, &fault_ipa);
> > > +
> > > +	/*
> > > +	 * The otherwise redundant test for system_supports_mte() allows the
> > > +	 * code to be compiled out when CONFIG_ARM64_MTE is not present.
> > > +	 */
> > > +	if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
> > > +		/*
> > > +		 * VM will be able to see the page's tags, so we must ensure
> > > +		 * they have been initialised.
> > > +		 */
> > > +		struct page *page = pfn_to_page(pfn);
> > > +		long i, nr_pages = compound_nr(page);
> > > +
> > > +		/* if PG_mte_tagged is set, tags have already been initialised */
> > > +		for (i = 0; i < nr_pages; i++, page++) {
> > > +			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
> > > +				mte_clear_page_tags(page_address(page));
> > > +		}
> > > +	}
> >
> > If this page was swapped out and mapped back in, where does the
> > restoring from swap happen?
>
> Restoring from swap happens above this in the call to gfn_to_pfn_prot()
Looking at the call chain, gfn_to_pfn_prot() ends up calling
get_user_pages() on current->mm (the VMM's mm), which does a
set_pte_at(), presumably restoring the tags. Does this mean that all
memory mapped by the VMM in user space should have PROT_MTE set?
Otherwise we don't take the mte_sync_tags() path in set_pte_at() and no
tags are restored from swap (we do save them on swap-out, since
PG_mte_tagged was set when the pages were first mapped).
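
For reference, the gating I'm referring to looks roughly like this
(paraphrased from arch/arm64/include/asm/pgtable.h, simplified and from
memory, so not the exact code):

	static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep, pte_t pte)
	{
		/* only tagged (PROT_MTE) mappings reach mte_sync_tags() */
		if (system_supports_mte() && pte_present(pte) &&
		    pte_tagged(pte) && !pte_special(pte))
			mte_sync_tags(ptep, pte);

		__check_racy_pte_update(mm, ptep, pte);
		set_pte(ptep, pte);
	}

Without PROT_MTE on the VMM mapping, pte_tagged() is false and the
whole MTE path is skipped.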
So I think the code above should mirror mte_sync_tags(), perhaps even
calling a common function, but I'm not sure where user_mem_abort()
would get the swap pte from.
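
To illustrate (an untested sketch; sanitise_mte_tags() is a made-up
name and the open question is what to pass as ptep):

	static void sanitise_mte_tags(struct page *page, pte_t *ptep)
	{
		long i, nr_pages = compound_nr(page);
		/* as in mte_sync_tags(), a swap pte can only map a single page */
		bool check_swap = nr_pages == 1;

		for (i = 0; i < nr_pages; i++, page++) {
			if (test_and_set_bit(PG_mte_tagged, &page->flags))
				continue;

			if (check_swap && ptep && is_swap_pte(READ_ONCE(*ptep))) {
				swp_entry_t entry = pte_to_swp_entry(READ_ONCE(*ptep));

				/* restore the saved tags instead of zeroing them */
				if (!non_swap_entry(entry) &&
				    mte_restore_tags(entry, page))
					continue;
			}

			mte_clear_page_tags(page_address(page));
		}
	}

If user_mem_abort() can only pass ptep == NULL, we are back to zeroing
the tags, which is exactly the problem.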
An alternative is to only enable HCR_EL2.ATA and MTE in the guest if
the VMM mapped the memory with PROT_MTE.
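
Enforcing that at stage 2 fault time would be simple, e.g. (untested
sketch, placed in user_mem_abort() while the vma is still valid under
mmap_read_lock; VM_MTE is the vm_flags bit set by mmap() with
PROT_MTE):

	/* refuse to map into the guest memory not mapped with PROT_MTE */
	if (system_supports_mte() && kvm->arch.mte_enabled &&
	    !(vma->vm_flags & VM_MTE))
		return -EFAULT;

though it does push a requirement onto the VMM.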
Yet another option is to always call mte_sync_tags() from set_pte_at()
and defer the pte_tagged() or is_swap_pte() checks to the MTE code.
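
i.e. drop the pte_tagged() check from set_pte_at() (sketch):

	/* in set_pte_at(): defer the tagged/swap checks to mte_sync_tags() */
	if (system_supports_mte() && pte_present(pte) && !pte_special(pte))
		mte_sync_tags(ptep, pte);

and let mte_sync_tags() look at pte_tagged() and/or is_swap_pte()
itself before deciding whether to restore, zero or skip the tags.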
--
Catalin