Date:   Fri, 26 Jun 2020 17:40:51 +0100
From:   James Morse <james.morse@....com>
To:     Steven Price <steven.price@....com>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki Poulose <Suzuki.Poulose@....com>,
        "kvmarm@...ts.cs.columbia.edu" <kvmarm@...ts.cs.columbia.edu>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Dave P Martin <Dave.Martin@....com>,
        Mark Rutland <Mark.Rutland@....com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 2/2] arm64: kvm: Introduce MTE VCPU feature

Hi Steve,

On 17/06/2020 16:34, Steven Price wrote:
> On 17/06/2020 15:38, Catalin Marinas wrote:
>> On Wed, Jun 17, 2020 at 01:38:44PM +0100, Steven Price wrote:
>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>> index e3b9ee268823..040a7fffaa93 100644
>>> --- a/virt/kvm/arm/mmu.c
>>> +++ b/virt/kvm/arm/mmu.c
>>> @@ -1783,6 +1783,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>               vma_pagesize = PMD_SIZE;
>>>       }
>>>   +    if (system_supports_mte() && kvm->arch.vcpu_has_mte) {
>>> +        /*
>>> +         * VM will be able to see the page's tags, so we must ensure
>>> +         * they have been initialised.
>>> +         */
>>> +        struct page *page = pfn_to_page(pfn);
>>> +
>>> +        if (!test_and_set_bit(PG_mte_tagged, &page->flags))
>>> +            mte_clear_page_tags(page_address(page), page_size(page));
>>> +    }
>>
>> Are all the guest pages always mapped via a Stage 2 fault? It may be
>> better if we did that via kvm_set_spte_hva().

> I was under the impression that pages are always faulted into the stage 2, but I have to
> admit I'm not 100% sure about that.

I think there is only one case: VMAs with VM_PFNMAP set get pre-populated during
kvm_arch_prepare_memory_region(), but those are always mapped as device at stage 2, so MTE
isn't a concern there.
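
For reference, if I remember right user_mem_abort() keys the device case off pfn_valid(),
so a pfn without a struct page behind it would never reach the tag-clearing hunk above
anyway:

static bool kvm_is_device_pfn(unsigned long pfn)
{
	return !pfn_valid(pfn);
}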


> kvm_set_spte_hva() may be more appropriate, although on first look I don't understand how
> that function deals with huge pages. Is it actually called for normal mappings or only for
> changes due to the likes of KSM?

It looks like it's only called through set_pte_at_notify(), which is used by things like
KSM/COW that change a mapping and really don't want to fault it a second time. I guess
those are only PAGE_SIZE mappings.

Other mapping sizes would get faulted in by user_mem_abort().
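
To make that concrete: if the tag initialisation were done from kvm_set_spte_hva() instead,
it could only ever cover single pages. An untested sketch, reusing system_supports_mte(),
the vcpu_has_mte flag and mte_clear_page_tags() from your patch:

int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
{
	kvm_pfn_t pfn = pte_pfn(pte);

	/* set_pte_at_notify() only deals in single ptes */
	if (system_supports_mte() && kvm->arch.vcpu_has_mte && pfn_valid(pfn)) {
		struct page *page = pfn_to_page(pfn);

		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
			mte_clear_page_tags(page_address(page), PAGE_SIZE);
	}

	/* ... existing stage 2 update, unchanged ... */
	return 0;
}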


I think this should happen in the same places as we clean new pages to the PoC, as that is
also an additional piece of maintenance KVM has to do that the host's stage 1 doesn't need.

You may be able to rename clean_dcache_guest_page() to encompass the maintenance we need
whenever a page becomes accessible to a different EL1.
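
Something along these lines - just an untested sketch, with an invented name and assuming
the helper grows a kvm argument so it can check the vcpu_has_mte flag your patch adds:

static void sanitise_new_guest_page(struct kvm *kvm, kvm_pfn_t pfn,
				    unsigned long size)
{
	/* what clean_dcache_guest_page() does today */
	__clean_dcache_guest_page(pfn, size);

	/* plus the tag initialisation from your patch */
	if (system_supports_mte() && kvm->arch.vcpu_has_mte) {
		struct page *page = pfn_to_page(pfn);

		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
			mte_clear_page_tags(page_address(page),
					    page_size(page));
	}
}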


Thanks,

James
