Date:   Fri, 30 Jul 2021 14:04:33 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Will Deacon <will@...nel.org>
Cc:     linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        qperret@...gle.com, dbrazdil@...gle.com,
        Srivatsa Vaddagiri <vatsa@...eaurora.org>,
        Shanker R Donthineni <sdonthineni@...dia.com>,
        James Morse <james.morse@....com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Alexandru Elisei <alexandru.elisei@....com>,
        kernel-team@...roid.com
Subject: Re: [PATCH 04/16] KVM: arm64: Add MMIO checking infrastructure

On Fri, 30 Jul 2021 13:26:59 +0100,
Will Deacon <will@...nel.org> wrote:
> 
> On Wed, Jul 28, 2021 at 10:57:30AM +0100, Marc Zyngier wrote:
> > On Tue, 27 Jul 2021 19:11:08 +0100,
> > Will Deacon <will@...nel.org> wrote:
> > > On Thu, Jul 15, 2021 at 05:31:47PM +0100, Marc Zyngier wrote:
> > > > +bool kvm_install_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
> > > > +{
> > > > +	struct kvm_mmu_memory_cache *memcache;
> > > > +	struct kvm_memory_slot *memslot;
> > > > +	int ret, idx;
> > > > +
> > > > +	if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
> > > > +		return false;
> > > > +
> > > > +	/* Must be page-aligned */
> > > > +	if (ipa & ~PAGE_MASK)
> > > > +		return false;
> > > > +
> > > > +	/*
> > > > +	 * The page cannot be in a memslot. At some point, this will
> > > > +	 * have to deal with device mappings though.
> > > > +	 */
> > > > +	idx = srcu_read_lock(&vcpu->kvm->srcu);
> > > > +	memslot = gfn_to_memslot(vcpu->kvm, ipa >> PAGE_SHIFT);
> > > > +	srcu_read_unlock(&vcpu->kvm->srcu, idx);
> > > 
> > > What does this memslot check achieve? A new memslot could be added after
> > > you've checked, no?
> > 
> > If you start allowing S2 annotations to coexist with potential memory
> > mappings, you're in for trouble. The faulting logic will happily
> > overwrite the annotation, and that's probably not what you want.
> 
> I don't disagree, but the check above appears to be racy.

Yup, the srcu_read_unlock() should be moved to the end of this
function. It's rather silly as it is currently written...
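
Something along these lines, keeping the SRCU read-side critical
section open until the annotation has actually been installed (a
sketch only, against the snippet quoted above; the middle of the
function is elided and just summarised in a comment):

bool kvm_install_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
{
	struct kvm_memory_slot *memslot;
	bool ret = false;
	int idx;

	if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
		return false;

	/* Must be page-aligned */
	if (ipa & ~PAGE_MASK)
		return false;

	idx = srcu_read_lock(&vcpu->kvm->srcu);

	/* The page cannot be in a memslot */
	memslot = gfn_to_memslot(vcpu->kvm, ipa >> PAGE_SHIFT);
	if (memslot)
		goto out_unlock;

	/*
	 * ... install the MMIO guard annotation in the stage-2
	 * tables here, still under the SRCU read lock, so that the
	 * memslot layout cannot change under our feet ...
	 */
	ret = true;

out_unlock:
	srcu_read_unlock(&vcpu->kvm->srcu, idx);
	return ret;
}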

> 
> > As for new (or moving) memslots, I guess they should be checked
> > against existing annotations.
> 
> Something like that, but the devil is in the details as it will need to
> synchronize with this check somehow.

The SRCU read lock should protect us against a memslot being removed
whilst we're accessing it. In a way, this is no different from taking
a page fault.

For new memslots, it is a lot less clear. There are multiple levels of
locking, more or less documented... It feels like slots_arch_lock is
the right tool for this job, but I need to page all that stuff in...
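
If that turns out to be the right lock, the memslot side could grow
something vaguely like this, called from
kvm_arch_prepare_memory_region() (completely untested, and both names
below are made up -- kvm_ioguard_range_annotated() would have to walk
the stage-2 tables looking for existing annotations):

static int kvm_memslot_check_ioguard(struct kvm *kvm,
				     struct kvm_memory_slot *memslot,
				     enum kvm_mr_change change)
{
	int ret = 0;

	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE)
		return 0;

	/*
	 * Refuse to create or move a memslot on top of a range that
	 * already carries an MMIO guard annotation. slots_arch_lock
	 * would serialise this against kvm_install_ioguard_page().
	 */
	mutex_lock(&kvm->slots_arch_lock);
	if (kvm_ioguard_range_annotated(kvm, memslot->base_gfn,
					memslot->npages))
		ret = -EINVAL;
	mutex_unlock(&kvm->slots_arch_lock);

	return ret;
}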

	M.

-- 
Without deviation from the norm, progress is not possible.
