Message-ID: <20200320230758.GA8418@linux.intel.com>
Date:   Fri, 20 Mar 2020 16:07:58 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     Christian Borntraeger <borntraeger@...ibm.com>,
        Janosch Frank <frankja@...ux.ibm.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, Qian Cai <cai@....pw>
Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots

On Fri, Mar 20, 2020 at 06:58:02PM -0400, Peter Xu wrote:
> On Fri, Mar 20, 2020 at 03:40:41PM -0700, Sean Christopherson wrote:
> > > I thought resetting lru_slot to zero would be good enough when
> > > duplicating the slots... however if we want to do finer grained reset,
> > > maybe even better to reset also those invalidated ones (since we know
> > > this information)?
> > > 
> > >    	if (slots->lru_slot >= slots->id_to_index[memslot->id])
> > >    		slots->lru_slot = 0;
> > 
> > I'd prefer to go with the more obviously correct version.  This code will
> > rarely trigger, e.g. larger slots have lower indices and are more likely to
> > be the LRU (by virtue of being the biggest), and dynamic memslot deletion
> > is usually for relatively small ranges that are being remapped by the guest.
> 
> IMHO the more obviously correct version would be to unconditionally set
> lru_slot to something invalid (e.g. -1) in kvm_dup_memslots(), then, in
> the two search_memslots(), skip the cache if lru_slot is invalid.
> That's IMHO easier to understand than the "!slots->used_slots" checks.
> But I've no strong opinion.
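
Spelled out, that suggestion would look something like the sketch below;
the -1 sentinel and the exact placement are assumptions, not code from the
applied patch:

	/* kvm_dup_memslots(): invalidate the cached index in the new copy
	 * (assumed placement, per the suggestion quoted above).
	 */
	atomic_set(&slots->lru_slot, -1);

	/* search_memslots(): consult the cache only when it holds a valid
	 * index, i.e. skip it entirely after an invalidation.
	 */
	int slot = atomic_read(&slots->lru_slot);

	if (slot >= 0 &&
	    gfn >= memslots[slot].base_gfn &&
	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
		return &memslots[slot];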

We'd still need the !slots->used_slots check.  I considered adding an
explicit check on @slot for the cache lookup, e.g.

static inline struct kvm_memory_slot *
search_memslots(struct kvm_memslots *slots, gfn_t gfn)
{
	int start = 0, end = slots->used_slots;
	int slot = atomic_read(&slots->lru_slot);
	struct kvm_memory_slot *memslots = slots->memslots;

	if (likely(slot < slots->used_slots) &&
	    gfn >= memslots[slot].base_gfn &&
	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
		return &memslots[slot];

But then the back half of the function still ends up with an out-of-bounds
bug.  E.g. if used_slots == 0, then start == 0, and memslots[start].base_gfn
accesses garbage.

	while (start < end) {
		slot = start + (end - start) / 2;

		if (gfn >= memslots[slot].base_gfn)
			end = slot;
		else
			start = slot + 1;
	}

	if (gfn >= memslots[start].base_gfn &&
	    gfn < memslots[start].base_gfn + memslots[start].npages) {
		atomic_set(&slots->lru_slot, start);
		return &memslots[start];
	}

	return NULL;
}
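
For completeness, the !slots->used_slots check referenced above would sit
at the top of search_memslots(), before the cached lookup, so that neither
the cache-hit path nor the binary search can index into an empty
memslots[] array.  A minimal sketch (the exact form in the applied patch
may differ):

	if (unlikely(!slots->used_slots))
		return NULL;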

I also didn't want to invalidate the lru_slot any more than is necessary,
not that I'd expect it to make a noticeable difference even in a
pathological scenario, but it seemed prudent.
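
Concretely, "no more than necessary" would mean resetting the cache only
when the cached index can no longer be in range, e.g. a sketch along these
lines (the placement in the memslot-deletion path is an assumption):

	/* Reset the cache only if the cached index is out of range for the
	 * shrunken array; otherwise leave the still-valid hint alone.
	 */
	if (atomic_read(&slots->lru_slot) >= slots->used_slots)
		atomic_set(&slots->lru_slot, 0);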
