Message-ID: <20130422092117.GM8997@redhat.com>
Date:	Mon, 22 Apr 2013 12:21:17 +0300
From:	Gleb Natapov <gleb@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc:	mtosatti@...hat.com, avi.kivity@...il.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v3 00/15] KVM: MMU: fast zap all shadow pages

On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
> On 04/21/2013 09:03 PM, Gleb Natapov wrote:
> > On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
> >> This patchset is based on my previous two patchset:
> >> [PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
> >> (https://lkml.org/lkml/2013/4/1/2)
> >>
> >> [PATCH v2 0/6] KVM: MMU: fast invalid all mmio sptes
> >> (https://lkml.org/lkml/2013/4/1/134)
> >>
> >> Changlog:
> >> V3:
> >>   completely redesign the algorithm, please see below.
> >>
> > This looks pretty complicated. Is it still needed in order to avoid soft
> > lockups after "avoid potential soft lockup and unneeded mmu reload" patch?
> 
> Yes.
> 
> I discussed this point with Marcelo:
> 
> ======
> BTW, to be honest, I do not think spin_needbreak is a good approach - it does
> not fix the hot-lock contention; it just burns more CPU time to avoid
> possible soft lockups.
> 
> In particular, zap-all-shadow-pages lets other vcpus fault and contend for
> mmu-lock; when zap-all-shadow-pages releases mmu-lock and waits, the other
> vcpus create page tables again. zap-all-shadow-pages needs a long time to
> finish, and in the worst case it can never complete under intensive vcpu and
> memory usage.
> 
So what about a mixed approach: use generation numbers and reload roots to
quickly invalidate all shadow pages, and then do kvm_mmu_zap_all_invalid().
kvm_mmu_zap_all_invalid() is a new function that invalidates only shadow
pages with a stale generation number (and uses the lock-break technique). It
may traverse active_mmu_pages from tail to head, since new shadow pages
are added to the head of the list, or it may use the invalid slot's rmap to
find exactly what should be invalidated.

> I still think the right way to fix this kind of thing is to optimize
> mmu-lock.
> ======
> 
> Which parts scare you? Let's find a way to optimize them. ;) For example,
> if you do not like unmap_memslot_rmap_nolock(), we can simplify it - we can
> use walk_shadow_page_lockless_begin() and walk_shadow_page_lockless_end() to
> protect sptes instead of kvm->being_unmaped_rmap.
> 

kvm->being_unmaped_rmap is particularly tricky, although it looks
correct. The additional indirection through rmap ops also does not help when
following the code. I'd rather have if (slot is invalid) in a couple of places
where things should be done differently; in most places it would be WARN_ON(slot
is invalid).

--
			Gleb.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
