Date:	Thu, 18 Apr 2013 14:34:34 -0300
From:	Marcelo Tosatti <mtosatti@...hat.com>
To:	Gleb Natapov <gleb@...hat.com>
Cc:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v2 0/7] KVM: MMU: fast zap all shadow pages

On Thu, Apr 18, 2013 at 07:36:03PM +0300, Gleb Natapov wrote:
> On Thu, Apr 18, 2013 at 11:01:18AM -0300, Marcelo Tosatti wrote:
> > On Thu, Apr 18, 2013 at 12:42:39PM +0300, Gleb Natapov wrote:
> > > > > that, but if not then less code is better.
> > > > 
> > > > The number of sp->role.invalid=1 pages is small (only shadow roots). It
> > > > can grow, but is bounded at a handful. No improvement visible there.
> > > > 
> > > > The number of shadow pages with old mmu_gen_number is potentially large.
> > > > 
> > > > Returning all shadow pages to the allocator is problematic because it
> > > > takes a long time (therefore the suggestion to postpone it).
> > > > 
> > > > Spreading the work of freeing (or reusing) those shadow pages across
> > > > individual page fault instances alleviates the mmu_lock hold time issue
> > > > without significantly slowing down operation after kvm_mmu_zap_all
> > > > (which has to rebuild all pagetables anyway).
> > > > 
> > > > You prefer to modify the SLAB allocator to aggressively free these stale
> > > > shadow pages, rather than have kvm_mmu_get_page reuse them?
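
To make the reuse alternative concrete, a rough sketch (the helper and
the mmu_gen_number field are hypothetical names loosely following the
ideas in Xiao's series, not his actual patch):

/*
 * Sketch only: called from kvm_mmu_get_page() under mmu_lock when the
 * hash lookup misses, instead of allocating a fresh shadow page.  A
 * page whose generation number is stale is unlinked and recycled in
 * place.
 */
static struct kvm_mmu_page *kvm_mmu_reuse_obsolete_page(struct kvm *kvm)
{
	struct kvm_mmu_page *sp;

	list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
		if (sp->mmu_gen_number == kvm->arch.mmu_gen_number)
			continue;	/* still current, not reusable */
		if (sp->role.invalid)
			continue;	/* invalid roots are zapped separately */
		/* drop stale sptes and parent links, then recycle */
		kvm_mmu_page_unlink_children(kvm, sp);
		kvm_mmu_unlink_parents(kvm, sp);
		sp->mmu_gen_number = kvm->arch.mmu_gen_number;
		return sp;
	}
	return NULL;	/* nothing stale; caller allocates as usual */
}
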
> > > Are you saying that what makes kvm_mmu_zap_all() slow is that we return
> > > all the shadow pages to the SLAB allocator? As far as I understand what
> > > makes it slow is walking over a huge number of shadow pages via various
> > > lists, actually releasing them to the SLAB is not an issue, otherwise
> > > the problem could have been solved by just moving
> > > kvm_mmu_commit_zap_page() out of the mmu_lock. If there is measurable
> > > SLAB overhead from not reusing the pages, I am all for reusing them, but
> > > is this really the case or just premature optimization?
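
(To illustrate the restructuring Gleb alludes to: if freeing were the
expensive part, the commit step could be split so that the actual
freeing happens outside the lock. Sketch only, glossing over the TLB
flush ordering that kvm_mmu_commit_zap_page() takes care of today:)

	LIST_HEAD(invalid_list);
	struct kvm_mmu_page *sp, *nsp;

	spin_lock(&kvm->mmu_lock);
	/*
	 * walk and unlink shadow pages into invalid_list via
	 * kvm_mmu_prepare_zap_page(), then flush TLBs while still locked
	 */
	kvm_flush_remote_tlbs(kvm);
	spin_unlock(&kvm->mmu_lock);

	/* the actual freeing, now outside mmu_lock */
	list_for_each_entry_safe(sp, nsp, &invalid_list, link)
		kvm_mmu_free_page(sp);
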
> > 
> > Actually releasing them is not a problem. Walking all pages and lists,
> > and releasing them in the process, is part of the problem ("returning
> > them to the allocator" would have been clearer as "freeing them").
> > 
> > The point is that at some point you have to walk all pages and release
> > their data structures. With Xiao's scheme it's possible to avoid this
> > lengthy process by either:
> > by either:
> > 
> > 1) reusing the pages with stale generation number
> > or
> > 2) releasing them via the SLAB shrinker more aggressively
> > 
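
A sketch of what 2) could look like, reusing the hypothetical
stale-generation test from the sketch above (again illustrative, not an
actual patch):

/*
 * Sketch only: teach the MMU shrinker to prefer shadow pages whose
 * generation number is stale, so they are released soon after a
 * zap-all bumps the generation.
 */
static int kvm_mmu_shrink_obsolete(struct kvm *kvm, int nr_to_scan)
{
	struct kvm_mmu_page *sp, *nsp;
	LIST_HEAD(invalid_list);
	int freed = 0;

	spin_lock(&kvm->mmu_lock);
	list_for_each_entry_safe(sp, nsp, &kvm->arch.active_mmu_pages, link) {
		if (nr_to_scan-- == 0)
			break;
		if (sp->mmu_gen_number == kvm->arch.mmu_gen_number)
			continue;	/* still current, leave it alone */
		freed += kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
	}
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);

	return freed;
}
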
> But is it really so? The number of allocated shadow pages is limited
> via the n_max_mmu_pages mechanism, so I expect most freeing to happen
> in make_mmu_pages_available(), which is called during page faults;
> freeing will therefore be spread across page faults more or less
> equally. Doing kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page()
> and zapping an unknown number of shadow pages during
> kvm_mmu_get_page() just to reuse one does not sound like a clear win
> to me.

Makes sense.
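
For reference, the throttling path Gleb refers to looks roughly like
this (paraphrased from make_mmu_pages_available() in
arch/x86/kvm/mmu.c; details elided):

static void make_mmu_pages_available(struct kvm_vcpu *vcpu)
{
	LIST_HEAD(invalid_list);

	if (likely(kvm_mmu_available_pages(vcpu->kvm) >=
		   KVM_MIN_FREE_MMU_PAGES))
		return;

	/* zap the oldest shadow pages until we are back under the limit */
	while (kvm_mmu_available_pages(vcpu->kvm) < KVM_REFILL_PAGES) {
		if (!prepare_zap_oldest_mmu_page(vcpu->kvm, &invalid_list))
			break;
		++vcpu->kvm->stat.mmu_recycled;
	}
	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
}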

> > (Another typo: I meant "SLAB shrinker", not "SLAB allocator".)
> > 
> > But you seem to be concerned about 1) due to code complexity issues?
> > 
> It adds code that looks redundant to me. I may be wrong, of course; if
> it is a demonstrable win I am all for it.

Ditto.
