Message-ID: <514BBDC5.6090104@linux.vnet.ibm.com>
Date:	Fri, 22 Mar 2013 10:11:17 +0800
From:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
CC:	gleb@...hat.com, linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v2 0/7] KVM: MMU: fast zap all shadow pages

On 03/22/2013 06:21 AM, Marcelo Tosatti wrote:
> On Wed, Mar 20, 2013 at 04:30:20PM +0800, Xiao Guangrong wrote:
>> Changelog:
>> V2:
>>   - do not reset n_requested_mmu_pages and n_max_mmu_pages
>>   - batch free root shadow pages to reduce vcpu notification and mmu-lock
>>     contention
>>   - remove the first patch that introduced kvm->arch.mmu_cache, since in
>>     this version we only 'memset zero' the hashtable rather than all mmu
>>     cache members
>>   - remove unnecessary kvm_reload_remote_mmus after kvm_mmu_zap_all
>>
>> * Issue
>> The current kvm_mmu_zap_all is really slow - it holds mmu-lock while it
>> walks and zaps all shadow pages one by one, and it also needs to zap every
>> guest page's rmap and every shadow page's parent spte list. Things get
>> worse as the guest uses more memory or more vcpus; it does not scale.
> 
> Xiao, 
> 
> The bulk removal of shadow pages from the mmu cache is unnerving - it creates
> two codepaths for deleting a data structure: the usual single-entry one
> and the bulk one.
> 
> There are two main use cases for kvm_mmu_zap_all(): invalidating the
> current mmu tree (from kvm_set_memory) and tearing down all pages
> (VM shutdown).
> 
> The first use case can use your idea of an invalid generation number
> on shadow pages: increment the VM generation number, nuke the root
> pages, and that's it.
> 
> The modifications should mostly be contained to kvm_mmu_get_page(),
> correct? (We would also have to keep counters to increase the SLAB
> freeing ratio relative to the number of outdated shadow pages.)

Yes.
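
Roughly, what I have in mind is something like the sketch below. This is
only an illustration - mmu_valid_gen, kvm_mmu_invalidate_all_pages,
kvm_mmu_zap_roots and sp_is_obsolete are made-up names, not code from
this patchset:

/* every shadow page records the generation it was created in */
struct kvm_mmu_page {
        /* ... existing fields ... */
        unsigned long mmu_valid_gen;
};

/* the kvm_set_memory case: bump the generation and zap only the roots */
static void kvm_mmu_invalidate_all_pages(struct kvm *kvm)
{
        spin_lock(&kvm->mmu_lock);
        kvm->arch.mmu_valid_gen++;      /* every existing sp becomes stale */
        kvm_mmu_zap_roots(kvm);         /* force vcpus to reload their roots */
        spin_unlock(&kvm->mmu_lock);
}

/* in kvm_mmu_get_page(): never reuse a page from a stale generation */
static bool sp_is_obsolete(struct kvm *kvm, struct kvm_mmu_page *sp)
{
        return sp->mmu_valid_gen != kvm->arch.mmu_valid_gen;
}

Stale pages can then be reclaimed lazily, e.g. from mmu_shrink or driven
by the counters you mention, instead of being walked and zapped one by
one under mmu-lock.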

> 
> And then have codepaths that nuke shadow pages break from the spinlock,

I think this is not needed any more. We can let the mmu_notifier use the
generation number to invalidate all shadow pages; then we only need to free
them after all vcpus are down and the mmu_notifier is unregistered - at that
point there is no lock contention, so we can free them directly.
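
In other words, something like this (again just a sketch with invented
function names, not the real notifier hooks or free path):

/* mmu_notifier invalidation: obsolete everything, free nothing */
static void kvm_mmu_invalidate_on_notify(struct kvm *kvm)
{
        spin_lock(&kvm->mmu_lock);
        kvm->arch.mmu_valid_gen++;
        spin_unlock(&kvm->mmu_lock);
}

/*
 * VM teardown, after the vcpus are destroyed and the mmu_notifier is
 * unregistered: nothing else can reach the mmu any more, so the stale
 * pages can be freed directly, without touching mmu-lock at all.
 */
static void kvm_mmu_free_obsolete_pages(struct kvm *kvm)
{
        struct kvm_mmu_page *sp, *node;

        list_for_each_entry_safe(sp, node,
                                 &kvm->arch.active_mmu_pages, link)
                kvm_mmu_free_page(sp);  /* illustrative free helper */
}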

> such as kvm_mmu_slot_remove_write_access does now (spin_needbreak).

BTW, to be honest, I do not think spin_needbreak is a good approach - it does
not fix the hot-lock contention, it just burns more cpu time to avoid
possible soft lockups.
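
For reference, the lock-break idiom being discussed looks roughly like the
following (sketched from memory, not the exact code in
kvm_mmu_slot_remove_write_access):

spin_lock(&kvm->mmu_lock);
for (gfn = slot->base_gfn; gfn < slot->base_gfn + slot->npages; gfn++) {
        /* ... write-protect the sptes mapping this gfn ... */

        /* give up the lock if another cpu is spinning on it */
        if (need_resched() || spin_needbreak(&kvm->mmu_lock))
                cond_resched_lock(&kvm->mmu_lock);
}
spin_unlock(&kvm->mmu_lock);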

In particular, zap-all-shadow-pages makes other vcpus fault and contend for
mmu-lock; whenever zap-all-shadow-pages releases mmu-lock and waits, those
vcpus rebuild their page tables. So zap-all-shadow-pages takes a long time to
finish, and in the worst case - under intensive vcpu and memory usage - it
may never complete at all.

I still think the right way to fix this kind of problem is to optimize
mmu-lock itself.

> That would also solve the current issues without using more memory 
> for pte_list_desc and without the delicate "Reset MMU cache" step.
> 
> What do you think?

I agree with your point, Marcelo! I will redesign it. Thank you!

