Message-ID: <51835087.8090605@linux.vnet.ibm.com>
Date: Fri, 03 May 2013 13:52:07 +0800
From: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
CC: gleb@...hat.com, avi.kivity@...il.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
takuya.yoshikawa@...il.com
Subject: Re: [PATCH v4 4/6] KVM: MMU: fast invalid all shadow pages
On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
>> +
>> +/*
>> + * Fast invalidate all shadow pages that belong to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused by the memslot
>> + * specified by @slot being deleted; in this case, we must ensure that
>> + * the rmap and lpage-info of @slot cannot be used after this function
>> + * returns.
>> + *
>> + * @slot == NULL means the invalidation due to other reasons, we need
>> + * not care rmap and lpage-info since they are still valid after calling
>> + * the function.
>> + */
>> +void kvm_mmu_invalid_memslot_pages(struct kvm *kvm,
>> + struct kvm_memory_slot *slot)
>> +{
>> + spin_lock(&kvm->mmu_lock);
>> + kvm->arch.mmu_valid_gen++;
>> +
>> + /*
>> + * All shadow pages are now invalid, so reset the large page info;
>> + * then we can safely destroy the memslot.  This is also good for
>> + * large page usage.
>> + */
>> + kvm_clear_all_lpage_info(kvm);
>
> Xiao,
>
> I understood it was agreed that a simple mmu_lock lock-break, while
> avoiding the zapping of newly instantiated pages, upon a
>
>         if (spin_needbreak(&kvm->mmu_lock))
>                 cond_resched_lock(&kvm->mmu_lock);
>
> cycle was enough as a first step? And that root zapping, along with
> measurements, would be introduced later.
>
> https://lkml.org/lkml/2013/4/22/544
Yes, it is.
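
To make that concrete, the first-step loop would look roughly like the
sketch below. Only kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page()
are the existing KVM helpers; the function name and the exact loop shape
here are illustrative, not this patch's actual code:

/*
 * Illustrative sketch only: walk the active shadow pages, skip pages
 * created after the mmu_valid_gen bump (newly instantiated pages carry
 * the new generation), and break the lock whenever someone is waiting.
 */
static void zap_obsolete_pages_sketch(struct kvm *kvm)
{
        struct kvm_mmu_page *sp, *node;
        LIST_HEAD(invalid_list);

        spin_lock(&kvm->mmu_lock);
restart:
        list_for_each_entry_safe(sp, node,
                                 &kvm->arch.active_mmu_pages, link) {
                /* Skip pages instantiated after the generation bump. */
                if (sp->mmu_valid_gen == kvm->arch.mmu_valid_gen)
                        continue;

                /* Lock break: flush pending zaps, then yield the lock. */
                if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                        kvm_mmu_commit_zap_page(kvm, &invalid_list);
                        cond_resched_lock(&kvm->mmu_lock);
                        goto restart;
                }

                if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
                        goto restart;
        }

        kvm_mmu_commit_zap_page(kvm, &invalid_list);
        spin_unlock(&kvm->mmu_lock);
}
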
See the changelog in 0/6:
" we use lock-break technique to zap all sptes linked on the
invalid rmap, it is not very effective but good for the first step."
Thanks!