Message-ID: <4E0B0997.4090206@cn.fujitsu.com>
Date: Wed, 29 Jun 2011 19:16:39 +0800
From: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To: Avi Kivity <avi@...hat.com>
CC: Marcelo Tosatti <mtosatti@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH v2 19/22] KVM: MMU: lockless walking shadow page table
On 06/29/2011 05:16 PM, Avi Kivity wrote:
> On 06/22/2011 05:35 PM, Xiao Guangrong wrote:
>> Use RCU to protect shadow page tables from being freed, so that we can
>> safely walk them without the lock; the walk needs to be fast and is
>> needed by the mmio page fault path.
>>
>
>> static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>> 				    struct list_head *invalid_list)
>> {
>> @@ -1767,6 +1874,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>>
>>  	kvm_flush_remote_tlbs(kvm);
>>
>> +	if (atomic_read(&kvm->arch.reader_counter)) {
>> +		kvm_mmu_isolate_pages(invalid_list);
>> +		sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
>> +		list_del_init(invalid_list);
>> +		call_rcu(&sp->rcu, free_pages_rcu);
>> +		return;
>> +	}
>> +
>
> I think we should do this unconditionally.  The cost of ping-ponging the
> shared cache line containing reader_counter will increase with large SMP
> counts.  On the other hand, zap_page is very rare, so it can be a little
> slower.  Also, fewer code paths = easier to understand.
>
On soft mmu, zap_page is called very frequently; doing this unconditionally
causes a performance regression in my test.
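
For reference, the reader side of this scheme looks roughly like the sketch
below.  This is a simplified illustration rather than the actual patch:
do_lockless_walk() is a hypothetical placeholder, and the exact barrier
placement is an assumption.

static u64 walk_shadow_page_lockless(struct kvm_vcpu *vcpu, u64 addr)
{
	u64 spte;

	rcu_read_lock();

	/*
	 * Tell the zap side that a lockless walker is in flight; this
	 * pairs with the atomic_read() in kvm_mmu_commit_zap_page().
	 */
	atomic_inc(&vcpu->kvm->arch.reader_counter);
	smp_mb();

	spte = do_lockless_walk(vcpu, addr);	/* hypothetical helper */

	smp_mb();
	atomic_dec(&vcpu->kvm->arch.reader_counter);

	rcu_read_unlock();
	return spte;
}

Every walk writes the shared reader_counter cache line, which is the SMP
cost Avi refers to; his alternative is to drop the counter and always take
the call_rcu() path in kvm_mmu_commit_zap_page().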