Message-ID: <5199E907.4010700@linux.vnet.ibm.com>
Date:	Mon, 20 May 2013 17:12:39 +0800
From:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To:	Gleb Natapov <gleb@...hat.com>
CC:	avi.kivity@...il.com, mtosatti@...hat.com, pbonzini@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v6 3/7] KVM: MMU: fast invalidate all pages

On 05/19/2013 06:04 PM, Gleb Natapov wrote:

>> +		/*
>> +		 * Do not repeatedly zap a root page to avoid unnecessary
>> +		 * KVM_REQ_MMU_RELOAD, otherwise we may not be able to make
>> +		 * progress:
>> +		 *    vcpu 0                        vcpu 1
>> +		 *                         call vcpu_enter_guest():
>> +		 *                            1): handle KVM_REQ_MMU_RELOAD
>> +		 *                                and acquire mmu-lock to
>> +		 *                                load mmu
>> +		 * repeat:
>> +		 *    1): zap root page and
>> +		 *        send KVM_REQ_MMU_RELOAD
>> +		 *
>> +		 *    2): if (cond_resched_lock(mmu-lock))
>> +		 *
>> +		 *                            2): hold mmu-lock and load mmu
>> +		 *
>> +		 *                            3): see the KVM_REQ_MMU_RELOAD bit
>> +		 *                                in vcpu->requests is set,
>> +		 *                                then return 1 to call
>> +		 *                                vcpu_enter_guest() again.
>> +		 *            goto repeat;
>> +		 *
>> +		 */
> I am not sure why the above scenario will prevent us from progressing.
> There is a finite number of root pages with an invalid generation number, so
> eventually we will zap them all and vcpu1 will stop seeing the
> KVM_REQ_MMU_RELOAD request.

This patch does not yet "zap pages in batch", so kvm_zap_obsolete_pages() can
end up zapping just the invalid root pages and then lock-breaking, due to the
lock contention on the path that handles KVM_REQ_MMU_RELOAD.

Yes, once "zap pages in batch" is applied, this issue no longer exists. I
should move this explanation into that patch.
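
For reference, the loop that this comment sits in looks roughly like the
sketch below (a simplified sketch, not the exact hunk from this patch;
is_obsolete_sp() is the mmu_valid_gen check this series adds):

	static void kvm_zap_obsolete_pages(struct kvm *kvm)
	{
		struct kvm_mmu_page *sp, *node;
		LIST_HEAD(invalid_list);

	restart:
		list_for_each_entry_safe_reverse(sp, node,
			  &kvm->arch.active_mmu_pages, link) {
			/* Only pages with a stale generation are obsolete. */
			if (!is_obsolete_sp(kvm, sp))
				break;

			/*
			 * Skip root pages that are already invalid but still
			 * in use; zapping them again would only re-send
			 * KVM_REQ_MMU_RELOAD (the scenario in the comment
			 * above).
			 */
			if (sp->role.invalid && sp->root_count)
				continue;

			/* Lock-break: let waiters (e.g. vcpu 1) take mmu-lock. */
			if (cond_resched_lock(&kvm->mmu_lock))
				goto restart;

			if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
				goto restart;
		}

		kvm_mmu_commit_zap_page(kvm, &invalid_list);
	}

The "sp->role.invalid && sp->root_count" check is what the comment justifies:
without it, the loop could keep re-zapping the same in-use root page,
re-sending KVM_REQ_MMU_RELOAD, and then dropping mmu-lock at the
cond_resched_lock() point without making progress.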

> 
> This check here prevents unnecessary KVM_REQ_MMU_RELOAD, as you say, but
> this raises the question: why don't we check for sp->role.invalid in
> kvm_mmu_prepare_zap_page() before calling kvm_reload_remote_mmus()?
> Something like this:
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 40d7b2d..d2ae3a4 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2081,7 +2081,8 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
>  		kvm_mod_used_mmu_pages(kvm, -1);
>  	} else {
>  		list_move(&sp->link, &kvm->arch.active_mmu_pages);
> -		kvm_reload_remote_mmus(kvm);
> +		if (!sp->role.invalid)
> +			kvm_reload_remote_mmus(kvm);
>  	}
> 
>  	sp->role.invalid = 1;

Yes, it is better.

> 
> Actually, we can add a check for is_obsolete_sp() there too, since
> kvm_mmu_invalidate_all_pages() already calls kvm_reload_remote_mmus()
> after incrementing mmu_valid_gen.

Yes, I agree.
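
Combining both suggestions, the tail of kvm_mmu_prepare_zap_page() would end
up looking something like this (a sketch on top of the diff above):

	} else {
		list_move(&sp->link, &kvm->arch.active_mmu_pages);

		/*
		 * Skip the reload request when it would be redundant: an
		 * already-invalid page triggered a reload when it was first
		 * zapped, and an obsolete page is covered by the
		 * kvm_reload_remote_mmus() that kvm_mmu_invalidate_all_pages()
		 * issues after bumping mmu_valid_gen.
		 */
		if (!sp->role.invalid && !is_obsolete_sp(kvm, sp))
			kvm_reload_remote_mmus(kvm);
	}

	sp->role.invalid = 1;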

> 
> Or do I miss something?

No, you are right. ;)

