Message-ID: <7067bec0-8a15-1a18-481e-e2ea79575dcf@linux.alibaba.com>
Date: Sat, 4 Sep 2021 00:25:27 +0800
From: Lai Jiangshan <laijs@...ux.alibaba.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Avi Kivity <avi@...hat.com>, kvm@...r.kernel.org
Subject: Re: [PATCH 2/7] KVM: X86: Synchronize the shadow pagetable before
link it
On 2021/9/4 00:06, Sean Christopherson wrote:
>
> trace_get_page:
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 50ade6450ace..2ff123ec0d64 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -704,6 +704,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> access = gw->pt_access[it.level - 2];
> sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
> it.level-1, false, access);
> + if (sp->unsync_children &&
> + mmu_sync_children(vcpu, sp, false))
> + return RET_PF_RETRY;
It was like my first (unsent) fix: just return RET_PF_RETRY when mmu_lock is broken.
But then I thought it would be better to retry the fetch directly, rather than re-enter
the guest, when the conditions are still valid/unchanged, so that we avoid repeating the
whole guest page-table walk and the GUP().  Admittedly the code does not check all the
conditions yet, such as a pending interrupt event (we can add that too).
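
Roughly what I have in mind, as an illustrative sketch only (the retry_fetch label and
its placement are hypothetical, not taken from the posted patch; unrelated context in
FNAME(fetch) is elided with "..."):

	/*
	 * Illustrative sketch: if mmu_sync_children() had to drop
	 * mmu_lock, redo the shadow-page walk instead of returning
	 * RET_PF_RETRY.  The cached guest walk in 'gw' is unchanged,
	 * so only the shadow walk needs to be repeated, not the guest
	 * page-table walk or the GUP().
	 */
retry_fetch:
	for (shadow_walk_init(&it, vcpu, fault->addr);
	     shadow_walk_okay(&it) && it.level > gw->level;
	     shadow_walk_next(&it)) {
		...
		sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
				      it.level - 1, false, access);
		if (sp->unsync_children &&
		    mmu_sync_children(vcpu, sp, false))
			/* mmu_lock was dropped and reacquired */
			goto retry_fetch;
		...
	}
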
I think it is a good design to allow breaking mmu_lock when the MMU is handling
heavy work.
> }
>
> /*
> --
>