Message-ID: <c8ccc8f1-300a-09be-db6b-df2a1dedd4cf@loongson.cn>
Date: Wed, 16 Aug 2023 11:44:29 +0800
From: bibo mao <maobibo@...ngson.cn>
To: Sean Christopherson <seanjc@...gle.com>,
Yan Zhao <yan.y.zhao@...el.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, pbonzini@...hat.com, mike.kravetz@...cle.com,
apopple@...dia.com, jgg@...dia.com, rppt@...nel.org,
akpm@...ux-foundation.org, kevin.tian@...el.com, david@...hat.com
Subject: Re: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed
protected for NUMA migration
On 2023/8/16 10:43, bibo mao wrote:
>
>
> On 2023/8/15 22:50, Sean Christopherson wrote:
>> On Tue, Aug 15, 2023, Yan Zhao wrote:
>>> On Mon, Aug 14, 2023 at 09:40:44AM -0700, Sean Christopherson wrote:
>>>>>> Note, I'm assuming secondary MMUs aren't allowed to map swap entries...
>>>>>>
>>>>>> Compile tested only.
>>>>>
>>>>> I don't find a matching end to each
>>>>> mmu_notifier_invalidate_range_start_nonblock().
>>>>
>>>> It pairs with the existing call to mmu_notifier_invalidate_range_end() in change_pmd_range():
>>>>
>>>> if (range.start)
>>>> mmu_notifier_invalidate_range_end(&range);
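>>>>
>>>> For reference, the start() side this pairs with is roughly the following
>>>> (a simplified sketch of change_pmd_range() in mm/mprotect.c, details
>>>> elided, so the real code may differ):
>>>>
>>>> 	range.start = 0;
>>>> 	do {
>>>> 		...
>>>> 		/* Invoke the mmu notifier only once a populated pmd is seen. */
>>>> 		if (!range.start) {
>>>> 			mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA,
>>>> 						0, vma->vm_mm, addr, end);
>>>> 			mmu_notifier_invalidate_range_start(&range);
>>>> 		}
>>>> 		...
>>>> 	} while (pmd++, addr = next, addr != end);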
>>> No, it doesn't work for the mmu_notifier_invalidate_range_start() sent in change_pte_range(),
>>> if we only want the range to include pages successfully set to PROT_NONE.
>>
>> Precise invalidation was a non-goal for my hack-a-patch. The intent was purely
>> to defer invalidation until it was actually needed, but still perform only a
>> single notification so as to batch the TLB flushes, e.g. the start() call still
>> used the original @end.
>>
>> The idea was to play nice with the scenario where nothing in a VMA could be migrated.
>> It was completely untested though, so it may not have actually done anything to reduce
>> the number of pointless invalidations.
> For the NUMA-balancing scenario, can the original page still be used by the application
> even if the pte is changed to PROT_NONE? If it can, maybe we can zap the shadow MMU and flush the TLB
Since there is a kvm_mmu_notifier_change_pte notification when the NUMA page is
replaced with a new page, what I mean is: can the original page still be used by
the application after the pte is changed to PROT_NONE but before it is replaced
with the new page?
And for the primary MMU, the TLB is flushed after the pte is changed to PROT_NONE
and after the mmu_notifier_invalidate_range_end notification for the secondary MMU.
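
To make the window I am asking about explicit, my understanding of the
sequence is roughly the following (a simplified sketch; helper names as in
mm/mprotect.c and mm/memory.c):

	/* change_pte_range(), called with MM_CP_PROT_NUMA: */
	oldpte = ptep_modify_prot_start(vma, addr, pte);
	ptent = pte_modify(oldpte, PAGE_NONE);	/* pte_protnone() is now true */
	ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);

	/*
	 * The window in question: the original page is still mapped, but with
	 * PROT_NONE, so any access faults into do_numa_page(), which either
	 * restores the old protection on the original page or migrates it to
	 * a new page on the target node.
	 */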
Regards
Bibo Mao
> in the mmu_notifier_invalidate_range_end notification with a precise range; the
> range can be the intersection of the mmu_gather range and the mmu_notifier_range.
>
> Regards
> Bibo Mao
>>
>>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>>> index 9e4cd8b4a202..f29718a16211 100644
>>>> --- a/arch/x86/kvm/mmu/mmu.c
>>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>>> @@ -4345,6 +4345,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>>>> if (unlikely(!fault->slot))
>>>> return kvm_handle_noslot_fault(vcpu, fault, access);
>>>>
>>>> + if (mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva))
>>>> + return RET_PF_RETRY;
>>>> +
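>>>>
>>>> For context, mmu_invalidate_retry_hva() does roughly the following (a
>>>> trimmed sketch of the include/linux/kvm_host.h helper, quoting from
>>>> memory, so details may differ):
>>>>
>>>> 	static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
>>>> 						   unsigned long mmu_seq,
>>>> 						   unsigned long hva)
>>>> 	{
>>>> 		/*
>>>> 		 * Retry if an invalidation covering this hva is in flight,
>>>> 		 * or if any invalidation completed since mmu_seq was taken.
>>>> 		 */
>>>> 		if (unlikely(kvm->mmu_invalidate_in_progress) &&
>>>> 		    hva >= kvm->mmu_invalidate_range_start &&
>>>> 		    hva < kvm->mmu_invalidate_range_end)
>>>> 			return 1;
>>>> 		if (kvm->mmu_invalidate_seq != mmu_seq)
>>>> 			return 1;
>>>> 		return 0;
>>>> 	}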
>>> This can effectively reduce the remote flush IPIs a lot!
>>> One nit: maybe an rmb() or READ_ONCE() is required for kvm->mmu_invalidate_range_start
>>> and kvm->mmu_invalidate_range_end.
>>> Otherwise, I'm somewhat worried about constant false positives and retries.
>>
>> If anything, this needs a READ_ONCE() on mmu_invalidate_in_progress. The ranges
>> aren't touched when mmu_invalidate_in_progress goes to zero, so ensuring they
>> are reloaded wouldn't do anything. The key to making forward progress is seeing
>> that there is no in-progress invalidation.
>>
>> I did consider adding said READ_ONCE(), but practically speaking, constant false
>> positives are impossible. KVM will re-enter the guest when retrying, and there
>> is zero chance of the compiler avoiding reloads across VM-Enter+VM-Exit.
>>
>> I suppose in theory we might someday differentiate between "retry because a different
>> vCPU may have fixed the fault" and "retry because there's an in-progress invalidation",
>> and not bother re-entering the guest for the latter, e.g. have it try to yield
>> instead.
>>
>> All that said, READ_ONCE() on mmu_invalidate_in_progress should effectively be a
>> nop, so it wouldn't hurt to be paranoid in this case.
>>
>> Hmm, at that point, it probably makes sense to add a READ_ONCE() for mmu_invalidate_seq
>> too, e.g. so that a sufficiently clever compiler doesn't completely optimize away
>> the check. Losing the check wouldn't be problematic (false negatives are fine,
>> especially on that particular check), but the generated code would *look* buggy.
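>>
>> I.e., the paranoid variant of the check would look something like this
>> (untested sketch):
>>
>> 	if (unlikely(READ_ONCE(kvm->mmu_invalidate_in_progress)) &&
>> 	    hva >= kvm->mmu_invalidate_range_start &&
>> 	    hva < kvm->mmu_invalidate_range_end)
>> 		return 1;
>> 	if (READ_ONCE(kvm->mmu_invalidate_seq) != mmu_seq)
>> 		return 1;
>> 	return 0;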