Message-ID: <jpg1srvtf4h.fsf@linux.bootlegged.copy>
Date: Thu, 11 May 2017 14:36:30 -0400
From: Bandan Das <bsd@...hat.com>
To: "Huang, Kai" <kai.huang@...ux.intel.com>
Cc: kvm@...r.kernel.org, pbonzini@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] kvm: x86: Add a hook for arch specific dirty logging emulation

"Huang, Kai" <kai.huang@...ux.intel.com> writes:

...
> Hi Bandan,
>
> I was just suggesting. You and Paolo still make the decision :)
Sure Kai, I don't mind the name change at all.
The maintainer has already picked this up, and I don't think
the rename alone is worth submitting a follow-up.
Thank you very much for the review! :)
Bandan
> Thanks,
> -Kai
>>
>> Bandan
>>
>>> Thanks,
>>> -Kai
>>>
>>>> +
>>>> /* pmu operations of sub-arch */
>>>> const struct kvm_pmu_ops *pmu_ops;
>>>>
>>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>>> index 5586765..5d3376f 100644
>>>> --- a/arch/x86/kvm/mmu.c
>>>> +++ b/arch/x86/kvm/mmu.c
>>>> @@ -1498,6 +1498,21 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>>>> kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
>>>> }
>>>>
>>>> +/**
>>>> + * kvm_arch_write_log_dirty - emulate dirty page logging
>>>> + * @vcpu: Guest mode vcpu
>>>> + *
>>>> + * Emulate arch specific page modification logging for the
>>>> + * nested hypervisor
>>>> + */
>>>> +int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu)
>>>> +{
>>>> + if (kvm_x86_ops->write_log_dirty)
>>>> + return kvm_x86_ops->write_log_dirty(vcpu);
>>>> +
>>>> + return 0;
>>>> +}
>>>
>>> kvm_nested_arch_write_log_dirty?
>>>
>>>> +
>>>> bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
>>>> struct kvm_memory_slot *slot, u64 gfn)
>>>> {
>>>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>>>> index d8ccb32..2797580 100644
>>>> --- a/arch/x86/kvm/mmu.h
>>>> +++ b/arch/x86/kvm/mmu.h
>>>> @@ -202,4 +202,5 @@ void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
>>>> void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
>>>> bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
>>>> struct kvm_memory_slot *slot, u64 gfn);
>>>> +int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
>>>> #endif
>>>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>>>> index 314d207..5624174 100644
>>>> --- a/arch/x86/kvm/paging_tmpl.h
>>>> +++ b/arch/x86/kvm/paging_tmpl.h
>>>> @@ -226,6 +226,10 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
>>>> if (level == walker->level && write_fault &&
>>>> !(pte & PT_GUEST_DIRTY_MASK)) {
>>>> trace_kvm_mmu_set_dirty_bit(table_gfn, index, sizeof(pte));
>>>> +#if PTTYPE == PTTYPE_EPT
>>>> + if (kvm_arch_write_log_dirty(vcpu))
>>>> + return -EINVAL;
>>>> +#endif
>>>> pte |= PT_GUEST_DIRTY_MASK;
>>>> }
>>>> if (pte == orig_pte)
>>>>
>>