Message-ID: <ab4a234e-e8fc-d105-1cf6-7cb07a077182@linux.intel.com>
Date: Thu, 11 May 2017 11:23:16 +1200
From: "Huang, Kai" <kai.huang@...ux.intel.com>
To: Bandan Das <bsd@...hat.com>
Cc: kvm@...r.kernel.org, pbonzini@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] kvm: x86: Add a hook for arch specific dirty logging emulation
On 5/11/2017 3:53 AM, Bandan Das wrote:
> Hi Kai,
>
> "Huang, Kai" <kai.huang@...ux.intel.com> writes:
>
>> On 5/6/2017 7:25 AM, Bandan Das wrote:
>>> When KVM updates accessed/dirty bits, this hook can be used
>>> to invoke an arch specific function that implements/emulates
>>> dirty logging such as PML.
>>>
>>> Signed-off-by: Bandan Das <bsd@...hat.com>
>>> ---
>>> arch/x86/include/asm/kvm_host.h | 2 ++
>>> arch/x86/kvm/mmu.c | 15 +++++++++++++++
>>> arch/x86/kvm/mmu.h | 1 +
>>> arch/x86/kvm/paging_tmpl.h | 4 ++++
>>> 4 files changed, 22 insertions(+)
>>>
>>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>>> index f5bddf92..9c761fe 100644
>>> --- a/arch/x86/include/asm/kvm_host.h
>>> +++ b/arch/x86/include/asm/kvm_host.h
>>> @@ -1020,6 +1020,8 @@ struct kvm_x86_ops {
>>> void (*enable_log_dirty_pt_masked)(struct kvm *kvm,
>>> struct kvm_memory_slot *slot,
>>> gfn_t offset, unsigned long mask);
>>> + int (*write_log_dirty)(struct kvm_vcpu *vcpu);
>>
>> Hi,
>>
>> Thanks for adding PML to nested support!
>>
>> IMHO this callback is only used to write L2's dirty GPAs to L1's PML
>> buffer, so it's probably better to change the name to something like
>> nested_write_log_dirty.
>
> The name was meant more to signify what it does, i.e. write the dirty log,
> rather than where in the hierarchy it's being used :) But I guess a nested_
> prefix doesn't hurt either.
Hi Bandan,

I was just suggesting; the decision is still yours and Paolo's :)

Thanks,
-Kai
>
> Bandan
>
>> Thanks,
>> -Kai
>>
>>> +
>>> /* pmu operations of sub-arch */
>>> const struct kvm_pmu_ops *pmu_ops;
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 5586765..5d3376f 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -1498,6 +1498,21 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>>> kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
>>> }
>>>
>>> +/**
>>> + * kvm_arch_write_log_dirty - emulate dirty page logging
>>> + * @vcpu: Guest mode vcpu
>>> + *
>>> + * Emulate arch specific page modification logging for the
>>> + * nested hypervisor
>>> + */
>>> +int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu)
>>> +{
>>> + if (kvm_x86_ops->write_log_dirty)
>>> + return kvm_x86_ops->write_log_dirty(vcpu);
>>> +
>>> + return 0;
>>> +}
>>
>> kvm_nested_arch_write_log_dirty?
>>
>>> +
>>> bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
>>> struct kvm_memory_slot *slot, u64 gfn)
>>> {
>>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>>> index d8ccb32..2797580 100644
>>> --- a/arch/x86/kvm/mmu.h
>>> +++ b/arch/x86/kvm/mmu.h
>>> @@ -202,4 +202,5 @@ void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
>>> void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
>>> bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
>>> struct kvm_memory_slot *slot, u64 gfn);
>>> +int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
>>> #endif
>>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>>> index 314d207..5624174 100644
>>> --- a/arch/x86/kvm/paging_tmpl.h
>>> +++ b/arch/x86/kvm/paging_tmpl.h
>>> @@ -226,6 +226,10 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
>>> if (level == walker->level && write_fault &&
>>> !(pte & PT_GUEST_DIRTY_MASK)) {
>>> trace_kvm_mmu_set_dirty_bit(table_gfn, index, sizeof(pte));
>>> +#if PTTYPE == PTTYPE_EPT
>>> + if (kvm_arch_write_log_dirty(vcpu))
>>> + return -EINVAL;
>>> +#endif
>>> pte |= PT_GUEST_DIRTY_MASK;
>>> }
>>> if (pte == orig_pte)
>>>
>
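
For illustration only, here is a rough sketch of how a VMX backend might wire
up the new ->write_log_dirty() hook to emulate PML on behalf of a nested (L1)
hypervisor. The helper and field names below (nested_cpu_has_pml(),
vmcs12->pml_address, vmcs12->guest_pml_index, vmx->nested.pml_full,
last_dirty_gpa) are assumptions for the sketch, not taken from this patch;
the actual implementation belongs to the rest of the series.

static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	struct vmcs12 *vmcs12;
	gpa_t gpa;
	int offset;

	/* Only relevant while running an L2 guest. */
	if (!is_guest_mode(vcpu))
		return 0;

	vmcs12 = get_vmcs12(vcpu);

	/* L1 did not enable PML for this L2: nothing to emulate. */
	if (!nested_cpu_has_pml(vmcs12))
		return 0;

	/* L1's PML buffer is full: report a PML-full condition to L1. */
	if (vmcs12->guest_pml_index >= PML_ENTITY_NUM) {
		vmx->nested.pml_full = true;
		return 1;
	}

	/* GPA that L2 just dirtied, recorded at EPT fault time (hypothetical field). */
	gpa = vcpu->arch.last_dirty_gpa & ~0xFFFull;

	/* Append the GPA to L1's PML buffer; PML indices count downwards. */
	offset = vmcs12->guest_pml_index * sizeof(gpa);
	if (kvm_vcpu_write_guest_page(vcpu, gpa_to_gfn(vmcs12->pml_address),
				      &gpa, offset, sizeof(gpa)))
		return 0;
	vmcs12->guest_pml_index--;

	return 0;
}

In this sketch, a non-zero return value propagates through
kvm_arch_write_log_dirty() to the EPT page-table walker above, which aborts
the walk so that the PML-full condition can be reported to L1 instead of
silently setting the dirty bit.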