Message-ID: <207465973.3347707.1502523303096.JavaMail.zimbra@redhat.com>
Date: Sat, 12 Aug 2017 03:35:03 -0400 (EDT)
From: Paolo Bonzini <pbonzini@...hat.com>
To: Peng Hao <peng.hao2@....com.cn>
Cc: rkrcmar@...hat.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] kvm: x86: reduce rtc 0x70 access vm-exit time

----- Original Message -----
> From: "Peng Hao" <peng.hao2@....com.cn>
> To: pbonzini@...hat.com, rkrcmar@...hat.com
> Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, "Peng Hao" <peng.hao2@....com.cn>
> Sent: Saturday, August 12, 2017 2:06:51 PM
> Subject: [RFC PATCH] kvm: x86: reduce rtc 0x70 access vm-exit time
>
> Some versions of Windows guests access the RTC frequently because they use
> the RTC as the system tick. A guest accesses the RTC like this: it writes a
> register index to port 0x70, then writes or reads the data through port
> 0x71 (a short sketch of this pattern follows the quoted message). The write
> to port 0x70 only selects the index and does nothing else, so we can use
> coalesced MMIO to handle this access pattern and reduce the VM-exit time.
> Without this patch, the VM-exit time of accessing RTC port 0x70, measured
> with perf tools (guest OS: Windows 7 64-bit):
>
> IO Port Access  Samples  Samples%  Time%   Min Time  Max Time  Avg time
> 0x70:POUT       86       30.99%    74.59%  9us       29us      10.75us (+- 3.41%)
>
> With this patch:
>
> IO Port Access  Samples  Samples%  Time%   Min Time  Max Time  Avg time
> 0x70:POUT       106      32.02%    29.47%  0us       10us      1.57us (+- 7.38%)
>
> This patch is one part of the optimization of RTC port 0x70 accesses; the
> other part is in QEMU.
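
For readers who don't have the CMOS/RTC interface in their head, the pattern
described above looks roughly like this (an illustrative sketch of the guest's
traffic, not code from any actual guest; outb/inb from <sys/io.h> are used for
brevity and would need ioperm() to run):

#include <sys/io.h>	/* outb/inb; illustrative only */

/* Select a CMOS/RTC register by writing its index to port 0x70, then
 * transfer the data through port 0x71.  The index write has no other
 * side effect, which is what makes it safe to coalesce. */
static unsigned char cmos_read(unsigned char index)
{
	outb(index, 0x70);	/* with this patch: queued in-kernel, no userspace exit */
	return inb(0x71);	/* still a normal PIO exit handled in userspace */
}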
Looks good as a proof of concept. However, you need documentation changes,
and you need to expose this using a capability. Also, the "pad" field can
be renamed to "pio".
Paolo
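
One possible shape for that uapi change, sketched only (the union and the
capability name are assumptions about what a v2 could look like, not something
this patch defines):

/* Same ABI layout as today; old userspace that leaves the field zero
 * keeps the current MMIO-only behaviour. */
struct kvm_coalesced_mmio_zone {
	__u64 addr;
	__u32 size;
	union {
		__u32 pad;
		__u32 pio;	/* nonzero: register the zone on KVM_PIO_BUS */
	};
};

/* ...plus a new capability for userspace to probe with
 * KVM_CHECK_EXTENSION before setting the flag, e.g. a (hypothetical
 * at this point) KVM_CAP_COALESCED_PIO. */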
> Signed-off-by: Peng Hao <peng.hao2@....com.cn>
> ---
> virt/kvm/coalesced_mmio.c | 14 +++++++++++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
> index 571c1ce..f640c2f 100644
> --- a/virt/kvm/coalesced_mmio.c
> +++ b/virt/kvm/coalesced_mmio.c
> @@ -82,6 +82,7 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
>  	ring->coalesced_mmio[ring->last].phys_addr = addr;
>  	ring->coalesced_mmio[ring->last].len = len;
>  	memcpy(ring->coalesced_mmio[ring->last].data, val, len);
> +	ring->coalesced_mmio[ring->last].pad = dev->zone.pad;
>  	smp_wmb();
>  	ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
>  	spin_unlock(&dev->kvm->ring_lock);
> @@ -148,8 +149,12 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
>  	dev->zone = *zone;
>
>  	mutex_lock(&kvm->slots_lock);
> -	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, zone->addr,
> -				      zone->size, &dev->dev);
> +	if (zone->pad == 0)
> +		ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, zone->addr,
> +					      zone->size, &dev->dev);
> +	else
> +		ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, zone->addr,
> +					      zone->size, &dev->dev);
>  	if (ret < 0)
>  		goto out_free_dev;
>  	list_add_tail(&dev->list, &kvm->coalesced_zones);
> @@ -173,7 +178,10 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
>
>  	list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list)
>  		if (coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
> -			kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &dev->dev);
> +			if (zone->pad == 0)
> +				kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &dev->dev);
> +			else
> +				kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &dev->dev);
>  			kvm_iodevice_destructor(&dev->dev);
>  		}
>
> --
> 1.8.3.1
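
To make the QEMU half mentioned in the commit message concrete, the userspace
side could look roughly like the sketch below: register port 0x70 as a
coalesced zone using the pad-as-PIO-flag convention this patch introduces, and
drain the ring the way QEMU's existing kvm_flush_coalesced_mmio_buffer() does.
handle_pio_write() and handle_mmio_write() are hypothetical helpers.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical helpers that replay one queued write into the machine model. */
void handle_pio_write(__u64 port, const __u8 *data, __u32 len);
void handle_mmio_write(__u64 addr, const __u8 *data, __u32 len);

/* Register guest port 0x70 (one byte) as a coalesced zone; a nonzero
 * "pad" asks the kernel, per this patch, to put it on KVM_PIO_BUS. */
static int register_rtc_index_zone(int vm_fd)
{
	struct kvm_coalesced_mmio_zone zone = {
		.addr = 0x70,
		.size = 1,
		.pad  = 1,
	};

	return ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone);
}

/* Drain the ring before touching state that depends on the queued writes.
 * With this patch, each entry's "pad" tells us which bus it came from. */
static void flush_coalesced_ring(struct kvm_coalesced_mmio_ring *ring,
				 unsigned int max_entries)
{
	while (ring->first != ring->last) {
		struct kvm_coalesced_mmio *ent =
			&ring->coalesced_mmio[ring->first];

		if (ent->pad)	/* queued from KVM_PIO_BUS */
			handle_pio_write(ent->phys_addr, ent->data, ent->len);
		else
			handle_mmio_write(ent->phys_addr, ent->data, ent->len);

		__sync_synchronize();	/* pairs with the kernel's smp_wmb() */
		ring->first = (ring->first + 1) % max_entries;
	}
}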