Message-ID: <BLU436-SMTP11063B4FF1EBA21F57E3C79806D0@phx.gbl>
Date: Sun, 30 Aug 2015 07:55:02 +0800
From: Wanpeng Li <wanpeng.li@...mail.com>
To: Peter Kieser <peter@...ser.ca>
CC: Paolo Bonzini <pbonzini@...hat.com>,
David Matlack <dmatlack@...gle.com>, kvm <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 0/3] KVM: Dynamic Halt-Polling
On 8/30/15 6:26 AM, Peter Kieser wrote:
> Thanks, Wanpeng. Applied this to Linux 3.18 and I'm seeing much higher
> CPU usage (200%) for the qemu 2.4.0 process with a Windows 10 x64
> guest. qemu parameters:
Thanks for the report. Is Paolo's patch "kvm: add halt_poll_ns module
parameter" applied to your 3.18 tree? If it is, the file
/sys/module/kvm/parameters/halt_poll_ns should exist. Btw, did you also
test a Linux guest?
Regards,
Wanpeng Li
>
> qemu-system-x86_64 -enable-kvm -name arwan-20150704 -S -machine
> pc-q35-2.2,accel=kvm,usb=off -cpu
> Haswell,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 -m 8192
> -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid
> 7c2fc02d-2798-4fc9-ad04-db5f1af92723 -no-user-config -nodefaults
> -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/arwan-20150704.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
> -no-shutdown -boot strict=on -device
> i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device
> pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 -device
> nec-usb-xhci,id=usb1,bus=pci.2,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x5 -drive
> file=/dev/mapper/crypt-arwan-20150704,if=none,id=drive-virtio-disk0,format=raw,cache=none,discard=unmap,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.2,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
> -drive
> file=/usr/share/virtio-win/virtio-win.iso,if=none,media=cdrom,id=drive-sata0-0-2,readonly=on,format=raw
> -device
> ide-cd,bus=ide.2,drive=drive-sata0-0-2,id=sata0-0-2,bootindex=1
> -netdev tap,fds=31:32:33:34,id=hostnet0,vhost=on,vhostfds=35:36:37:38
> -device
> virtio-net-pci,guest_csum=off,guest_tso4=off,guest_tso6=off,mq=on,vectors=10,netdev=hostnet0,id=net0,mac=52:54:00:f3:6b:c4,bus=pci.2,addr=0x2
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/arwan-20150704.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel1,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0
> -vnc 127.0.0.1:4 -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pcie.0,addr=0x1
> -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x1 -msg
> timestamp=on
>
> When I revert the patch, qemu shows 17% CPU usage on the host. Thoughts?
>
> -Peter
>
> On 2015-08-29 3:21 PM, Wanpeng Li wrote:
>> Hi Peter,
>> On 8/30/15 5:18 AM, Peter Kieser wrote:
>>> Hi Wanpeng,
>>>
>>> Do I need to set any module parameters to use your patch, or should
>>> halt_poll_ns automatically tune with just your patch series applied?
>>>
>>
>> You don't need any module parameters.
>>
>> Regards,
>> Wanpeng Li
>>
>>> Thanks.
>>>
>>> On 2015-08-27 2:47 AM, Wanpeng Li wrote:
>>>> v3 -> v4:
>>>> * bring back growing vcpu->halt_poll_ns when an interrupt arrives
>>>> and shrinking it when an idle VCPU is detected
>>>>
>>>> v2 -> v3:
>>>> * grow/shrink vcpu->halt_poll_ns by *halt_poll_ns_grow or
>>>> /halt_poll_ns_shrink
>>>> * drop the macros and hard-code the numbers in the param
>>>> definitions
>>>> * update the comments "5-7 us"
>>>> * remove halt_poll_ns_max and use halt_poll_ns as the max
>>>> halt_poll_ns time, vcpu->halt_poll_ns starts at zero
>>>> * drop the wrappers
>>>> * move the grow/shrink logic before "out:" w/ "if (waited)"
>>>>
>>>> v1 -> v2:
>>>> * change kvm_vcpu_block to read halt_poll_ns from the vcpu
>>>> instead of
>>>> the module parameter
>>>> * use the shrink/grow matrix which is suggested by David
>>>> * set halt_poll_ns_max to 2ms
>>>>
>>>> There is a downside to halt_poll_ns: polling still happens for idle
>>>> VCPUs, which can waste CPU cycles. This patchset adds the ability to
>>>> adjust halt_poll_ns dynamically: it grows halt_poll_ns when an
>>>> interrupt arrives and shrinks it when an idle VCPU is detected.
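>>>>
>>>> Roughly, the adjustment amounts to the following sketch (function
>>>> names and the 10us initial step are illustrative here, not the exact
>>>> patch code, see the patches themselves for the real logic):
>>>>
>>>>     static void grow_halt_poll_ns(struct kvm_vcpu *vcpu)
>>>>     {
>>>>             unsigned int val = vcpu->halt_poll_ns;
>>>>
>>>>             /* vcpu->halt_poll_ns starts at zero (see v3 notes) */
>>>>             if (val == 0)
>>>>                     val = 10000;    /* assumed first step, in ns */
>>>>             else
>>>>                     val *= halt_poll_ns_grow;
>>>>
>>>>             /* since v3, halt_poll_ns itself is the upper bound */
>>>>             if (val > halt_poll_ns)
>>>>                     val = halt_poll_ns;
>>>>             vcpu->halt_poll_ns = val;
>>>>     }
>>>>
>>>>     static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
>>>>     {
>>>>             /* a shrink divisor of 0 means "stop polling entirely" */
>>>>             if (halt_poll_ns_shrink == 0)
>>>>                     vcpu->halt_poll_ns = 0;
>>>>             else
>>>>                     vcpu->halt_poll_ns /= halt_poll_ns_shrink;
>>>>     }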
>>>>
>>>> There are two new kernel parameters for tuning halt_poll_ns:
>>>> halt_poll_ns_grow and halt_poll_ns_shrink.
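>>>>
>>>> Both are ordinary module parameters; modulo the exact defaults (the
>>>> values below are assumptions, not the patch), the definitions look
>>>> like:
>>>>
>>>>     /* multiplier applied when growing vcpu->halt_poll_ns */
>>>>     static unsigned int halt_poll_ns_grow = 2;
>>>>     module_param(halt_poll_ns_grow, uint, S_IRUGO);
>>>>
>>>>     /* divisor applied when shrinking; 0 shrinks straight to 0 */
>>>>     static unsigned int halt_poll_ns_shrink;
>>>>     module_param(halt_poll_ns_shrink, uint, S_IRUGO);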
>>>>
>>>>
>>>> Tested w/ a high cpu overcommit ratio and pinned vCPUs; the
>>>> halt_poll_ns of always-on halt-poll is the default 500000ns, and the
>>>> max halt_poll_ns of dynamic halt-poll is 2ms. Then watch the %C0 in
>>>> the output of the Powertop tool. The test method largely follows
>>>> David's.
>>>>
>>>> +-----------------+----------------+-------------------+
>>>> | | | |
>>>> | w/o halt-poll | w/ halt-poll | dynamic halt-poll |
>>>> +-----------------+----------------+-------------------+
>>>> | | | |
>>>> | ~0.9% | ~1.8% | ~1.2% |
>>>> +-----------------+----------------+-------------------+
>>>>
>>>> Always-on halt-poll increases cpu usage by ~0.9% for idle vCPUs,
>>>> while dynamic halt-poll drops that to ~0.3%, i.e. it removes about
>>>> 67% ((0.9 - 0.3) / 0.9) of the overhead introduced by always-on
>>>> halt-poll.
>>>>
>>>> Wanpeng Li (3):
>>>> KVM: make halt_poll_ns per-VCPU
>>>> KVM: dynamic halt_poll_ns adjustment
>>>> KVM: trace kvm_halt_poll_ns grow/shrink
>>>>
>>>> include/linux/kvm_host.h   |  1 +
>>>> include/trace/events/kvm.h | 30 ++++++++++++++++++++++++++++
>>>> virt/kvm/kvm_main.c        | 50 +++++++++++++++++++++++++++++++++++++++++++---
>>>> 3 files changed, 78 insertions(+), 3 deletions(-)
>>>
>>
>