Message-ID: <CA+scX6mFHMwmFDFoStvghHf8piHB4LV+Zq7CiiuzRWUoSTKESA@mail.gmail.com>
Date: Fri, 9 Dec 2016 08:00:25 -0500
From: Weiwei Jia <harrynjit@...il.com>
To: Pankaj Gupta <pagupta@...hat.com>
Cc: qemu-devel@...gnu.org, mingo@...hat.com, efault@....de,
dmitry adamushko <dmitry.adamushko@...il.com>,
vatsa@...ux.vnet.ibm.com, tglx@...utronix.de, pzijlstr@...hat.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: Timeslice of vCPU thread in QEMU/KVM is not stable

Hi Pankaj Gupta,

Thanks for your reply. I have found the problem after debugging the
Linux kernel. The problem is that once there is an I/O thread running
on top of the vCPU2 thread of VM1, some mutex (synchronization)
activity is produced, so the vCPU2 thread of VM1 gets preempted by the
vCPU2 thread of VM2. After I set
"/proc/sys/kernel/sched_wakeup_granularity_ns" back to the default
value (3 milliseconds), the timeslice is stable again even while the
I/O thread is running on top of the vCPU2 thread of VM1. That is, the
vCPU2 thread of VM2 can no longer preempt the vCPU2 thread of VM1,
because "/proc/sys/kernel/sched_wakeup_granularity_ns" is 3
milliseconds.
Thank you again :)
Best Regards,
Harry
On Fri, Dec 9, 2016 at 3:07 AM, Pankaj Gupta <pagupta@...hat.com> wrote:
> Hello,
>
>>
>> Hi everyone,
>>
>> I am testing the timeslice of a vCPU thread in QEMU/KVM. In
>> principle, the timeslice should be stable under the following
>> workload, but in my experiments it is not. I would appreciate any
>> suggestions. Thanks in advance.
>>
>> Workload settings:
>> In the VMM there are 6 pCPUs: pCPU0, pCPU1, pCPU2, pCPU3, pCPU4 and
>> pCPU5. There are two kernel virtual machines (VM1 and VM2) on the
>> VMM. Each VM has 5 virtual CPUs (vCPU0, vCPU1, vCPU2, vCPU3 and
>> vCPU4). vCPU0 of VM1 and vCPU0 of VM2 are pinned to pCPU0 and pCPU5
>> respectively, to handle interrupts exclusively. vCPU1 of VM1 and
>> vCPU1 of VM2 are pinned to pCPU1; vCPU2 of VM1 and vCPU2 of VM2 are
>> pinned to pCPU2; vCPU3 of VM1 and vCPU3 of VM2 are pinned to pCPU3;
>> vCPU4 of VM1 and vCPU4 of VM2 are pinned to pCPU4.
>>
>> There is one CPU-intensive thread (while(1){i++}) on each vCPU in
>> VM1 and VM2, so that no vCPU is ever idle (a small sketch of this
>> pinned busy-loop thread follows below). In VM1, I start one I/O
>> thread on vCPU2; it reads 4KB from disk per request (8GB in total).
>> The I/O scheduler in VM1 and VM2 is noop; the I/O scheduler in the
>> VMM is CFQ.
>>
>> In VM1 and VM2, "/proc/sys/kernel/sched_min_granularity_ns" and
>> "/proc/sys/kernel/sched_latency_ns" are set to 100 microseconds,
>> and "/proc/sys/kernel/sched_wakeup_granularity_ns" is set to 0.
>> In the VMM, "/proc/sys/kernel/sched_min_granularity_ns" is set to
>> 2.25 milliseconds, "/proc/sys/kernel/sched_latency_ns" is set to 18
>> milliseconds, and "/proc/sys/kernel/sched_wakeup_granularity_ns" is
>> set to 0. I also pinned the I/O worker threads started by QEMU to
>> pCPU5. The scheduling class I use is CFS.
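>>
>> (The per-vCPU CPU-intensive thread is essentially the following; a
>> minimal sketch, assuming Linux sched_setaffinity() for the pinning
>> inside the guest, with the vCPU number passed on the command line:)
>>
>> #define _GNU_SOURCE
>> #include <sched.h>
>> #include <stdio.h>
>> #include <stdlib.h>
>>
>> int main(int argc, char **argv)
>> {
>>         int cpu = (argc > 1) ? atoi(argv[1]) : 0; /* target vCPU */
>>         volatile unsigned long i = 0;
>>         cpu_set_t set;
>>
>>         /* pin this thread to the given vCPU inside the guest */
>>         CPU_ZERO(&set);
>>         CPU_SET(cpu, &set);
>>         if (sched_setaffinity(0, sizeof(set), &set) != 0) {
>>                 perror("sched_setaffinity");
>>                 return 1;
>>         }
>>         while (1)       /* the while(1){i++} load described above */
>>                 i++;
>>         return 0;
>> }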
>>
>> Linux Kernel version for VMM is: 3.16.39
>> Linux Kernel version for VM1 and VM2 is: 4.7.4
>> QEMU emulator version is: 2.0.0
>>
>> I measure the timeslice of the vCPU2 thread of VM1 in the VMM under
>> the above workload settings, and the experiment shows that the
>> timeslice is not stable. I also find that after the I/O thread on
>> vCPU2 in VM1 finishes, the timeslice of the vCPU2 thread of VM1
>> becomes stable again. From the experiment, it seems that the
>> unstable timeslice of the vCPU2 thread of VM1 is caused by the I/O
>> thread running on it in VM1. However, I think the I/O thread on
>> vCPU2 in VM1 should not affect its timeslice, since every vCPU in
>> VM1 and VM2 already runs one CPU-intensive thread (while(1){i++}).
>> Please give me some suggestions if you have any. Thank you.
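>>
>> (One way to sample this from the VMM, for example, is to read the
>> vCPU thread's /proc/<tid>/schedstat periodically, since it reports
>> on-CPU time, runqueue wait time and the number of timeslices run; a
>> rough sketch, assuming schedstats are enabled and taking the tid as
>> a command-line argument:)
>>
>> #include <stdio.h>
>> #include <unistd.h>
>>
>> /* /proc/<tid>/schedstat: on-CPU time (ns), runqueue wait time (ns),
>>  * number of timeslices run.  Diffing two samples gives an estimate
>>  * of the average timeslice over the interval. */
>> static int read_schedstat(const char *path, unsigned long long v[3])
>> {
>>         FILE *f = fopen(path, "r");
>>         int ok;
>>
>>         if (!f)
>>                 return -1;
>>         ok = (fscanf(f, "%llu %llu %llu", &v[0], &v[1], &v[2]) == 3);
>>         fclose(f);
>>         return ok ? 0 : -1;
>> }
>>
>> int main(int argc, char **argv)
>> {
>>         char path[64];
>>         unsigned long long a[3], b[3];
>>
>>         if (argc < 2) {
>>                 fprintf(stderr, "usage: %s <tid-of-vCPU-thread>\n",
>>                         argv[0]);
>>                 return 1;
>>         }
>>         snprintf(path, sizeof(path), "/proc/%s/schedstat", argv[1]);
>>
>>         if (read_schedstat(path, a))
>>                 return 1;
>>         sleep(1);                       /* sampling interval */
>>         if (read_schedstat(path, b))
>>                 return 1;
>>
>>         if (b[2] > a[2])
>>                 printf("avg timeslice: %llu ns over %llu slices\n",
>>                        (b[0] - a[0]) / (b[2] - a[2]), b[2] - a[2]);
>>         return 0;
>> }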
>
> I think you need to check what else is being scheduled on pCPU2 (the
> physical CPU). If you want to prevent any other task from being
> scheduled on pCPU2, you need to isolate that core so the scheduler
> does not run anything else on it.
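>
> Something along these lines (a rough, illustrative sketch only) can
> at least show which tasks last ran on pCPU2; 'perf sched' or the
> ftrace sched_switch events give a more complete picture:
>
> #include <dirent.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
>
> /* List tasks whose last-run CPU (field 39 of /proc/<pid>/stat) is
>  * the CPU number given on the command line. */
> int main(int argc, char **argv)
> {
>         int target = (argc > 1) ? atoi(argv[1]) : 2; /* pCPU number */
>         DIR *proc = opendir("/proc");
>         struct dirent *de;
>         char path[288], line[4096];
>
>         if (!proc)
>                 return 1;
>         while ((de = readdir(proc)) != NULL) {
>                 FILE *f;
>                 char *p;
>                 int field;
>
>                 if (de->d_name[0] < '0' || de->d_name[0] > '9')
>                         continue;               /* not a pid entry */
>                 snprintf(path, sizeof(path), "/proc/%s/stat",
>                          de->d_name);
>                 f = fopen(path, "r");
>                 if (!f)
>                         continue;
>                 p = fgets(line, sizeof(line), f) ?
>                         strrchr(line, ')') : NULL;
>                 fclose(f);
>                 if (!p)
>                         continue;
>                 /* 'processor' is the 37th field after the ")" that
>                  * closes the comm field */
>                 for (field = 0, p = strtok(p + 1, " ");
>                      p && field < 36;
>                      field++, p = strtok(NULL, " "))
>                         ;
>                 if (p && atoi(p) == target)
>                         printf("pid %s last ran on CPU %d\n",
>                                de->d_name, target);
>         }
>         closedir(proc);
>         return 0;
> }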
>
>>
>> Best,
>> Harry