Message-ID: <51CAAA26.4090204@linux.vnet.ibm.com>
Date: Wed, 26 Jun 2013 14:15:26 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: habanero@...ux.vnet.ibm.com
CC: gleb@...hat.com, mingo@...hat.com, jeremy@...p.org, x86@...nel.org,
konrad.wilk@...cle.com, hpa@...or.com, pbonzini@...hat.com,
linux-doc@...r.kernel.org, xen-devel@...ts.xensource.com,
peterz@...radead.org, mtosatti@...hat.com,
stefano.stabellini@...citrix.com, andi@...stfloor.org,
attilio.rao@...rix.com, ouyang@...pitt.edu, gregkh@...e.de,
agraf@...e.de, chegu_vinod@...com, torvalds@...ux-foundation.org,
avi.kivity@...il.com, tglx@...utronix.de, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, stephan.diestelhorst@....com,
riel@...hat.com, drjones@...hat.com,
virtualization@...ts.linux-foundation.org,
srivatsa.vaddagiri@...il.com
Subject: Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
On 06/25/2013 08:20 PM, Andrew Theurer wrote:
> On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote:
>> This series replaces the existing paravirtualized spinlock mechanism
>> with a paravirtualized ticketlock mechanism. The series provides
>> implementation for both Xen and KVM.
>>
>> Changes in V9:
>> - Changed spin_threshold to 32k to avoid excess halt exits that are
>> causing undercommit degradation (after PLE handler improvement).
>> - Added kvm_irq_delivery_to_apic (suggested by Gleb)
>> - Optimized halt exit path to use PLE handler
>>
>> V8 of PVspinlock was posted last year. After Avi's suggestions to look
>> at PLE handler's improvements, various optimizations in PLE handling
>> have been tried.
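For anyone skimming the thread, here is a much-simplified, self-contained C
sketch of the spin-then-halt idea behind the series. The names pv_halt(),
pv_kick() and the lock layout are illustrative stand-ins, not the actual
interfaces from the patches; only the SPIN_THRESHOLD value (32k) is taken
from the changelog above.

#include <stdatomic.h>
#include <stdint.h>

#define SPIN_THRESHOLD (1U << 15)       /* 32k spins before halting, as in V9 */

struct ticketlock {
        atomic_ushort head;             /* ticket currently being served */
        atomic_ushort tail;             /* next ticket to hand out (both start at 0) */
};

/* Stand-ins for the real guest<->host interface (hypothetical names). */
static void pv_halt(struct ticketlock *lk, uint16_t ticket)
{
        (void)lk; (void)ticket;         /* real code: halt this vCPU until kicked */
}

static void pv_kick(struct ticketlock *lk, uint16_t ticket)
{
        (void)lk; (void)ticket;         /* real code: ask the host to wake that vCPU */
}

static void ticket_lock(struct ticketlock *lk)
{
        uint16_t me = atomic_fetch_add(&lk->tail, 1);   /* grab a ticket */

        for (;;) {
                /* Spin a bounded number of times hoping the holder releases soon. */
                for (unsigned int i = 0; i < SPIN_THRESHOLD; i++)
                        if (atomic_load(&lk->head) == me)
                                return;                 /* our turn */
                /* Still waiting (holder likely preempted): stop burning cycles. */
                pv_halt(lk, me);
        }
}

static void ticket_unlock(struct ticketlock *lk)
{
        uint16_t next = (uint16_t)(atomic_fetch_add(&lk->head, 1) + 1);
        pv_kick(lk, next);              /* wake the next waiter, if it halted */
}

The actual patches are more involved (e.g. a slowpath flag so unlock knows
whether anyone is halted), but this is the shape that makes the 32k threshold
a trade-off: too low and under-committed guests take needless halt exits, too
high and over-committed guests burn their timeslice spinning.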
>
> Sorry for not posting this sooner. I have tested the v9 pv-ticketlock
> patches in 1x and 2x over-commit with 10-vcpu and 20-vcpu VMs. I have
> tested these patches with and without PLE, as PLE is still not scalable
> with large VMs.
>
Hi Andrew,
Thanks for testing.
> System: x3850X5, 40 cores, 80 threads
>
>
> 1x over-commit with 10-vCPU VMs (8 VMs) all running dbench:
> ----------------------------------------------------------
> Configuration          Total Throughput (MB/s)   Notes
>
> 3.10-default-ple_on 22945 5% CPU in host kernel, 2% spin_lock in guests
> 3.10-default-ple_off 23184 5% CPU in host kernel, 2% spin_lock in guests
> 3.10-pvticket-ple_on 22895 5% CPU in host kernel, 2% spin_lock in guests
> 3.10-pvticket-ple_off 23051 5% CPU in host kernel, 2% spin_lock in guests
> [all 1x results look good here]
Yes, the 1x results all look very close to each other here.
>
>
> 2x over-commit with 10-vCPU VMs (16 VMs) all running dbench:
> -----------------------------------------------------------
> Configuration          Total Throughput (MB/s)   Notes
>
> 3.10-default-ple_on		6287		55% CPU in host kernel, 17% spin_lock in guests
> 3.10-default-ple_off 1849 2% CPU in host kernel, 95% spin_lock in guests
> 3.10-pvticket-ple_on 6691 50% CPU in host kernel, 15% spin_lock in guests
> 3.10-pvticket-ple_off 16464 8% CPU in host kernel, 33% spin_lock in guests
I see a 6.426% improvement with ple_on
and a 161.87% improvement with ple_off. I think this is a very good sign
for the patches.
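Spelling the arithmetic out, with the default-ple_on result (6287 MB/s) as
the baseline in both cases:

  pvticket-ple_on : (6691  - 6287) / 6287 =   6.4%
  pvticket-ple_off: (16464 - 6287) / 6287 = 161.9%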
> [PLE hinders pv-ticket improvements, but even with PLE off,
> we are still off from ideal throughput (somewhere >20000)]
>
Okay. The ideal throughput you are referring to would be getting at least
around 80% of the 1x throughput under over-commit. Yes, we are still far
away from there.
>
> 1x over-commit with 20-vCPU VMs (4 VMs) all running dbench:
> ----------------------------------------------------------
> Configuration          Total Throughput (MB/s)   Notes
>
> 3.10-default-ple_on 22736 6% CPU in host kernel, 3% spin_lock in guests
> 3.10-default-ple_off 23377 5% CPU in host kernel, 3% spin_lock in guests
> 3.10-pvticket-ple_on 22471 6% CPU in host kernel, 3% spin_lock in guests
> 3.10-pvticket-ple_off 23445 5% CPU in host kernel, 3% spin_lock in guests
> [1x looking fine here]
>
I see ple_off is a little better here.
>
> 2x over-commit with 20-vCPU VMs (8 VMs) all running dbench:
> ----------------------------------------------------------
> Configuration          Total Throughput (MB/s)   Notes
>
> 3.10-default-ple_on 1965 70% CPU in host kernel, 34% spin_lock in guests
> 3.10-default-ple_off 226 2% CPU in host kernel, 94% spin_lock in guests
> 3.10-pvticket-ple_on 1942 70% CPU in host kernel, 35% spin_lock in guests
> 3.10-pvticket-ple_off 8003 11% CPU in host kernel, 70% spin_lock in guests
> [quite bad all around, but pv-tickets with PLE off the best so far.
> Still quite a bit off from ideal throughput]
This is again a remarkable improvement (307%, again taking default-ple_on as
the baseline).
This motivates me to add a patch to disable PLE when pvspinlock is in use.
Probably we can add a hypercall that disables PLE during KVM guest init.
The only problem I see is what happens when the guests are mixed
(i.e. one guest has pvspinlock support but another does not, while the
host supports pv).
/me thinks
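Very roughly, the guest side of that idea could look like the sketch below.
Everything here is hypothetical (the function name, the KVM_HC_DISABLE_PLE
hypercall number); nothing like it exists in this series. Only
kvm_para_available()/kvm_hypercall0() are existing guest helpers from
asm/kvm_para.h.

/* Hypothetical sketch, not part of this series: during pv-spinlock init
 * the guest tells the host it has pv-ticketlocks, so the host could relax
 * or disable PLE exiting for this VM's vCPUs. */
static void kvm_pv_spinlock_init(void)
{
        if (!kvm_para_available())
                return;

        /* ... switch the pv lock ops to the ticketlock slowpath ... */

        kvm_hypercall0(KVM_HC_DISABLE_PLE);     /* made-up hypercall number */
}

The host side is the awkward bit: PLE is currently a module-level knob
(e.g. kvm_intel's ple_gap=0 turns it off for every VM), so honouring such a
hypercall cleanly would need per-VM or per-vCPU control, and that is exactly
where the mixed-guest case bites.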
>
> In summary, I would state that the pv-ticket is an overall win, but the
> current PLE handler tends to "get in the way" on these larger guests.
>
> -Andrew
>