Message-ID: <554709BB.7090400@suse.com>
Date: Mon, 04 May 2015 07:55:07 +0200
From: Juergen Gross <jgross@...e.com>
To: Jeremy Fitzhardinge <jeremy@...p.org>,
linux-kernel@...r.kernel.org, x86@...nel.org, hpa@...or.com,
tglx@...utronix.de, mingo@...hat.com,
xen-devel@...ts.xensource.com, konrad.wilk@...cle.com,
david.vrabel@...rix.com, boris.ostrovsky@...cle.com,
chrisw@...s-sol.org, akataria@...are.com, rusty@...tcorp.com.au,
virtualization@...ts.linux-foundation.org, gleb@...nel.org,
pbonzini@...hat.com, kvm@...r.kernel.org
Subject: Re: [PATCH 0/6] x86: reduce paravirtualized spinlock overhead

On 04/30/2015 06:39 PM, Jeremy Fitzhardinge wrote:
> On 04/30/2015 03:53 AM, Juergen Gross wrote:
>> Paravirtualized spinlocks produce some overhead even if the kernel is
>> running on bare metal. The main reason is the greater complexity of the
>> locking and unlocking functions. Unlocking in particular is no longer a
>> single instruction, but so complex that it is no longer inlined.
>>
>> This patch series addresses this issue by adding two more pvops
>> functions to reduce the size of the inlined spinlock functions. When
>> running on bare metal, unlocking is again basically one instruction.
>
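
To make this concrete, here is a minimal sketch of the idea (simplified,
with illustrative names; not the actual code from the series): the
unlock fast path is a single atomic add, and the pvops-style callback is
taken only when a virtualized waiter has flagged itself:

/*
 * Illustrative sketch only: a ticket lock whose unlock fast path is a
 * single atomic add. Tickets count by 2 so that the low bit of 'head'
 * can mark blocked waiters. pv_kick_waiter and TICKET_SLOWPATH_FLAG's
 * layout are made up for this example.
 */
#include <stdatomic.h>

#define TICKET_SLOWPATH_FLAG 1u     /* low bit of head: waiter blocked */

struct spinlock {
    atomic_uint head;               /* owner ticket (+ slowpath flag) */
    atomic_uint tail;               /* next free ticket */
};

static void kick_noop(struct spinlock *l, unsigned int t) { (void)l; (void)t; }

/* pvops-style indirection; on bare metal this stays a no-op. */
static void (*pv_kick_waiter)(struct spinlock *, unsigned int) = kick_noop;

static void spin_lock(struct spinlock *lock)
{
    unsigned int me = atomic_fetch_add(&lock->tail, 2);

    /* A pv version would block in the hypervisor after spinning a
     * while and set TICKET_SLOWPATH_FLAG; here we just spin. */
    while ((atomic_load(&lock->head) & ~TICKET_SLOWPATH_FLAG) != me)
        ;
}

static void spin_unlock(struct spinlock *lock)
{
    /* Bare-metal fast path: one atomic add, small enough to inline. */
    unsigned int old = atomic_fetch_add(&lock->head, 2);

    /* Only with a blocked virtual CPU do we take the out-of-line slow
     * path; a real version would also clear the flag. */
    if (old & TICKET_SLOWPATH_FLAG)
        pv_kick_waiter(lock, (old & ~TICKET_SLOWPATH_FLAG) + 2);
}
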
> Out of curiosity, is there a measurable difference?
I did a small measurement of the pure locking functions on bare metal
without and with my patches:

spin_lock() for the first time (lock and code not in cache) dropped from
about 600 to 500 cycles.

spin_unlock() for the first time dropped from 145 to 87 cycles.

spin_lock() in a loop dropped from 48 to 45 cycles.

spin_unlock() in the same loop dropped from 24 to 22 cycles.
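
For reference, a measurement of this kind can be sketched in user space
with rdtsc timestamps around the operations. A minimal, illustrative
harness follows (not the benchmark actually used above; the lock is a
stub so the example compiles on its own, x86 only):

/* Time one cold call, then the average over a hot loop. */
#include <stdio.h>
#include <x86intrin.h>          /* __rdtsc(), _mm_lfence() */

static char lock_word;          /* stand-in for the spinlock */

static void lock_stub(void)   { __atomic_test_and_set(&lock_word, __ATOMIC_ACQUIRE); }
static void unlock_stub(void) { __atomic_clear(&lock_word, __ATOMIC_RELEASE); }
static void pair(void)        { lock_stub(); unlock_stub(); }

/* Average cycles per call of fn() over iters iterations. */
static unsigned long long avg_cycles(void (*fn)(void), int iters)
{
    _mm_lfence();               /* keep rdtsc from reordering */
    unsigned long long t0 = __rdtsc();
    for (int i = 0; i < iters; i++)
        fn();
    _mm_lfence();
    return (__rdtsc() - t0) / iters;
}

int main(void)
{
    /* "First time": single call, lock word and code still cold. */
    printf("cold lock:   %llu cycles\n", avg_cycles(lock_stub, 1));
    printf("cold unlock: %llu cycles\n", avg_cycles(unlock_stub, 1));

    /* "In a loop": everything hot in cache, averaged per pair. */
    printf("hot lock+unlock pair: %llu cycles\n", avg_cycles(pair, 1000000));
    return 0;
}
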
Juergen