Message-ID: <4A0DB988.6000009@goop.org>
Date: Fri, 15 May 2009 11:50:48 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Jan Beulich <JBeulich@...ell.com>
CC: Ingo Molnar <mingo@...e.hu>, Jun Nakajima <jun.nakajima@...el.com>,
Xiaohui Xin <xiaohui.xin@...el.com>, Xin Li <xin.li@...el.com>,
Xen-devel <xen-devel@...ts.xensource.com>,
Nick Piggin <npiggin@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [Xen-devel] Performance overhead of paravirt_ops on native identified
Jan Beulich wrote:
> A patch for the pv-ops kernel would require some time. What I can give you
> right away - just for reference - are the sources we currently use in our kernel:
> attached.
Hm, I see. Putting a call out to a pv-ops function in the ticket-lock
slow path looks pretty straightforward. The need for an extra lock on
the contended unlock side is a bit unfortunate; have you measured what
hit it adds? It seems to me you could avoid the problem by using
per-cpu storage rather than stack storage (though you'd need to copy
the per-cpu data to the stack when handling a nested spinlock).
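Something like this is what I have in mind (rough, uncompiled sketch;
the "spinning" structure and the function names are just invented for
illustration):

	struct spinning {
		struct xen_spinlock *lock;	/* lock this cpu spins on */
		unsigned int token;		/* our ticket */
	};
	static DEFINE_PER_CPU(struct spinning, spinning);

	static void spin_lock_slow(struct xen_spinlock *lock,
				   unsigned int token)
	{
		struct spinning *sp = &__get_cpu_var(spinning);
		struct spinning nested;

		/* Copy any outer spinning state to the stack, in case
		   this is a nested spinlock taken from an interrupt
		   handler. */
		nested = *sp;
		sp->lock = lock;
		sp->token = token;

		/* ... block in the hypervisor until the unlocker
		   kicks us ... */

		/* Restore the outer lock's spinning state. */
		*sp = nested;
	}

The contended unlock could then just scan the other cpus' per-cpu
spinning state to find someone to kick, rather than taking an extra
lock of its own.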
What's the thinking behind the xen_spin_adjust() stuff?
> static __always_inline void __ticket_spin_lock(raw_spinlock_t *lock)
> {
> 	unsigned int token, count;
> 	bool free;
>
> 	__ticket_spin_lock_preamble;
> 	if (unlikely(!free))
> 		token = xen_spin_adjust(lock, token);
> 	do {
> 		count = 1 << 10;
> 		__ticket_spin_lock_body;
> 	} while (unlikely(!count) && !xen_spin_wait(lock, token));
> }
How does this work? Doesn't it always go into the slowpath loop even if
the preamble got the lock with no contention?
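I'd have expected the uncontended case to return before entering the
loop at all, something like this (sketch, assuming the preamble leaves
"free" set when it takes the lock outright):

	__ticket_spin_lock_preamble;
	if (likely(free))
		return;		/* got it uncontended, skip the slowpath */
	token = xen_spin_adjust(lock, token);
	do {
		count = 1 << 10;
		__ticket_spin_lock_body;
	} while (unlikely(!count) && !xen_spin_wait(lock, token));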
J