Message-ID: <5532845E.3040107@redhat.com>
Date: Sat, 18 Apr 2015 18:20:46 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Andy Lutomirski <luto@...capital.net>,
Linus Torvalds <torvalds@...ux-foundation.org>
CC: John Stultz <john.stultz@...aro.org>,
Marcelo Tosatti <mtosatti@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Gleb Natapov <gleb@...nel.org>, kvm list <kvm@...r.kernel.org>,
Ralf Baechle <ralf@...ux-mips.org>,
Andrew Lutomirski <luto@...nel.org>
Subject: Re: [GIT PULL] First batch of KVM changes for 4.1
On 18/04/2015 00:25, Andy Lutomirski wrote:
>> Isn't the *whole* point of pvclock_clocksource_read() to be a native
>> rdtsc with scaling? How does it cause that kind of insane pain?
It's possible that your machine ends up with PVCLOCK_TSC_STABLE_BIT
clear, so you get an atomic cmpxchg in addition (and associated
cacheline bouncing, since anything reading the clocksource in the kernel
will cause that variable to bounce). But that's not too common on
recent machines.
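(For reference, and not the kernel's actual code: below is a minimal user-space sketch of the read protocol being discussed, with the structure layout and field names assumed from the shared-memory pvclock ABI. The final cmpxchg loop on a single shared last_value, taken only when PVCLOCK_TSC_STABLE_BIT is clear, is what causes the cross-CPU cacheline bouncing mentioned above. The 128-bit multiply is a simplification of the kernel's 64x32 fixed-point scaling.)

```c
#include <stdint.h>
#include <stdatomic.h>

/* Hypothetical mirror of the per-vCPU pvclock record; field names
 * assumed from the pvclock shared-memory protocol. */
struct pvclock_vcpu_time_info {
    uint32_t version;        /* odd while the host rewrites the record */
    uint64_t tsc_timestamp;  /* TSC value when the host filled this in */
    uint64_t system_time;    /* nanoseconds at tsc_timestamp */
    uint32_t tsc_to_system_mul;
    int8_t   tsc_shift;
    uint8_t  flags;          /* e.g. PVCLOCK_TSC_STABLE_BIT */
};

#define PVCLOCK_TSC_STABLE_BIT (1 << 0)

/* Global last-returned value: only touched when the stable bit is
 * clear, and the variable whose cacheline bounces between CPUs. */
static _Atomic uint64_t last_value;

static uint64_t scale_delta(uint64_t delta, uint32_t mul, int8_t shift)
{
    if (shift >= 0)
        delta <<= shift;
    else
        delta >>= -shift;
    /* 0.32 fixed-point multiply; simplified to a 128-bit product. */
    return (uint64_t)(((unsigned __int128)delta * mul) >> 32);
}

static uint64_t pvclock_read(const struct pvclock_vcpu_time_info *src,
                             uint64_t tsc /* stand-in for rdtsc() */)
{
    uint32_t version;
    uint64_t ret, last;

    /* Seqlock-style retry: the host bumps 'version' to odd before
     * updating the record and back to even afterwards.  The real
     * code also issues rdtsc_barrier() (lfence) in this loop. */
    do {
        version = src->version;
        ret = src->system_time +
              scale_delta(tsc - src->tsc_timestamp,
                          src->tsc_to_system_mul, src->tsc_shift);
    } while ((version & 1) || version != src->version);

    if (src->flags & PVCLOCK_TSC_STABLE_BIT)
        return ret;      /* fast path: no shared state touched */

    /* Stable bit clear: enforce global monotonicity via a cmpxchg
     * loop on the shared last_value. */
    do {
        last = atomic_load(&last_value);
        if (ret < last)
            return last; /* clamp to the last value handed out */
    } while (!atomic_compare_exchange_weak(&last_value, &last, ret));
    return ret;
}
```

The clamp-and-cmpxchg tail is the part that every reader on every CPU contends on, which is why clearing the stable bit hurts even when the arithmetic itself is cheap.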
Is the vsyscall faster for you, or does it degenerate to the syscall?
If it degenerates to the syscall, PVCLOCK_TSC_STABLE_BIT is clear on
your machine.
> An unnecessarily complicated protocol, a buggy host implementation,
> and an unnecessarily complicated guest implementation :(
pvclock_clocksource_read() itself is not scary and need not worry about
the buggy host implementation (preempt_disable makes things easy). It's
the vDSO stuff that has the scary parts.
There are a few micro-optimizations we could do (the guest
implementation _is_ unnecessarily baroque), but they may not be enough
if the rdtsc_barrier()s (lfence) are the performance killers. I will
look more closely on Monday.
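(For context, the barriers in question are the lfence pair the guest read path issues around rdtsc. A minimal user-space sketch, x86-only and using compiler intrinsics rather than the kernel's rdtsc_barrier() macro, looks like this:)

```c
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(), _mm_lfence() */

/* A fenced TSC read in the style of rdtsc_barrier().  The lfence on
 * each side keeps the CPU from reordering the rdtsc with surrounding
 * loads (in pvclock's case, the fields protected by the version
 * seqlock).  Each lfence stalls until prior loads complete, which is
 * why two of them per clock read can dominate the cost of the whole
 * pvclock_clocksource_read() path. */
static inline uint64_t rdtsc_ordered(void)
{
    uint64_t tsc;
    _mm_lfence();
    tsc = __rdtsc();
    _mm_lfence();
    return tsc;
}
```

Whether eliding or weakening these fences is safe depends on the ordering guarantees the host gives for its updates to the pvclock record, which is exactly the kind of thing worth looking at closely.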
Paolo
>> Oh well. Some paravirt person would need to look and care.
>
> The code there is a bit scary.
>
> --Andy
>
>>
>> Linus
>
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/