Message-ID: <55D38E37.5060709@redhat.com>
Date: Tue, 18 Aug 2015 12:57:43 -0700
From: Paolo Bonzini <pbonzini@...hat.com>
To: Avi Kivity <avi.kivity@...il.com>,
Radim Krčmář <rkrcmar@...hat.com>,
linux-kernel@...r.kernel.org
Cc: kvm@...r.kernel.org
Subject: Re: [PATCH v2 4/5] KVM: add KVM_USER_EXIT vcpu ioctl for userspace exit
On 18/08/2015 11:30, Avi Kivity wrote:
>> KVM_USER_EXIT in practice should be so rare (at least with in-kernel
>> LAPIC) that I don't think this matters. KVM_USER_EXIT is relatively
>> uninteresting, it only exists to provide an alternative to signals that
>> doesn't require expensive atomics on each and every KVM_RUN. :(
>
> Ah, so the idea is to remove the cost of changing the signal mask?
Yes, it's explained in the cover letter.
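
To make the comparison concrete, here is a rough, untested userspace
sketch of the two kick mechanisms.  vcpu_fd/vcpu_thread, SIG_IPI and the
helper names are illustrative only, and KVM_USER_EXIT itself is the new
vcpu ioctl proposed by this series, so it only exists with the patched
headers:

  #include <linux/kvm.h>
  #include <pthread.h>
  #include <signal.h>
  #include <string.h>
  #include <sys/ioctl.h>

  #define SIG_IPI SIGUSR1          /* illustrative choice of kick signal */

  /* Today: signal-based kick.  Block SIG_IPI in the thread and hand KVM a
   * sigset with SIG_IPI unblocked, so that a pthread_kill() interrupts
   * KVM_RUN with EINTR.  The price is that KVM must swap the signal mask
   * in and out around every single KVM_RUN. */
  static void setup_signal_kick(int vcpu_fd)
  {
      sigset_t blocked, usable;
      char buf[sizeof(struct kvm_signal_mask) + sizeof(sigset_t)];
      struct kvm_signal_mask *sm = (struct kvm_signal_mask *)buf;

      sigemptyset(&blocked);
      sigaddset(&blocked, SIG_IPI);
      pthread_sigmask(SIG_BLOCK, &blocked, &usable);
      sigdelset(&usable, SIG_IPI);   /* unblocked only while in KVM_RUN */

      sm->len = 8;   /* size of the kernel's sigset on x86-64, not glibc's */
      memcpy(sm->sigset, &usable, sizeof(sigset_t));
      ioctl(vcpu_fd, KVM_SET_SIGNAL_MASK, sm);
  }

  static void kick_vcpu_signal(pthread_t vcpu_thread)
  {
      pthread_kill(vcpu_thread, SIG_IPI);
  }

  /* Proposed: no signal mask handling at all; another thread issues the
   * new vcpu ioctl and KVM_RUN returns to userspace. */
  static void kick_vcpu_user_exit(int vcpu_fd)
  {
  #ifdef KVM_USER_EXIT              /* only with this series applied */
      ioctl(vcpu_fd, KVM_USER_EXIT, 0);
  #endif
  }

The point of the second variant is that nothing has to touch the signal
mask on each KVM_RUN.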
> Yes, although it looks like a thread-local operation, it takes a
> process-wide lock.
IIRC the lock was only task-wide and uncontended. Problem is, it's on
the node that created the thread rather than the node that is running
it, and inter-node atomics are really, really slow.
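
For reference, the per-KVM_RUN cost in question comes from roughly this
fragment of the KVM_RUN path in virt/kvm/kvm_main.c (paraphrased from
memory, not an exact quote):

  if (vcpu->sigset_active)
          sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);

  r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run);

  if (vcpu->sigset_active)
          sigprocmask(SIG_SETMASK, &sigsaved, NULL);

Both sigprocmask() calls update the signal mask under a lock, and that
lock sits wherever the task's signal data was allocated, which is what
turns it into an inter-node access once the vCPU thread has moved.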
For guests spanning >1 host NUMA nodes it's not really practical to
ensure that the thread is created on the right node. Even for guests
that fit into 1 host node, if you rely on AutoNUMA the VCPUs are created
too early for AutoNUMA to have any effect. And newer machines have
frighteningly small nodes (two nodes per socket, so it's something like
7 pCPUs if you don't have hyper-threading enabled). True, the NUMA
penalty within the same socket is not huge, but it still costs a few
thousand clock cycles on vmexit.flat and this feature sweeps it away
completely.
> I expect most user wakeups are via irqfd, so indeed the performance of
> KVM_USER_EXIT is uninteresting.
Yup, either irqfd or KVM_SIGNAL_MSI.
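
Roughly, from userspace, those two wakeup paths look like this (sketch
only; vm_fd is an open VM fd, and the GSI and MSI address/data values
are placeholders for whatever the guest programmed):

  #include <linux/kvm.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/eventfd.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* irqfd: bind an eventfd to a guest GSI once; afterwards a plain
   * write() to the eventfd injects the interrupt, no vcpu ioctl needed. */
  static int setup_irqfd(int vm_fd, uint32_t gsi)
  {
      struct kvm_irqfd irqfd;
      int efd = eventfd(0, EFD_CLOEXEC);

      memset(&irqfd, 0, sizeof(irqfd));
      irqfd.fd  = efd;
      irqfd.gsi = gsi;
      ioctl(vm_fd, KVM_IRQFD, &irqfd);
      return efd;
  }

  static void inject_via_irqfd(int efd)
  {
      uint64_t one = 1;
      write(efd, &one, sizeof(one));
  }

  /* KVM_SIGNAL_MSI: inject a single MSI directly into the VM. */
  static void inject_msi(int vm_fd, uint64_t addr, uint32_t data)
  {
      struct kvm_msi msi;

      memset(&msi, 0, sizeof(msi));
      msi.address_lo = (uint32_t)addr;
      msi.address_hi = (uint32_t)(addr >> 32);
      msi.data       = data;
      ioctl(vm_fd, KVM_SIGNAL_MSI, &msi);
  }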
Paolo