Message-ID: <4F314B2A.4000709@redhat.com>
Date: Tue, 07 Feb 2012 18:02:50 +0200
From: Avi Kivity <avi@...hat.com>
To: Anthony Liguori <anthony@...emonkey.ws>
CC: Rob Earhart <earhart@...gle.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
qemu-devel <qemu-devel@...gnu.org>
Subject: Re: [Qemu-devel] [RFC] Next gen kvm api
On 02/07/2012 05:17 PM, Anthony Liguori wrote:
> On 02/07/2012 06:03 AM, Avi Kivity wrote:
>> On 02/06/2012 09:11 PM, Anthony Liguori wrote:
>>>
>>> I'm not so sure.  ioeventfds and a future mmio-over-socketpair have to
>>> put the kthread to sleep while it waits for the other end to process
>>> it.  This is effectively equivalent to a heavy weight exit.  The
>>> difference in cost is dropping to userspace, which is really
>>> negligible these days (< 100 cycles).
>>
>> On what machine did you measure these wonderful numbers?
>
> A syscall is what I mean by "dropping to userspace", not the cost of a
> heavy weight exit.
Ah. But then ioeventfd has that as well, unless the other end is in the
kernel too.
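
(A rough sketch, for concreteness: with KVM_IOEVENTFD the vcpu thread
never leaves the kernel on the doorbell write, but whoever is blocked on
the eventfd still has to be woken.  The address, length and helper name
below are made up for illustration.)

#include <stdint.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Register an eventfd as a doorbell: a 4-byte guest write to @gpa is
 * turned into a signal on the eventfd instead of an exit to userspace.
 * @gpa is an illustrative guest-physical address. */
static int register_doorbell(int vm_fd, uint64_t gpa)
{
	struct kvm_ioeventfd ioev;
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;

	memset(&ioev, 0, sizeof(ioev));
	ioev.addr = gpa;	/* MMIO doorbell address */
	ioev.len  = 4;		/* match 4-byte writes */
	ioev.fd   = efd;

	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
		return -1;

	/* The consumer blocks on efd; if it lives on another core or in
	 * another process (as with a socketpair-style transport), waking
	 * it is a context switch even though the vcpu stays in the
	 * kernel. */
	return efd;
}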
> I think a heavy weight exit is still around a few thousand cycles.
>
> Any Nehalem-class or better processor should have a syscall cost of
> around that unless I'm wildly mistaken.
>
That's what I remember too.
>>
>> But I agree a heavyweight exit is probably faster than a double
>> context switch
>> on a remote core.
>
> I meant, if you already need to take a heavyweight exit (and you do to
> schedule something else on the core), then the only additional cost is
> taking a syscall return to userspace *first* before scheduling another
> process. That overhead is pretty low.
Yeah.
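
To make that concrete, a minimal sketch of the usual vcpu run loop;
handle_mmio() is a made-up stand-in for device emulation, and error
handling is mostly elided.

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Made-up stand-in for whatever device emulation userspace does. */
static void handle_mmio(struct kvm_run *run)
{
	printf("mmio %s at 0x%llx\n",
	       run->mmio.is_write ? "write" : "read",
	       (unsigned long long)run->mmio.phys_addr);
}

static void vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		/* A heavyweight exit shows up here as the ioctl
		 * returning: one syscall return to userspace, after
		 * which the thread can handle the exit or simply let
		 * the scheduler run something else. */
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			return;

		switch (run->exit_reason) {
		case KVM_EXIT_MMIO:
			handle_mmio(run);
			break;		/* re-enter the guest */
		default:
			return;
		}
	}
}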
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
--