[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <49D8E018.8050907@codemonkey.ws>
Date: Sun, 05 Apr 2009 11:45:12 -0500
From: Anthony Liguori <anthony@...emonkey.ws>
To: Avi Kivity <avi@...hat.com>
CC: Rusty Russell <rusty@...tcorp.com.au>,
Gregory Haskins <ghaskins@...ell.com>,
linux-kernel@...r.kernel.org, agraf@...e.de, pmullaney@...ell.com,
pmorreale@...ell.com, netdev@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH 00/17] virtual-bus
Avi Kivity wrote:
> Anthony Liguori wrote:
>>
>> What we need is:
>>
>> 1) Lockless MMIO/PIO dispatch (there should be two IO registration
>> interfaces, a new lockless one and the legacy one)
>
> Not sure exactly how much this is needed, since when there is no
> contention, locks are almost free (there's the atomic and cacheline
> bounce, but no syscall).
There should be no contention but I strongly suspect there is more often
than we think. The IO thread can potentially hold the lock for a very
long period of time. Take into consideration things like qcow2 metadata
read/write, VNC server updates, etc..
> For any long operations, we should drop the lock (of course we need
> some kind of read/write lock or rcu to avoid hotunplug or
> reconfiguration).
>
>> 2) A virtio-net thread that's independent of the IO thread.
>
> Yes -- that saves us all the select() prologue (calculating new
> timeout) and the select() itself.
In an ideal world, we could do the submission via io_submit in the VCPU
context, not worry about the copy latency (because we're zero copy).
Then our packet transmission latency is consistently low because the
path is consistent and lockless. This is why dropping the lock is so
important, it's not enough to usually have low latency. We need to try
and have latency as low as possible as often as possible.
Regards,
Anthony Liguori
>
>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Powered by blists - more mailing lists