Message-ID: <AANLkTinmqRLyGUtyfkCnrgkBsh2frQ3oO8hd6uE2O0he@mail.gmail.com>
Date: Wed, 9 Feb 2011 07:43:45 +0000
From: Stefan Hajnoczi <stefanha@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Rusty Russell <rusty@...tcorp.com.au>,
Krishna Kumar2 <krkumar2@...ibm.com>,
David Miller <davem@...emloft.net>, kvm@...r.kernel.org,
Shirley Ma <mashirle@...ibm.com>, netdev@...r.kernel.org,
steved@...ibm.com
Subject: Re: Network performance with small packets
On Wed, Feb 9, 2011 at 1:55 AM, Michael S. Tsirkin <mst@...hat.com> wrote:
> On Wed, Feb 09, 2011 at 12:09:35PM +1030, Rusty Russell wrote:
>> On Wed, 9 Feb 2011 11:23:45 am Michael S. Tsirkin wrote:
>> > On Wed, Feb 09, 2011 at 11:07:20AM +1030, Rusty Russell wrote:
>> > > On Wed, 2 Feb 2011 03:12:22 pm Michael S. Tsirkin wrote:
>> > > > On Wed, Feb 02, 2011 at 10:09:18AM +0530, Krishna Kumar2 wrote:
>> > > > > > "Michael S. Tsirkin" <mst@...hat.com> 02/02/2011 03:11 AM
>> > > > > >
>> > > > > > On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
>> > > > > > > On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
>> > > > > > > > Confused. We compare capacity to skb frags, no?
>> > > > > > > > That's sg I think ...
>> > > > > > >
>> > > > > > > Current guest kernels use indirect buffers; num_free returns how many
>> > > > > > > descriptors are available, not how many skb frags. So it's wrong here.
>> > > > > > >
>> > > > > > > Shirley
>> > > > > >
>> > > > > > I see. Good point. In other words, the buffer we complete was
>> > > > > > indirect, but when we add a new one we cannot allocate an
>> > > > > > indirect table, so we consume more descriptors.
>> > > > > > And then we start the queue and the add will fail.
>> > > > > > I guess we need some kind of API to figure out
>> > > > > > whether the buf we complete was indirect?
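
For illustration, the kind of accounting being discussed could look roughly
like the hypothetical sketch below. None of these names exist in the virtio
API under discussion; the point is only that completing an indirect buffer
frees a single ring slot regardless of how many scatter-gather entries it
carried, so free capacity cannot be tracked in units of skb frags.

    /*
     * Hypothetical sketch, not an existing virtio call: suppose the
     * completion path could report how many ring descriptors the
     * finished buffer really occupied.  The driver could then track
     * free descriptors directly instead of guessing from skb frags.
     */
    #include <stdbool.h>

    struct ring_capacity_example {
        unsigned int free_descs;    /* ring slots known to be free */
    };

    /* Called when a buffer completes; an indirect buffer reports 1. */
    static void example_note_completion(struct ring_capacity_example *c,
                                        unsigned int descs_freed)
    {
        c->free_descs += descs_freed;
    }

    /* Called before queueing a new skb. */
    static bool example_can_add(const struct ring_capacity_example *c,
                                unsigned int nr_frags, bool indirect_ok)
    {
        /* With an indirect table the skb needs one slot; without one it
         * needs a slot per scatter-gather entry (frags plus header). */
        unsigned int need = indirect_ok ? 1 : nr_frags + 1;

        return c->free_descs >= need;
    }
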
>> > >
>> > > I've finally read this thread... I think we need to get more serious
>> > > with our stats gathering to diagnose this kind of performance issue.
>> > >
>> > > This is a start; it should tell us what is actually happening to the
>> > > virtio ring(s) without significant performance impact...
>> > >
>> > > Subject: virtio: CONFIG_VIRTIO_STATS
>> > >
>> > > For performance problems we'd like to know exactly what the ring looks
>> > > like. This patch adds stats indexed by how-full-ring-is; we could extend
>> > > it to also record them by how-used-ring-is if we need.
>> > >
>> > > Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
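
To make the changelog concrete, a minimal userspace sketch of the idea
(counters bucketed by how full the ring is at the moment of each add) could
look like the following. The names and the dump format are invented for
illustration; this is not the actual CONFIG_VIRTIO_STATS patch.

    /*
     * Minimal sketch of the stats-by-ring-fullness idea: one counter per
     * occupancy level, bumped on every add, dumped as a histogram later.
     * Invented names; illustrative only.
     */
    #include <stdio.h>

    #define EXAMPLE_RING_NUM 256    /* example ring size */

    struct vq_stats_example {
        /* adds observed while N descriptors were already in flight */
        unsigned long add_at_depth[EXAMPLE_RING_NUM + 1];
    };

    /* Call on every add with the ring size and the current num_free. */
    static void stats_note_add(struct vq_stats_example *st,
                               unsigned int num, unsigned int num_free)
    {
        st->add_at_depth[num - num_free]++;
    }

    static void stats_dump(const struct vq_stats_example *st, unsigned int num)
    {
        unsigned int i;

        for (i = 0; i <= num; i++)
            if (st->add_at_depth[i])
                printf("depth %3u: %lu adds\n", i, st->add_at_depth[i]);
    }

    int main(void)
    {
        struct vq_stats_example st = { { 0 } };

        stats_note_add(&st, EXAMPLE_RING_NUM, EXAMPLE_RING_NUM);     /* empty ring */
        stats_note_add(&st, EXAMPLE_RING_NUM, EXAMPLE_RING_NUM - 1); /* one in flight */
        stats_dump(&st, EXAMPLE_RING_NUM);
        return 0;
    }
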
>> >
>> > Not sure whether the intent is to merge this. If yes -
>> > would it make sense to use tracing for this instead?
>> > That's what kvm does.
>>
>> The intent wasn't to merge it; I've not used tracepoints before, but maybe we should
>> consider a longer-term monitoring solution?
>>
>> Patch welcome!
>>
>> Cheers,
>> Rusty.
>
> Sure, I'll look into this.
There are several virtio trace events already in QEMU today (see the
trace-events file):
virtqueue_fill(void *vq, const void *elem, unsigned int len, unsigned int idx) "vq %p elem %p len %u idx %u"
virtqueue_flush(void *vq, unsigned int count) "vq %p count %u"
virtqueue_pop(void *vq, void *elem, unsigned int in_num, unsigned int out_num) "vq %p elem %p in_num %u out_num %u"
virtio_queue_notify(void *vdev, int n, void *vq) "vdev %p n %d vq %p"
virtio_irq(void *vq) "vq %p"
virtio_notify(void *vdev, void *vq) "vdev %p vq %p"
These can be used by building QEMU with a suitable tracing backend
like SystemTap (see docs/tracing.txt).
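
Each line in trace-events becomes a generated trace_<name>() helper that the
device model calls at the matching point; roughly as in the simplified sketch
below (the surrounding virtio logic is elided, so this is illustrative rather
than the exact hw/virtio.c source).

    /* Simplified illustration of how a trace-events declaration is used:
     * the generated helper trace_virtqueue_fill() is called from the
     * device model with the arguments listed in the declaration. */
    #include "trace.h"    /* generated trace_*() declarations */

    void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
                        unsigned int len, unsigned int idx)
    {
        trace_virtqueue_fill(vq, elem, len, idx);

        /* ... unmap the element and write it into the used ring ... */
    }
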
Inside the guest I've used dynamic ftrace in the past, although static
tracepoints would be nice.
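
A static tracepoint on the guest side could look something like the sketch
below, in the usual TRACE_EVENT() form. It is hypothetical: the virtio drivers
discussed here define no such events, and the event name and fields are
invented for illustration.

    /* Hypothetical guest-side static tracepoint, e.g. in a new
     * include/trace/events/virtio.h; invented for illustration. */
    #undef TRACE_SYSTEM
    #define TRACE_SYSTEM virtio

    #if !defined(_TRACE_VIRTIO_H) || defined(TRACE_HEADER_MULTI_READ)
    #define _TRACE_VIRTIO_H

    #include <linux/tracepoint.h>

    TRACE_EVENT(virtqueue_add_buf,
        TP_PROTO(const char *name, unsigned int out, unsigned int in,
                 unsigned int num_free),
        TP_ARGS(name, out, in, num_free),
        TP_STRUCT__entry(
            __string(name, name)
            __field(unsigned int, out)
            __field(unsigned int, in)
            __field(unsigned int, num_free)
        ),
        TP_fast_assign(
            __assign_str(name, name);
            __entry->out = out;
            __entry->in = in;
            __entry->num_free = num_free;
        ),
        TP_printk("vq %s out %u in %u num_free %u",
                  __get_str(name), __entry->out, __entry->in,
                  __entry->num_free)
    );

    #endif /* _TRACE_VIRTIO_H */

    /* This part must be outside the include guard. */
    #include <trace/define_trace.h>
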
Stefan