Message-ID: <20121023151442.GA27478@redhat.com>
Date: Tue, 23 Oct 2012 17:14:43 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Sasha Levin <levinsasha928@...il.com>
Cc: Rusty Russell <rusty@...tcorp.com.au>,
Thomas Lendacky <tahm@...ux.vnet.ibm.com>,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, avi@...hat.com, kvm@...r.kernel.org
Subject: Re: [PATCH v2 2/2] virtio-ring: Allocate indirect buffers from cache when possible

On Wed, Sep 12, 2012 at 12:44:47PM +0200, Sasha Levin wrote:
> On 09/12/2012 08:13 AM, Rusty Russell wrote:
> > The real question is now whether we'd want a separate indirect cache for
> > the 3 case (so num above should be a bitmap?), or reuse the same one, or
> > not use it at all?
> >
> > Benchmarking will tell...
>
> Since there are no specific decisions about actual values, I'll just modify the
> code to use cache per-vq instead of per-device.
>
>
> Thanks,
> Sasha
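
(For illustration only: a per-vq cache along the lines Sasha
describes would presumably look something like the sketch below;
the cache and field names are assumptions, not taken from his
patch.)

	/* Hypothetical sketch: one kmem_cache per virtqueue for
	 * indirect descriptor tables, sized for the full ring. */
	vq->indirect_cache = kmem_cache_create("virtio-indirect",
			vq->vring.num * sizeof(struct vring_desc),
			0, 0, NULL);

	/* in vring_add_indirect(), instead of kmalloc()/kfree(): */
	desc = kmem_cache_alloc(vq->indirect_cache, GFP_ATOMIC);
	...
	kmem_cache_free(vq->indirect_cache, desc);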
One wonders whether we can still use the generic slab caches
and improve locality simply by aligning the allocation size.
Something like the patch below - it passed basic testing, but
I haven't measured performance yet.

virtio: align size for indirect buffers

Improve locality for indirect buffer allocations and avoid
false sharing by rounding the allocation size up to a multiple
of the cache line size.

Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 2fc85f2..93e6c3a 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -119,7 +119,8 @@ static int vring_add_indirect(struct vring_virtqueue *vq,
 	unsigned head;
 	int i;
 
-	desc = kmalloc((out + in) * sizeof(struct vring_desc), GFP_ATOMIC);
+	desc = kmalloc(L1_CACHE_ALIGN((out + in) * sizeof(struct vring_desc)),
+		       GFP_ATOMIC);
 	if (!desc)
 		return vq->vring.num;
 
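
(To put numbers on it: L1_CACHE_ALIGN(x) is ALIGN(x, L1_CACHE_BYTES),
i.e. it rounds the size up to the next cache-line multiple.
Assuming 64-byte cache lines and the 16-byte struct vring_desc:

	/* 2 descs:  32 bytes -> L1_CACHE_ALIGN -> 64  (kmalloc bucket was 32)
	 * 3 descs:  48 bytes -> L1_CACHE_ALIGN -> 64  (bucket was 64 anyway)
	 * 5 descs:  80 bytes -> L1_CACHE_ALIGN -> 128 (bucket was 96)
	 */

so each indirect table then occupies whole cache lines from a
power-of-two kmalloc bucket, and typically no longer shares a
line with an unrelated allocation.)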
--