Message-ID: <OFBE9E2C99.648EFB19-ON652577C8.0035FEDD-652577C8.0036AD51@in.ibm.com>
Date: Tue, 26 Oct 2010 15:31:39 +0530
From: Krishna Kumar2 <krkumar2@...ibm.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: anthony@...emonkey.ws, arnd@...db.de, avi@...hat.com,
davem@...emloft.net, eric.dumazet@...il.com, kvm@...r.kernel.org,
netdev@...r.kernel.org, netdev-owner@...r.kernel.org,
rusty@...tcorp.com.au
Subject: Re: [v3 RFC PATCH 0/4] Implement multiqueue virtio-net
> "Michael S. Tsirkin" <mst@...hat.com>
>
> On Tue, Oct 26, 2010 at 02:38:53PM +0530, Krishna Kumar2 wrote:
> > Results for UDP BW tests (unidirectional, sum across
> > 3 iterations, each iteration of 45 seconds, default
> > netperf, vhosts bound to cpus 0-3; no other tuning):
>
> Is binding vhost threads to CPUs really required?
> What happens if we let the scheduler do its job?
Nothing drastic; I remember both BW% and SD% improved a
bit as a result of binding. I started binding the vhost threads
after Avi suggested it in response to my v1 patch (he
suggested some more tunings that I haven't done), and have
been doing only this tuning ever since. This is the relevant
part of his mail about the tuning:
> vhost:
> thread #0: CPU0
> thread #1: CPU1
> thread #2: CPU2
> thread #3: CPU3
I simply bound each thread to CPUs 0-3 instead.
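
For completeness, the binding itself is nothing patch-specific; one way
to do it from userspace is sched_setaffinity(2) on the vhost worker
thread PIDs (taskset -p -c <cpu> <pid> does the same). A rough sketch,
with the PIDs as hypothetical placeholders:

/*
 * Minimal sketch (not part of the patch): pin four vhost worker
 * threads, identified by PID, to CPUs 0-3 with sched_setaffinity(2).
 * The PIDs below are placeholders; in practice they are the
 * "vhost-<qemu-pid>" kernel threads visible in ps.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

static int bind_thread_to_cpu(pid_t tid, int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(tid, sizeof(set), &set);
}

int main(void)
{
	pid_t vhost_pids[4] = { 1234, 1235, 1236, 1237 };	/* placeholders */
	int i;

	for (i = 0; i < 4; i++) {
		/* thread #i -> CPU i, as in Avi's mapping above */
		if (bind_thread_to_cpu(vhost_pids[i], i))
			perror("sched_setaffinity");
	}
	return 0;
}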
Thanks,
- KK