Message-ID: <20150728000221-mutt-send-email-mst@redhat.com>
Date: Tue, 28 Jul 2015 00:07:19 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Bandan Das <bsd@...hat.com>
Cc: Eyal Moscovici <EYALMO@...ibm.com>, cgroups@...r.kernel.org,
jasowang@...hat.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
Razya Ladelsky <RAZYA@...ibm.com>
Subject: Re: [RFC PATCH 0/4] Shared vhost design
On Mon, Jul 27, 2015 at 03:48:19PM -0400, Bandan Das wrote:
> Eyal Moscovici <EYALMO@...ibm.com> writes:
>
> > Hi,
> >
> > The test showed the same relative numbers as we got in our internal
> > testing.
>
> Thanks for confirming.
>
> > I was wondering about the configuration with regard to NUMA. From
> > our testing we saw that if the VMs are spread across 2 NUMA nodes, then
> > having a shared vhost thread per node performs better than having the two
> > threads on the same core.
>
> IIUC, this is similar to my test setup and observations, i.e.:
> > 14* 1173.8 1216.9
>
> In this case, there's a shared vhost thread on CPU 14 for NUMA node 0
> and another on CPU 15 for NUMA node 1. Guests running on CPUs 0,2,4,6,8,10,12
> are serviced by vhost-0, which runs on CPU 14, and guests running on CPUs 1,3,5,7,9,11,13
> get serviced by vhost-1 (NUMA node 1). I tried some other configurations, but
> this one gave me the best results (a user-space sketch of this per-node
> pinning follows the quoted text below).
>
>
> Eyal, I think it makes sense to add polling on top of these patches and
> get numbers for them too. Thoughts?
>
> Bandan
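
For illustration, the per-node placement described above boils down to
something like the following minimal user-space sketch. The helper name and
the idea of passing a worker pid and CPU on the command line are assumptions
made for the sketch only; this is not taken from the patches, it just shows
the idea of pinning one shared worker per NUMA node.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/*
 * Illustrative helper: pin the worker identified by 'pid' to one CPU on
 * the NUMA node it should serve (e.g. CPU 14 for node 0, CPU 15 for
 * node 1 in the setup above).
 */
static int pin_worker_to_cpu(pid_t pid, int cpu)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	return sched_setaffinity(pid, sizeof(mask), &mask);
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <worker-pid> <cpu>\n", argv[0]);
		return 1;
	}

	if (pin_worker_to_cpu(atoi(argv[1]), atoi(argv[2]))) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}
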
So simple polling by vhost is kind of OK for some guests, but I think that to
really make it work for a reasonably wide selection of guests/workloads you
need to combine it with 1. polling the NIC - it makes no sense to me to poll
only one side of the equation - and probably 2. polling in the guest.
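
To make the polling idea concrete, here is a minimal sketch of a bounded
poll-then-block loop. work_pending(), process_work() and
wait_for_notification() are placeholder stubs standing in for "is there work
on the virtqueue/NIC" and "handle it / re-arm notifications"; they are not
vhost APIs, and the poll budget is an arbitrary example value.

#include <stdbool.h>
#include <time.h>

/* Placeholder hooks for the sketch; not vhost APIs. */
static bool work_pending(void) { return false; }
static void process_work(void) { }
static void wait_for_notification(void) { }

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/*
 * Busy-poll for up to 'budget_ns' of idle time before re-arming
 * notifications and blocking.  Doing this only on the vhost side still
 * leaves the notification/interrupt cost on the NIC and guest sides,
 * which is why combining all three matters.
 */
static void poll_then_block(long long budget_ns)
{
	long long deadline = now_ns() + budget_ns;

	while (now_ns() < deadline) {
		if (work_pending()) {
			process_work();
			deadline = now_ns() + budget_ns; /* keep polling while busy */
		}
	}
	wait_for_notification();
}

int main(void)
{
	poll_then_block(50000);	/* 50us poll budget, arbitrary */
	return 0;
}
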