Date: Mon, 20 Feb 2012 20:03:18 -0800
From: Shirley Ma <mashirle@...ibm.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Anthony Liguori <aliguori@...ibm.com>, Tom Lendacky <toml@...ibm.com>,
	netdev@...r.kernel.org, Cristian Viana <vianac@...ibm.com>
Subject: Re: [PATCH 1/2] vhost: allow multiple workers threads

On Tue, 2012-02-21 at 05:21 +0200, Michael S. Tsirkin wrote:
> On Mon, Feb 20, 2012 at 05:04:10PM -0800, Shirley Ma wrote:
> > On Mon, 2012-02-20 at 23:00 +0200, Michael S. Tsirkin wrote:
> > >
> > > The point was really to avoid scheduler overhead
> > > as with tcp, tx and rx tend to run on the same cpu.
> >
> > We have tried different approaches in the past, such as splitting the
> > vhost thread into separate TX and RX threads, and creating per-cpu
> > vhost threads instead of one vhost thread per virtio_net device per VM...
> >
> > We think the per-cpu vhost thread is the better approach based on the
> > data we have collected. It reduces both vhost resource usage and
> > scheduler overhead, does not depend on the host scheduler, and shows
> > less variance. The patch is under testing; we hope to post it soon.
> >
> > Thanks
> > Shirley
>
> Yes, great, this is definitely interesting. I actually started with
> a per-cpu one - it did not perform well but I did not
> figure out why; switching to a single thread fixed it
> and I did not dig into it.

The patch includes per-cpu vhost threads and NUMA-aware vhost scheduling.

It is very interesting. We are collecting performance data for different
workloads (streams, request/response), tracking which vCPU runs on which
CPU, which vhost CPU thread gets scheduled, and which NIC TX/RX queues
are used. Performance differed depending on which vhost scheduling
approach was used for the TX and RX workers. The results look pretty
good: for example, with 60 UDP_RR sessions the results more than doubled
in our lab. However, the TCP_RR results could not catch up with UDP_RR.
Thanks
Shirley