Message-ID: <20120705233816.3ec0b827@nehalam.linuxnetplumber.net>
Date: Thu, 5 Jul 2012 23:38:16 -0700
From: Stephen Hemminger <shemminger@...tta.com>
To: Jason Wang <jasowang@...hat.com>
Cc: Sasha Levin <levinsasha928@...il.com>, krkumar2@...ibm.com,
habanero@...ux.vnet.ibm.com, mashirle@...ibm.com,
kvm@...r.kernel.org, mst@...hat.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, edumazet@...gle.com,
tahm@...ux.vnet.ibm.com, jwhan@...ewood.snu.ac.kr,
davem@...emloft.net, sri@...ibm.com
Subject: Re: [net-next RFC V5 5/5] virtio_net: support negotiating the
number of queues through ctrl vq
On Fri, 06 Jul 2012 11:20:06 +0800
Jason Wang <jasowang@...hat.com> wrote:
> On 07/05/2012 08:51 PM, Sasha Levin wrote:
> > On Thu, 2012-07-05 at 18:29 +0800, Jason Wang wrote:
> >> @@ -1387,6 +1404,10 @@ static int virtnet_probe(struct virtio_device *vdev)
> >> if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
> >> vi->has_cvq = true;
> >>
> >> + /* Use single tx/rx queue pair as default */
> >> + vi->num_queue_pairs = 1;
> >> + vi->total_queue_pairs = num_queue_pairs;
> > The code is using this "default" even if the amount of queue pairs it
> > wants was specified during initialization. This basically limits any
> > device to use 1 pair when starting up.
> >
>
> Yes, currently the virtio-net driver uses a single txq/rxq pair by 
> default, since multiqueue may not outperform a single queue in all 
> kinds of workloads. So it's better to keep single queue as the default 
> and let the user enable multiqueue via ethtool -L.
>
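[As a reference for readers: enabling multiqueue from userspace with ethtool would look roughly like the following. The interface name eth0 and the channel count 4 are placeholders; the actual maximum depends on what the device negotiated.]

```shell
# Show the channel (queue) configuration the device supports
# and what is currently in use.
ethtool -l eth0

# Ask the driver to use 4 combined tx/rx queue pairs.
ethtool -L eth0 combined 4
```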
I would prefer that the driver sized the number of queues based on the
number of online CPUs; that is what real hardware does. What kind of
workload are you running? If it is some DBMS benchmark, then maybe the
issue is that some CPUs need to be reserved.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/