Message-ID: <51C12592.6050503@redhat.com>
Date: Wed, 19 Jun 2013 11:29:22 +0800
From: Jason Wang <jasowang@...hat.com>
To: Jerry Chu <hkchu@...gle.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: qlen check in tun.c
On 06/19/2013 10:31 AM, Jerry Chu wrote:
> In tun_net_xmit() the max qlen is computed as
> dev->tx_queue_len / tun->numqueues. For multi-queue configuration the
> latter may be way too small, forcing one to adjust txqueuelen based
> on number of queues created. (Well the default txqueuelen of
> 500/TUN_READQ_SIZE already seems too small even for single queue.)
Hi Jerry:
Do you have any test results for this? In any case, tun allows userspace
to adjust this value based on its requirements.
>
> Wouldn't it be better to simply use dev->tx_queue_len to cap the qlen of
> each queue? This also seems to be more consistent with h/w multi-queues.
Makes sense. Michael, any ideas on this?
>
> Also is there any objection to increase MAX_TAP_QUEUES from 8 to 16?
> Yes it will take up more space in struct tun_struct. But we are
> hitting the perf limit of 8 queues.
It's not only tun_struct; another issue is sizeof(struct netdev_queue),
which is currently 320 bytes. With 16 queues the array may exceed 4096
bytes, which leads to a high-order page allocation. This needs a
solution such as a flex array or an array of pointers.
Btw, I have draft patches for both; will post them as RFC.
Thanks
> Thanks,
>
> Jerry