Date:	Wed, 19 Jun 2013 12:39:34 -0700
From:	Jerry Chu <hkchu@...gle.com>
To:	Jason Wang <jasowang@...hat.com>
Cc:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: qlen check in tun.c

Hi Jason,

On Tue, Jun 18, 2013 at 8:29 PM, Jason Wang <jasowang@...hat.com> wrote:
> On 06/19/2013 10:31 AM, Jerry Chu wrote:
>> In tun_net_xmit() the max qlen is computed as
>> dev->tx_queue_len / tun->numqueues. For a multi-queue configuration
>> the resulting per-queue limit may be way too small, forcing one to
>> adjust txqueuelen based on the number of queues created. (Well, the
>> default txqueuelen of 500/TUN_READQ_SIZE already seems too small
>> even for a single queue.)
>
> Hi Jerry:
>
> Do you have test results for this? In any case, tun allows userspace
> to adjust this value to fit its requirements.

Sure, but the default size of 500 is just way too small: the queue
overflows even in a simple single-stream throughput test through Open
vSwitch, due to a CPU scheduler anomaly. In our loaded multi-stream
test even 8192 can't prevent queue overflow, and at 8192 we are
already deep into "buffer bloat" territory.
We haven't figured out an optimal strategy for throughput vs. latency,
but suffice it to say 500 is too small.
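
For concreteness, the check in question looks roughly like this
(paraphrased from memory, so it may not match the tree exactly):

	/* tun_net_xmit(): each queue's backlog is capped at
	 * txqueuelen divided by the number of queues. */
	if (skb_queue_len(&tfile->socket.sk->sk_receive_queue)
	    >= dev->tx_queue_len / tun->numqueues)
		goto drop;

so with the default txqueuelen of 500 and 8 queues, each queue gets
only 500 / 8 = 62 slots before packets start being dropped.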

Jerry

>>
>> Wouldn't it be better to simply use dev->tx_queue_len to cap the qlen of
>> each queue? This also seems to be more consistent with h/w multi-queues.
>
> Makes sense. Michael, any ideas on this?
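
For concreteness, the proposal amounts to dropping the division in the
check sketched above, i.e. roughly (untested):

	-	if (skb_queue_len(&tfile->socket.sk->sk_receive_queue)
	-	    >= dev->tx_queue_len / tun->numqueues)
	+	if (skb_queue_len(&tfile->socket.sk->sk_receive_queue)
	+	    >= dev->tx_queue_len)
	 		goto drop;

so each tx queue gets the full txqueuelen, as with h/w multi-queue.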
>>
>> Also, is there any objection to increasing MAX_TAP_QUEUES from 8 to
>> 16? Yes, it will take up more space in struct tun_struct, but we are
>> hitting the performance limit of 8 queues.
>
> It's not only tun_struct; another issue is sizeof(struct netdev_queue),
> which is currently 320. If we go to 16 queues, the queue array may
> exceed 4096 bytes (16 * 320 = 5120), which leads to a high-order page
> allocation. We need a solution such as a flex array or an array of
> pointers.
>
> Btw, I have draft patches for both; I will post them as RFC.
>
> Thanks
>> Thanks,
>>
>> Jerry
>
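
One more note on the netdev_queue sizing above: 16 tx queues at 320
bytes each is 16 * 320 = 5120 bytes, which no longer fits in a single
4096-byte page. The "array of pointers" idea avoids the high-order
allocation by keeping only a small pointer array contiguous - a minimal
sketch of the idea (illustrative names, not the real netdev code):

	#include <linux/slab.h>

	/* Stand-in for struct netdev_queue (~320 bytes). */
	struct queue {
		char pad[320];
	};

	/* Allocate n queues individually; only the pointer array
	 * (n * sizeof(void *)) needs to be contiguous. */
	static struct queue **alloc_queues(unsigned int n)
	{
		struct queue **q = kcalloc(n, sizeof(*q), GFP_KERNEL);
		unsigned int i;

		if (!q)
			return NULL;
		for (i = 0; i < n; i++) {
			q[i] = kzalloc(sizeof(*q[i]), GFP_KERNEL);
			if (!q[i])
				goto fail;
		}
		return q;
	fail:
		while (i--)
			kfree(q[i]);
		kfree(q);
		return NULL;
	}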