Date:	Wed, 16 Sep 2009 11:04:49 -0700
From:	"Philip A. Prindeville" <philipp_subx@...fish-solutions.com>
To:	Karl Hiramoto <karl@...amoto.org>
CC:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	linux-atm-general@...ts.sourceforge.net
Subject: Re: [Linux-ATM-General] [PATCH] atm/br2684: netif_stop_queue() when
 atm device busy and netif_wake_queue() when we can send packets again.

On 09/15/2009 07:57 AM, Karl Hiramoto wrote:
> Karl Hiramoto wrote:
>> David Miller wrote:
>>> From: Karl Hiramoto <karl@...amoto.org>
>>> Date: Thu, 10 Sep 2009 23:30:44 +0200
>>>
>>>> I'm not really sure if, or how many, packets the upper layers buffer.
>>>
>>> This is determined by ->tx_queue_len, so whatever value is being
>>> set for ATM network devices is what the core will use for backlog
>>> limiting while the device's TX queue is stopped.
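For the archives, the stop half of that pattern looks roughly like the
following (an untested sketch; the foo_* names and the ring bookkeeping
are invented for illustration, not br2684's actual code):

#include <linux/netdevice.h>

/* Hypothetical driver TX path: stop the queue once the TX ring is
 * full, so the core backlogs up to dev->tx_queue_len packets in the
 * qdisc instead of pushing them at a device that can't take them. */
struct foo_priv {
	unsigned int tx_free;		/* free TX ring slots */
};

static int foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* ... hand skb to the hardware here ... */

	if (--priv->tx_free == 0)
		netif_stop_queue(dev);

	return NETDEV_TX_OK;
}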
>> I tried varying tx_queue_len by 10x, 100x, and 1000x, but it didn't
>> seem to help much.  Whenever the ATM device called netif_wake_queue(),
>> the driver still seemed to starve for packets and still took time to
>> get going again.
>>
>> It seems like when the driver calls netif_wake_queue(), its TX hardware
>> queue is nearly full, but it has space to accept new packets.  The TX
>> hardware queue then has time to empty, the device starves for packets
>> (goes idle), and finally a packet comes in from the upper networking
>> layers.  I'm not really sure at the moment where the problem lies
>> that's causing my maximum throughput to drop.
>>
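That "wake when nearly full, then drain dry" cycle is why many drivers
wake with some headroom.  A sketch of the completion side of the same
hypothetical foo_* driver, where TX_WAKE_THRESHOLD is an invented
tunable, not something br2684 defines:

#define TX_WAKE_THRESHOLD 16	/* invented value, tune per device */

static void foo_tx_complete(struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* ... reclaim finished descriptors, bumping priv->tx_free ... */

	if (netif_queue_stopped(dev) && priv->tx_free >= TX_WAKE_THRESHOLD)
		netif_wake_queue(dev);
}

Waking early like this gives the qdisc time to refill the ring before
the hardware goes idle.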
>> I did try changing sk_sndbuf to 256K, but that didn't seem to help either.
>>
>> --
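For reference, sk_sndbuf is what SO_SNDBUF sets from userspace; the
kernel doubles the value you pass and caps it at net.core.wmem_max, so
a 256K request only sticks if wmem_max allows it.  A minimal sketch:

#include <sys/socket.h>

/* Ask for a bigger socket send buffer; the kernel stores 2 * bytes,
 * capped at net.core.wmem_max.  Returns 0 on success. */
static int set_sndbuf(int fd, int bytes)
{
	return setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
}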
> Actually I think I spoke too soon.  After tuning the TCP parameters
> and txqueuelen on all the machines (server, router, and client), it
> seems my performance came back.
>
> --
> Karl
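If you're scripting that txqueuelen tuning, it can also be set
programmatically; this is roughly what "ifconfig <dev> txqueuelen <n>"
does under the hood (a sketch, error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/sockios.h>	/* SIOCSIFTXQLEN */

/* Set dev->tx_queue_len on the named interface; fd is any open
 * socket to issue the ioctl on. */
static int set_txqueuelen(int fd, const char *ifname, int qlen)
{
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_qlen = qlen;
	return ioctl(fd, SIOCSIFTXQLEN, &ifr);
}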

So what size are you currently using?

An out-of-the-box 2.6.27.29 build seems to set it to 1000.

-Philip

