Date:	Fri, 20 Jun 2008 14:52:33 -0400
From:	Bill Fink <billfink@...dspring.com>
To:	Krishna Kumar2 <krkumar2@...ibm.com>
Cc:	David Miller <davem@...emloft.net>, mchan@...adcom.com,
	netdev@...r.kernel.org, vinay@...ux.vnet.ibm.com
Subject: Re: [PATCH 3/3]: tg3: Manage TX backlog in-driver.

I have a general question about this new tx queueing model, which
I haven't seen discussed to this point.

Although such events are hopefully infrequent, if the tx queue is kept in
the driver rather than in the network midlayer, what are the ramifications
of a routing change that redirects output to a new interface?  Consider,
for example, that on our 10-GigE interfaces we typically set txqueuelen
to 10000.

Similarly, what are the ramifications for the bonding driver (in either a
load-balancing or active/backup scenario) when one of the underlying
interfaces fails (again, hopefully a rare event)?
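
To make the question concrete, here is a minimal sketch of the kind of
cleanup a driver-owned backlog would seem to require when its packets can
no longer go out on that device.  This is purely illustrative and not from
the patch; tg3_flush_tx_backlog() and the tx_backlog field are made-up
names.

        /*
         * Illustration only; not part of the posted patch.  Assumes the
         * driver keeps its backlog in a struct sk_buff_head and that the
         * caller holds whatever TX lock protects it.
         */
        static void tg3_flush_tx_backlog(struct tg3 *tp)
        {
                struct sk_buff *skb;

                /* Drop everything still parked in the driver's backlog. */
                while ((skb = __skb_dequeue(&tp->tx_backlog)) != NULL) {
                        tp->dev->stats.tx_dropped++;
                        dev_kfree_skb_any(skb);
                }
        }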

I'm just trying to get a better understanding of any possible impacts of
the new model, recognizing that, as with most significant changes, there
will be both positive and negative effects, with the negative hopefully
kept to the minimum possible.

						-Thanks

						-Bill



On Fri, 20 Jun 2008, Krishna Kumar2 wrote:

> Great, and this looks cool for batching too :)
> 
> Couple of comments:
> 
> 1. The modified driver has a backlog of up to tx_queue_len skbs,
>     compared to the unmodified driver, which had tx_queue_len + q->limit.
>     Won't this result in a performance hit, since packet drops will
>     take place earlier?
> 
> 2. __tg3_xmit_backlog() should check that it doesn't run for too long.
>     This also means calling netif_schedule() if tx_backlog is still
>     non-empty when it bails out, so packets don't rot in the backlog
>     queue (see the sketch below).
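
Something along the lines KK describes, as a rough sketch.  The work
budget, the __tg3_queue_to_ring() helper, and the struct fields are
assumptions for illustration, not taken from the actual patch; only
netif_schedule() is named in the discussion above.

        /*
         * Illustrative only.  Drain at most 'budget' skbs from the
         * driver's private backlog into the hardware TX ring, then
         * reschedule if anything is left, instead of looping until
         * the backlog is empty.
         */
        static void __tg3_xmit_backlog(struct tg3 *tp)
        {
                int budget = 64;                /* assumed work limit */
                struct sk_buff *skb;

                while (budget-- > 0 &&
                       (skb = __skb_dequeue(&tp->tx_backlog)) != NULL) {
                        if (__tg3_queue_to_ring(tp, skb)) {     /* hypothetical */
                                /* Ring full: put it back and stop for now. */
                                __skb_queue_head(&tp->tx_backlog, skb);
                                break;
                        }
                }

                /* Don't let packets rot: have the stack call us again soon. */
                if (!skb_queue_empty(&tp->tx_backlog))
                        netif_schedule(tp->dev);
        }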
> 
> Thanks,
> 
> - KK
> 
> David Miller <davem@...emloft.net> wrote on 06/19/2008 04:40:24 PM:
> 
> >
> > tg3: Manage TX backlog in-driver.
> >
> > We no longer stop and wake the generic device queue.
> > Instead we manage the backlog inside of the driver,
> > and the mid-layer thinks that the device can always
> > receive packets.
> >
> > Signed-off-by: David S. Miller <davem@...emloft.net>
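
For readers following along, the model described above looks roughly like
the sketch below.  This is not the actual patch; tg3_tx_ring_has_room()
and the tx_backlog field are invented for illustration.  The point is only
that the ring-full case no longer stops the generic queue: the skb is
parked in a driver-private backlog and the midlayer is told the transmit
succeeded.

        /* Illustration of the described model, not the actual tg3 change. */
        static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
        {
                struct tg3 *tp = netdev_priv(dev);

                if (!tg3_tx_ring_has_room(tp)) {        /* hypothetical helper */
                        /*
                         * Old model: netif_stop_queue(dev) + NETDEV_TX_BUSY.
                         * New model: keep the skb in a driver-private backlog
                         * (with appropriate locking in a real driver) and tell
                         * the midlayer everything is fine.
                         */
                        __skb_queue_tail(&tp->tx_backlog, skb);
                        return NETDEV_TX_OK;
                }

                /* ... normal path: map the skb and place it on the TX ring ... */
                return NETDEV_TX_OK;
        }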
