Message-Id: <200708060951.12408.ossthema@de.ibm.com>
Date:	Mon, 6 Aug 2007 09:51:11 +0200
From:	Jan-Bernd Themann <ossthema@...ibm.com>
To:	Jörn Engel <joern@...fs.org>
Cc:	David Miller <davem@...emloft.net>,
	Christoph Raisch <raisch@...ibm.com>,
	Jan-Bernd Themann <themann@...ibm.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	linux-ppc <linuxppc-dev@...abs.org>,
	Marcus Eder <meder@...ibm.com>,
	Thomas Klein <tklein@...ibm.com>,
	netdev <netdev@...r.kernel.org>,
	Andrew Gallatin <gallatin@...i.com>,
	Jeff Garzik <jeff@...zik.org>,
	Stefan Roscher <stefan.roscher@...ibm.com>
Subject: Re: [PATCH 1/1] lro: Generic Large Receive Offload for TCP traffic

Hi Jörn,

On Friday 03 August 2007 15:41, Jörn Engel wrote:
> On Fri, 3 August 2007 14:41:19 +0200, Jan-Bernd Themann wrote:
> > 
> > This patch provides generic Large Receive Offload (LRO) functionality
> > for IPv4/TCP traffic.
> > 
> > LRO combines received tcp packets to a single larger tcp packet and 
> > passes them then to the network stack in order to increase performance
> > (throughput). The interface supports two modes: Drivers can either pass
> > SKBs or fragment lists to the LRO engine. 
> 
> Maybe this is a stupid question, but why is LRO done at the device
> driver level?
> 
> If it is a universal performance benefit, I would have expected it to be
> done generically, i.e. have all packets moved into network layer pass
> through LRO instead.

The driver seems to be the right place:
-  There is the "page mode" interface that accepts fragment lists instead of
   SKBs and only generates SKBs at the very end (see Andrew Gallatin's
   mails where he describes the advantages of this approach).

-  Some drivers (in particular for 10G NICs, which actually could benefit
   from LRO) have multiple HW receive queues that do some sort of sorting,
   so using one lro_mgr per queue increases the likelihood of being able
   to do efficient LRO. A rough sketch of the per-queue setup follows below.
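
To illustrate (sketch only: the my_* driver bits are made up, the
net_lro_mgr fields and lro_receive_* calls are as in the current patch
version and may still change):

struct my_rx_queue {
	struct net_lro_mgr	lro_mgr;
	struct net_lro_desc	lro_desc[8];	/* concurrent aggregation sessions */
	/* ... receive ring, etc. ... */
};

static void my_setup_lro(struct net_device *netdev, struct my_rx_queue *rxq)
{
	rxq->lro_mgr.dev = netdev;
	rxq->lro_mgr.max_desc = ARRAY_SIZE(rxq->lro_desc);
	rxq->lro_mgr.lro_arr = rxq->lro_desc;
	rxq->lro_mgr.get_skb_header = my_get_skb_header;	/* driver callback */
}

In the receive path of each queue the driver then calls
lro_receive_skb(&rxq->lro_mgr, skb, priv) instead of netif_receive_skb()
("SKB mode"), or lro_receive_frags() in "page mode".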
   

> > +void lro_flush_pkt(struct net_lro_mgr *lro_mgr,
> > +		   struct iphdr *iph, struct tcphdr *tcph);

> In particular this bit looks like it should be driven by a timeout,
> which would be settable via /proc/sys/net/core/lro_timeout or similar.

No, this function is needed for "page mode": some HW provides extra
handling for small packets, where packets are not stored in the preallocated
pages but in extra queues. The driver therefore needs a way to flush old
sessions for such a connection and to handle these packets differently (for
example create an SKB and copy the data into it).
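
The small-packet path then looks roughly like this (sketch only, the
driver details are made up; lro_flush_pkt() has the signature quoted
above and the sketch assumes an IPv4/TCP frame with all checks omitted):

static void my_handle_small_pkt(struct net_device *netdev,
				struct my_rx_queue *rxq, void *data, int len)
{
	struct iphdr *iph = (struct iphdr *)((u8 *)data + ETH_HLEN);
	struct tcphdr *tcph = (struct tcphdr *)((u8 *)iph + iph->ihl * 4);
	struct sk_buff *skb;

	/* flush a possibly open LRO session for this connection so that
	 * packets are not passed up the stack out of order */
	lro_flush_pkt(&rxq->lro_mgr, iph, tcph);

	/* handle the small packet the conventional way: copy it into an SKB */
	skb = netdev_alloc_skb(netdev, len + NET_IP_ALIGN);
	if (!skb)
		return;
	skb_reserve(skb, NET_IP_ALIGN);
	memcpy(skb_put(skb, len), data, len);
	skb->protocol = eth_type_trans(skb, netdev);
	netif_receive_skb(skb);
}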

Timeouts are not used at all. Experiments showed that flushing at the end
of a NAPI poll round is sufficient (see Andrew's test results) and does
not hurt latency too badly.
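
For reference, the end of the poll routine then just does something like
this (again only a sketch with made-up my_* helpers; lro_flush_all() is
meant to be the flush-all entry point of the patch):

static int my_poll(struct net_device *netdev, int *budget)
{
	struct my_adapter *adapter = netdev_priv(netdev);
	int quota = min(*budget, netdev->quota);
	int work_done;

	work_done = my_process_rx_queue(&adapter->rxq, quota);
	*budget -= work_done;
	netdev->quota -= work_done;

	if (work_done < quota) {
		/* poll round complete: flush all open LRO sessions before
		 * leaving polling mode, no timer needed */
		lro_flush_all(&adapter->rxq.lro_mgr);
		netif_rx_complete(netdev);
		my_enable_irq(adapter);
		return 0;
	}
	return 1;	/* more work left, stay in polling mode */
}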

Regards,
Jan-Bernd
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
