Date:	Sat, 07 Feb 2009 00:10:41 -0800 (PST)
From:	David Miller <davem@...emloft.net>
To:	blaschka@...ux.vnet.ibm.com
Cc:	netdev@...r.kernel.org
Subject: Re: TX pre-headers...

From: Frank Blaschka <blaschka@...ux.vnet.ibm.com>
Date: Fri, 06 Feb 2009 13:02:13 +0100

> Absolutely yes, this would help the s/390 qeth drivers too

Well, I did some research and it seems all of the cases we could
actually solve with such a scheme need at most 32 bytes.

We already ensure 16 bytes, via NET_SKB_PAD.

So instead of all of this complex "who has the largest TX header size
and what is it" code, we can simply increase NET_SKB_PAD to 32.

You still need that headroom check there, simply because tunneling and
other device stacking situations can cause the headroom to be depleted
before your device sees the packet.
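To illustrate why that check has to stay even after the pad grows to 32
bytes, here is a small userspace sketch (my own toy model, not kernel
code: `toy_skb`, `toy_alloc_skb()`, and `toy_push()` are simplified
stand-ins for `sk_buff`, `dev_alloc_skb()`/`skb_reserve()`, and
`skb_push()`/`pskb_expand_head()`). Each stacked header consumes
headroom, and once the reserve is gone a copying reallocation is the
only option:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NET_SKB_PAD 32	/* the proposed new default reserve */

/* Toy model of an sk_buff: packet data sits after reserved headroom
 * and headers are prepended by moving the data pointer backward. */
struct toy_skb {
	unsigned char *head;	/* start of the allocated buffer */
	unsigned char *data;	/* start of packet data */
	size_t len;		/* bytes of packet data */
	size_t size;		/* total buffer size */
};

static struct toy_skb *toy_alloc_skb(size_t payload)
{
	struct toy_skb *skb = malloc(sizeof(*skb));

	skb->size = NET_SKB_PAD + payload;
	skb->head = malloc(skb->size);
	skb->data = skb->head + NET_SKB_PAD; /* like skb_reserve() */
	skb->len = 0;
	return skb;
}

static size_t toy_headroom(const struct toy_skb *skb)
{
	return (size_t)(skb->data - skb->head);
}

/* Prepend n bytes of header.  Returns 1 if the buffer had to be
 * reallocated -- the expensive path NET_SKB_PAD tries to avoid. */
static int toy_push(struct toy_skb *skb, size_t n)
{
	int reallocated = 0;

	if (toy_headroom(skb) < n) {
		/* Headroom depleted: grow and copy, roughly what
		 * pskb_expand_head() does for a real skb. */
		size_t new_size = skb->size + (n - toy_headroom(skb));
		unsigned char *nh = malloc(new_size);

		memcpy(nh + n, skb->data, skb->len);
		free(skb->head);
		skb->head = nh;
		skb->data = nh + n;
		skb->size = new_size;
		reallocated = 1;
	}
	skb->data -= n;
	skb->len += n;
	return reallocated;
}
```

Pushing a 14-byte Ethernet header into a fresh buffer fits inside the
32-byte pad, but a tunnel prepending another 20 bytes on top of it
depletes the reserve and forces the copy -- exactly the device-stacking
case described above.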

Any objections? :-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 08670d0..5eba400 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1287,7 +1287,7 @@ static inline int skb_network_offset(const struct sk_buff *skb)
  * The networking layer reserves some headroom in skb data (via
  * dev_alloc_skb). This is used to avoid having to reallocate skb data when
  * the header has to grow. In the default case, if the header has to grow
- * 16 bytes or less we avoid the reallocation.
+ * 32 bytes or less we avoid the reallocation.
  *
  * Unfortunately this headroom changes the DMA alignment of the resulting
  * network packet. As for NET_IP_ALIGN, this unaligned DMA is expensive
@@ -1295,11 +1295,11 @@ static inline int skb_network_offset(const struct sk_buff *skb)
  * perhaps setting it to a cacheline in size (since that will maintain
  * cacheline alignment of the DMA). It must be a power of 2.
  *
- * Various parts of the networking layer expect at least 16 bytes of
+ * Various parts of the networking layer expect at least 32 bytes of
  * headroom, you should not reduce this.
  */
 #ifndef NET_SKB_PAD
-#define NET_SKB_PAD	16
+#define NET_SKB_PAD	32
 #endif
 
 extern int ___pskb_trim(struct sk_buff *skb, unsigned int len);
--
