Message-Id: <05374f1f2dbd78bc76cf19991bd6a6991d031689.1543967847.git.sbrivio@redhat.com>
Date: Wed, 5 Dec 2018 01:13:51 +0100
From: Stefano Brivio <sbrivio@...hat.com>
To: "David S. Miller" <davem@...emloft.net>
Cc: Jianlin Shi <jishi@...hat.com>, Hangbin Liu <liuhangbin@...il.com>,
Eric Dumazet <edumazet@...gle.com>,
Stephen Hemminger <stephen@...workplumber.org>,
netdev@...r.kernel.org
Subject: [PATCH net 2/2] neighbour: BUG_ON() writing before skb->head in neigh_hh_output()
While skb_push() makes the kernel panic if the skb headroom is less than
the unaligned hardware header size, neigh_hh_output() will proceed
silently when, because of alignment, we copy more than that.

In the case fixed by the previous patch,
"ipv6: Check available headroom in ip6_xmit() even without options", we
end up in neigh_hh_output() with 14 bytes of headroom and a 14-byte
hardware header, and write 16 bytes, starting 2 bytes before the
allocated buffer.

Panic, instead of silently corrupting adjacent slabs.
Signed-off-by: Stefano Brivio <sbrivio@...hat.com>
---
include/net/neighbour.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/net/neighbour.h b/include/net/neighbour.h
index f58b384aa6c9..95dcba741fd5 100644
--- a/include/net/neighbour.h
+++ b/include/net/neighbour.h
@@ -454,6 +454,7 @@ static inline int neigh_hh_bridge(struct hh_cache *hh, struct sk_buff *skb)
 static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
 {
+	unsigned int hh_alen = 0;
 	unsigned int seq;
 	unsigned int hh_len;

@@ -461,15 +462,18 @@ static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb
 		seq = read_seqbegin(&hh->hh_lock);
 		hh_len = hh->hh_len;
 		if (likely(hh_len <= HH_DATA_MOD)) {
+			hh_alen = HH_DATA_MOD;
 			/* this is inlined by gcc */
 			memcpy(skb->data - HH_DATA_MOD, hh->hh_data, HH_DATA_MOD);
 		} else {
-			unsigned int hh_alen = HH_DATA_ALIGN(hh_len);
-
+			hh_alen = HH_DATA_ALIGN(hh_len);
 			memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
 		}
 	} while (read_seqretry(&hh->hh_lock, seq));
+	/* skb_push() won't panic if we have room for the unaligned size only */
+	BUG_ON(skb_headroom(skb) < hh_alen);
+
 	skb_push(skb, hh_len);
 	return dev_queue_xmit(skb);
 }
--
2.19.2