Message-Id: <20210227175114.28645-1-paskripkin@gmail.com>
Date: Sat, 27 Feb 2021 20:51:14 +0300
From: Pavel Skripkin <paskripkin@...il.com>
To: alobakin@...me
Cc: davem@...emloft.net, linmiaohe@...wei.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
Pavel Skripkin <paskripkin@...il.com>,
syzbot+80dccaee7c6630fa9dcf@...kaller.appspotmail.com
Subject: [PATCH v3] net/core/skbuff: fix passing wrong size to __alloc_skb

syzbot found a WARNING in __alloc_pages_nodemask() [1] which is triggered
when order >= MAX_ORDER. It is caused by __netdev_alloc_skb(), which does
not validate len after adding NET_SKB_PAD: if the resulting size is larger
than KMALLOC_MAX_SIZE, kmalloc_large_node() computes an order >= MAX_ORDER
and passes it down to __alloc_pages_nodemask(). The same can happen in
__napi_alloc_skb().

static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
{
	struct page *page;
	void *ptr = NULL;
	unsigned int order = get_order(size);
	...
	page = alloc_pages_node(node, flags, order);
	...
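
For illustration only, the arithmetic can be sketched in plain user-space C
(not kernel code; PAGE_SHIFT, MAX_ORDER and the simplified get_order() below
are assumptions modelled on a common x86_64 SLUB configuration, where
KMALLOC_MAX_SIZE works out to 4 MiB):

/* User-space sketch; constants are assumed, not taken from a real config. */
#include <stdio.h>

#define PAGE_SHIFT		12
#define MAX_ORDER		11
#define KMALLOC_MAX_SIZE	(1UL << (MAX_ORDER + PAGE_SHIFT - 1))	/* 4 MiB */

/* Simplified get_order(): smallest order such that (1 << order) pages cover size. */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		size >>= 1;
		order++;
	}
	return order;
}

int main(void)
{
	unsigned long len = KMALLOC_MAX_SIZE + 1;	/* e.g. an oversized qrtr packet plus NET_SKB_PAD */

	/* Prints "order 11 (MAX_ORDER 11)": order >= MAX_ORDER, which is
	 * exactly the condition the warning below fires on. */
	printf("order %u (MAX_ORDER %d)\n", get_order(len), MAX_ORDER);
	return 0;
}
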
[1] WARNING in __alloc_pages_nodemask+0x5f8/0x730 mm/page_alloc.c:5014
Call Trace:
__alloc_pages include/linux/gfp.h:511 [inline]
__alloc_pages_node include/linux/gfp.h:524 [inline]
alloc_pages_node include/linux/gfp.h:538 [inline]
kmalloc_large_node+0x60/0x110 mm/slub.c:3999
__kmalloc_node_track_caller+0x319/0x3f0 mm/slub.c:4496
__kmalloc_reserve net/core/skbuff.c:150 [inline]
__alloc_skb+0x4e4/0x5a0 net/core/skbuff.c:210
__netdev_alloc_skb+0x70/0x400 net/core/skbuff.c:446
netdev_alloc_skb include/linux/skbuff.h:2832 [inline]
qrtr_endpoint_post+0x84/0x11b0 net/qrtr/qrtr.c:442
qrtr_tun_write_iter+0x11f/0x1a0 net/qrtr/tun.c:98
call_write_iter include/linux/fs.h:1901 [inline]
new_sync_write+0x426/0x650 fs/read_write.c:518
vfs_write+0x791/0xa30 fs/read_write.c:605
ksys_write+0x12d/0x250 fs/read_write.c:658
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported-by: syzbot+80dccaee7c6630fa9dcf@...kaller.appspotmail.com
Signed-off-by: Pavel Skripkin <paskripkin@...il.com>
---
Changes from v3:
* Removed Change-Id and extra tabs in net/core/skbuff.c

Changes from v2:
* Added length check to __napi_alloc_skb
* Added unlikely() in checks

Changes from v1:
* Added length check to __netdev_alloc_skb
---
net/core/skbuff.c | 6 ++++++
1 file changed, 6 insertions(+)
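
For context (not part of the patch), a hypothetical caller sketch along the
lines of the qrtr path in the trace above; dev and len are placeholders, and
the comment describes how the added check surfaces to callers:

	skb = netdev_alloc_skb(dev, len);
	if (!skb)		/* now also taken when len > KMALLOC_MAX_SIZE */
		return -ENOMEM;

Returning NULL keeps the oversized request on the existing error path instead
of letting an impossible order reach the page allocator.
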
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 785daff48030..ec7ba8728b61 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -443,6 +443,9 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 	if (len <= SKB_WITH_OVERHEAD(1024) ||
 	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+		if (unlikely(len > KMALLOC_MAX_SIZE))
+			return NULL;
+
 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
 		if (!skb)
 			goto skb_fail;
@@ -517,6 +520,9 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 	if (len <= SKB_WITH_OVERHEAD(1024) ||
 	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+		if (unlikely(len > KMALLOC_MAX_SIZE))
+			return NULL;
+
 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
 		if (!skb)
 			goto skb_fail;
--
2.25.1