Date: Wed, 28 Jun 2023 20:11:50 +0800
From: Liang Chen <liangchen.linux@...il.com>
To: ilias.apalodimas@...aro.org,
	hawk@...nel.org
Cc: kuba@...nel.org,
	davem@...emloft.net,
	edumazet@...gle.com,
	pabeni@...hat.com,
	linyunsheng@...wei.com,
	netdev@...r.kernel.org,
	liangchen.linux@...il.com
Subject: [PATCH net-next] skbuff: Optimize SKB coalescing for page pool case

To address the issues encountered with commit 1effe8ca4e34
("skbuff: fix coalescing for page_pool fragment recycling"), the
following combination of conditions was excluded from skb coalescing:

from->pp_recycle = 1
from->cloned = 1
to->pp_recycle = 1
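
For reference, the check that enforced this exclusion in
skb_try_coalesce() looked like this (these are the same lines the diff
below relaxes):

	if (to->pp_recycle != from->pp_recycle ||
	    (from->pp_recycle && skb_cloned(from)))
		return false;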

However, in page pool environments, this combination can be quite
common. In scenarios with a high volume of small packets, it can
significantly reduce the coalescing success rate. For example, for
256-byte packets, the measured coalescing success rates are as
follows:

Without page pool: 70%
With page pool: 13%

Consequently, this has an impact on performance:

Without page pool: 2.64 Gbits/sec
With page pool: 2.41 Gbits/sec

Therefore, it seems worthwhile to optimize this scenario and enable
coalescing for this particular combination. To achieve this, we need to
ensure that the page pool fragment count (pp_frag_count) of the "from"
SKB's pages is incremented correctly, as sketched below.
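
As a sketch, the reference flow becomes the following.
page_pool_page_ref() is the helper added by this patch; the release
side via page_pool_defrag_page() reflects my reading of the existing
recycling path, so treat the exact release call chain as an assumption:

	/* take side: one fragment reference per "from" frag */
	for (i = 0; i < from_shinfo->nr_frags; i++)
		page_pool_page_ref(skb_frag_page(&from_shinfo->frags[i]));

	/* release side (existing page pool code): each SKB dropping a
	 * frag releases one fragment reference, and the page returns
	 * to the pool only when page_pool_defrag_page() sees the
	 * count reach zero.
	 */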

Following this optimization, the coalescing success rate measured in
our environment improves as follows:

With page pool: 60%

This success rate approaches the rate achieved without page pool, and
performance improves as well:

With page pool: 2.61 Gbits/sec

Below is the performance comparison for small packets before and after
this optimization. We observe no impact on packets larger than 4K.

without page pool fragment (PP_FLAG_PAGE_FRAG)
packet size     before      after
(bytes)         (Gbits/sec) (Gbits/sec)
128             1.28        1.37
256             2.41        2.61
512             4.56        4.87
1024            7.69        8.21
2048            12.85       13.41

with page pool fragment (PP_FLAG_PAGE_FRAG)
packet size     before      after
(bytes)         (Gbits/sec) (Gbits/sec)
128             1.28        1.37
256             2.35        2.62
512             4.37        4.86
1024            7.62        8.41
2048            13.07       13.53

with page pool fragment (PP_FLAG_PAGE_FRAG) and high order (order = 3)
packet size     before      after
(bytes)         (Gbits/sec) (Gbits/sec)
128             1.28        1.41
256             2.41        2.74
512             4.57        5.25
1024            8.61        9.71
2048            14.81       16.78
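
For readers unfamiliar with the fragment mode referenced above, here is
a minimal, illustrative sketch of a driver creating a fragmented page
pool. page_pool_create(), page_pool_alloc_frag() and PP_FLAG_PAGE_FRAG
are the existing API; the parameter values are made-up examples:

	#include <net/page_pool.h>

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_PAGE_FRAG,	/* multiple frags per page */
		.order		= 0,			/* 3 for the high-order rows */
		.pool_size	= 256,			/* example value */
		.nid		= NUMA_NO_NODE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	unsigned int offset;
	/* Carve a 256-byte fragment out of a pooled page; each
	 * successful call holds one pp_frag_count reference on the
	 * underlying page.
	 */
	struct page *page = page_pool_alloc_frag(pool, &offset, 256,
						 GFP_ATOMIC);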

Signed-off-by: Liang Chen <liangchen.linux@...il.com>
---
 include/net/page_pool.h | 21 +++++++++++++++++++++
 net/core/skbuff.c       | 11 +++++++----
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 126f9e294389..05e5d8ead63b 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -399,4 +399,25 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
 		page_pool_update_nid(pool, new_nid);
 }
 
+static inline bool page_pool_is_pp_page(struct page *page)
+{
+	return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
+}
+
+static inline bool page_pool_is_pp_page_frag(struct page *page)
+{
+	return !!(page->pp->p.flags & PP_FLAG_PAGE_FRAG);
+}
+
+static inline void page_pool_page_ref(struct page *page)
+{
+	struct page *head_page = compound_head(page);
+
+	if (page_pool_is_pp_page(head_page) &&
+			page_pool_is_pp_page_frag(head_page))
+		atomic_long_inc(&head_page->pp_frag_count);
+	else
+		get_page(head_page);
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6c5915efbc17..9806b091f0f6 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5666,8 +5666,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	 * !@to->pp_recycle but its tricky (due to potential race with
 	 * the clone disappearing) and rare, so not worth dealing with.
 	 */
-	if (to->pp_recycle != from->pp_recycle ||
-	    (from->pp_recycle && skb_cloned(from)))
+	if (to->pp_recycle != from->pp_recycle)
 		return false;
 
 	if (len <= skb_tailroom(to)) {
@@ -5724,8 +5723,12 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	/* if the skb is not cloned this does nothing
 	 * since we set nr_frags to 0.
 	 */
-	for (i = 0; i < from_shinfo->nr_frags; i++)
-		__skb_frag_ref(&from_shinfo->frags[i]);
+	if (from->pp_recycle)
+		for (i = 0; i < from_shinfo->nr_frags; i++)
+			page_pool_page_ref(skb_frag_page(&from_shinfo->frags[i]));
+	else
+		for (i = 0; i < from_shinfo->nr_frags; i++)
+			__skb_frag_ref(&from_shinfo->frags[i]);
 
 	to->truesize += delta;
 	to->len += len;
-- 
2.31.1

