Message-Id: <7bf26d03fab8d99cdeea165990e9f2cf054b77d6.1669489329.git.andreyknvl@google.com>
Date: Sat, 26 Nov 2022 20:12:13 +0100
From: andrey.konovalov@...ux.dev
To: Marco Elver <elver@...gle.com>, "David S . Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Cc: Andrey Konovalov <andreyknvl@...il.com>, Alexander Potapenko <glider@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>, kasan-dev@...glegroups.com, Peter Collingbourne <pcc@...gle.com>, Evgenii Stepanov <eugenis@...gle.com>, Florian Mayer <fmayer@...gle.com>, Jann Horn <jannh@...gle.com>, Mark Brand <markbrand@...gle.com>, netdev@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH v2 2/2] net, kasan: sample tagging of skb allocations with HW_TAGS

From: Andrey Konovalov <andreyknvl@...gle.com>

As skb page_alloc allocations tend to be big, tagging and checking all such
allocations with Hardware Tag-Based KASAN introduces a significant slowdown
in testing scenarios that extensively use the network. This is undesirable,
as Hardware Tag-Based KASAN is intended to be used in production and thus
its performance impact is crucial.

Use the __GFP_KASAN_SAMPLE flag for skb page_alloc allocations to make KASAN
use sampling and tag only some of these allocations.

When running a local loopback test on a testing MTE-enabled device in sync
mode, enabling Hardware Tag-Based KASAN introduces a 50% slowdown. Applying
this patch and setting kasan.page_alloc.sampling to a value higher than 1
allows lowering the slowdown. The performance improvement saturates around a
sampling interval value of 10, which lowers the slowdown to 20%. The slowdown
in real-world scenarios will likely be smaller.

Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
---
 net/core/skbuff.c | 4 ++--
 net/core/sock.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 88fa40571d0c..fdea87deee13 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6135,8 +6135,8 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 		while (order) {
 			if (npages >= 1 << order) {
 				page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) |
-						   __GFP_COMP |
-						   __GFP_NOWARN,
+						   __GFP_COMP | __GFP_NOWARN |
+						   __GFP_KASAN_SAMPLE,
 						   order);
 				if (page)
 					goto fill_page;
diff --git a/net/core/sock.c b/net/core/sock.c
index a3ba0358c77c..f7d20070ad88 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2842,7 +2842,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_KASAN_SAMPLE,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
-- 
2.25.1
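
For context, the sampling mechanism this patch opts into is introduced by patch 1/2
of the series (not shown here): allocations flagged with __GFP_KASAN_SAMPLE are
tagged only once every kasan.page_alloc.sampling allocations instead of every time.
Below is a minimal userspace sketch of that counter-based idea, assuming a simple
modulo scheme; the identifiers (sampling_interval, sampling_counter,
kasan_sample_page_alloc) are hypothetical and are not the kernel's actual ones.

/*
 * Illustrative sketch only: tag one out of every sampling_interval
 * flagged allocations, rather than all of them.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned long sampling_interval = 10;	/* e.g. kasan.page_alloc.sampling=10 */
static unsigned long sampling_counter;

/* Decide whether this sampled page_alloc allocation should be tagged. */
static bool kasan_sample_page_alloc(void)
{
	if (sampling_interval <= 1)
		return true;	/* interval of 1: tag every allocation */
	return sampling_counter++ % sampling_interval == 0;
}

int main(void)
{
	unsigned long tagged = 0, total = 100;

	for (unsigned long i = 0; i < total; i++)
		if (kasan_sample_page_alloc())
			tagged++;

	/* With an interval of 10, roughly 1 in 10 allocations gets tagged. */
	printf("tagged %lu of %lu allocations\n", tagged, total);
	return 0;
}

In the kernel itself the interval would be chosen via the kasan.page_alloc.sampling
parameter mentioned in the commit message; untagged allocations skip the MTE tagging
and checking cost, which is where the reported slowdown reduction comes from.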