Message-ID: <lsq.1511315893.910420047@decadent.org.uk>
Date: Wed, 22 Nov 2017 01:58:13 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Jesper Dangaard Brouer" <brouer@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"Florian Westphal" <fw@...len.de>
Subject: [PATCH 3.16 080/133] Revert "net: use lib/percpu_counter API for
fragmentation mem accounting"

3.16.51-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Jesper Dangaard Brouer <brouer@...hat.com>

commit fb452a1aa3fd4034d7999e309c5466ff2d7005aa upstream.

This reverts commit 6d7b857d541ecd1d9bd997c97242d4ef94b19de2.

There is a bug in the fragmentation code's use of the percpu_counter
API that can cause issues on systems with many CPUs.

frag_mem_limit() just reads the global counter (fbc->count) without
considering that other CPUs can each hold up to the batch size (130K)
that hasn't been subtracted yet. Due to the 3 MBytes lower thresh
limit, this becomes dangerous at >= 24 CPUs (3*1024*1024/130000 = 24).
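
To make the arithmetic concrete, here is a small standalone C program
(illustration only, not part of this patch; NR_CPUS, BATCH and the
fold logic are simplified stand-ins for the percpu_counter internals)
showing how batched per-CPU accounting lets the global count lag far
behind the true total:

#include <stdio.h>

#define NR_CPUS    32
#define BATCH      130000
#define LOW_THRESH (3 * 1024 * 1024)	/* the 3 MBytes lower thresh */

static long global_count;		/* what percpu_counter_read() sees */
static long local_delta[NR_CPUS];	/* per-CPU deltas not yet folded in */

/* Simplified model of __percpu_counter_add(): fold the local delta
 * into the global count only once it reaches the batch size. */
static void batched_add(int cpu, long amount)
{
	local_delta[cpu] += amount;
	if (local_delta[cpu] >= BATCH || local_delta[cpu] <= -BATCH) {
		global_count += local_delta[cpu];
		local_delta[cpu] = 0;
	}
}

int main(void)
{
	long true_total = 0;
	int cpu;

	/* Every CPU sits just below the batch size, so nothing folds. */
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		batched_add(cpu, BATCH - 1);
		true_total += BATCH - 1;
	}

	printf("global count (what frag_mem_limit saw): %ld\n", global_count);
	printf("true memory in use:                     %ld\n", true_total);
	printf("lower thresh:                           %d\n", LOW_THRESH);
	return 0;
}

With 32 CPUs each holding one byte less than the batch, the global
count still reads 0 while over 4 MBytes are actually in use, well past
the 3 MBytes lower thresh.
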
The correct API usage would be to use __percpu_counter_compare(),
which does the right thing: it takes the number of (online) CPUs and
the batch size into account, and calls the exact __percpu_counter_sum()
when that is needed.
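
For illustration, a threshold check built on that API could look like
the sketch below (a hypothetical helper, not taken from this patch or
from the tree; frag_percpu_counter_batch is the batch size mentioned
above):

static inline bool frag_mem_over_limit(struct netns_frags *nf, int thresh)
{
	/* __percpu_counter_compare() returns > 0 when the counter is
	 * above thresh; it falls back to the locked, exact
	 * __percpu_counter_sum() whenever the global count is within
	 * num_online_cpus() * batch of thresh. */
	return __percpu_counter_compare(&nf->mem, thresh,
					frag_percpu_counter_batch) > 0;
}

Note that this fallback is exactly what reason 1 below objects to on
large machines.
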
We choose to revert the use of the lib/percpu_counter API for frag
memory accounting for several reasons:

1) On systems with more than 24 CPUs, the heavier fully locked
__percpu_counter_sum() would then always be invoked, which will be
more expensive than the atomic_t that is reverted to. Given that
systems with more than 24 CPUs are becoming common, this doesn't seem
like a good option. To mitigate this, the batch size could be
decreased and the thresh increased.

2) The add_frag_mem_limit+sub_frag_mem_limit pairs happen on the RX
CPU, before SKBs are pushed into sockets on remote CPUs. Given that
NICs can only hash on the L2 part of the IP header, the NIC RX queues
in use will likely be limited. Thus, there is a fair chance that the
atomic add+dec happen on the same CPU.

A note on the revert: commit 1d6119baf061 ("net: fix percpu memory
leaks") removed init_frag_mem_limit() and instead used
inet_frags_init_net(). After this revert, inet_frags_uninit_net()
becomes empty.

Fixes: 6d7b857d541e ("net: use lib/percpu_counter API for fragmentation mem accounting")
Fixes: 1d6119baf061 ("net: fix percpu memory leaks")
Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
Acked-by: Florian Westphal <fw@...len.de>
Signed-off-by: David S. Miller <davem@...emloft.net>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 include/net/inet_frag.h  | 30 +++++++++---------------------
 net/ipv4/inet_fragment.c |  4 +---
 2 files changed, 10 insertions(+), 24 deletions(-)

--- a/include/net/inet_frag.h
+++ b/include/net/inet_frag.h
@@ -1,18 +1,13 @@
#ifndef __NET_FRAG_H__
#define __NET_FRAG_H__
-#include <linux/percpu_counter.h>
-
struct netns_frags {
int nqueues;
struct list_head lru_list;
spinlock_t lru_lock;
- /* The percpu_counter "mem" need to be cacheline aligned.
- * mem.count must not share cacheline with other writers
- */
- struct percpu_counter mem ____cacheline_aligned_in_smp;
-
+ /* Keep atomic mem on separate cachelines in structs that include it */
+ atomic_t mem ____cacheline_aligned_in_smp;
/* sysctls */
int timeout;
int high_thresh;
@@ -104,42 +99,29 @@ static inline void inet_frag_put(struct
/* Memory Tracking Functions. */
-/* The default percpu_counter batch size is not big enough to scale to
- * fragmentation mem acct sizes.
- * The mem size of a 64K fragment is approx:
- * (44 fragments * 2944 truesize) + frag_queue struct(200) = 129736 bytes
- */
-static unsigned int frag_percpu_counter_batch = 130000;
-
static inline int frag_mem_limit(struct netns_frags *nf)
{
- return percpu_counter_read(&nf->mem);
+ return atomic_read(&nf->mem);
}
static inline void sub_frag_mem_limit(struct inet_frag_queue *q, int i)
{
- __percpu_counter_add(&q->net->mem, -i, frag_percpu_counter_batch);
+ atomic_sub(i, &q->net->mem);
}
static inline void add_frag_mem_limit(struct inet_frag_queue *q, int i)
{
- __percpu_counter_add(&q->net->mem, i, frag_percpu_counter_batch);
+ atomic_add(i, &q->net->mem);
}
static inline void init_frag_mem_limit(struct netns_frags *nf)
{
- percpu_counter_init(&nf->mem, 0);
+ atomic_set(&nf->mem, 0);
}
static inline int sum_frag_mem_limit(struct netns_frags *nf)
{
- int res;
-
- local_bh_disable();
- res = percpu_counter_sum_positive(&nf->mem);
- local_bh_enable();
-
- return res;
+ return atomic_read(&nf->mem);
}
static inline void inet_frag_lru_move(struct inet_frag_queue *q)
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -122,8 +122,6 @@ void inet_frags_exit_net(struct netns_fr
local_bh_disable();
inet_frag_evictor(nf, f, true);
local_bh_enable();
-
- percpu_counter_destroy(&nf->mem);
}
EXPORT_SYMBOL(inet_frags_exit_net);