Message-Id: <1525990182-12042-1-git-send-email-gfree.wind@vip.163.com>
Date: Fri, 11 May 2018 06:09:42 +0800
From: gfree.wind@....163.com
To: davem@...emloft.net, daniel@...earbox.net, edumazet@...gle.com,
willemb@...gle.com, jakub.kicinski@...ronome.com,
ktkhai@...tuozzo.com, ast@...nel.org, linux@...musvillemoes.dk,
john.fastabend@...il.com, brouer@...hat.com, dsahern@...il.com,
netdev@...r.kernel.org
Cc: Gao Feng <gfree.wind@....163.com>
Subject: [PATCH net v2] rps: Correct the skb_flow_limit check when RPS is enabled
From: Gao Feng <gfree.wind@....163.com>

The skb flow limit is implemented independently for each CPU. In the
current code, skb_flow_limit() looks up the softnet_data with
this_cpu_ptr(), but when RPS is enabled the target CPU of
enqueue_to_backlog() may not be the current CPU. As a result,
skb_flow_limit() checks the flow state of the current CPU while the skb
is appended to the backlog queue of another CPU, which is not the
expected behavior.

Pass the target softnet_data as a parameter so that the check is
performed against the CPU whose backlog queue actually receives the skb.
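For illustration, a condensed sketch (paraphrased from the existing
net/core/dev.c code, not part of this patch) of the RPS receive path
shows how the two CPUs can differ:

	/* netif_receive_skb_internal(), with RPS enabled:
	 * get_rps_cpu() may select a remote CPU for this flow.
	 */
	cpu = get_rps_cpu(skb->dev, skb, &rflow);
	if (cpu >= 0)
		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);

	/* enqueue_to_backlog() then queues onto that CPU's backlog ... */
	sd = &per_cpu(softnet_data, cpu);
	qlen = skb_queue_len(&sd->input_pkt_queue);
	/* ... while skb_flow_limit(), before this patch, consulted
	 * this_cpu_ptr(&softnet_data), i.e. the flow state of the
	 * local CPU rather than the target CPU.
	 */
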
Fixes: 99bbc7074190 ("rps: selective flow shedding during softnet overflow")
Signed-off-by: Gao Feng <gfree.wind@....163.com>
---
v2: Add Fixes tag per Eric, and enhance the commit log
v1: initial version
 net/core/dev.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index af0558b..0f98eff 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3883,18 +3883,15 @@ static int rps_ipi_queued(struct softnet_data *sd)
 int netdev_flow_limit_table_len __read_mostly = (1 << 12);
 #endif
 
-static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen)
+static bool skb_flow_limit(struct softnet_data *sd, struct sk_buff *skb, unsigned int qlen)
 {
 #ifdef CONFIG_NET_FLOW_LIMIT
 	struct sd_flow_limit *fl;
-	struct softnet_data *sd;
 	unsigned int old_flow, new_flow;
 
 	if (qlen < (netdev_max_backlog >> 1))
 		return false;
 
-	sd = this_cpu_ptr(&softnet_data);
-
 	rcu_read_lock();
 	fl = rcu_dereference(sd->flow_limit);
 	if (fl) {
@@ -3938,7 +3935,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 	if (!netif_running(skb->dev))
 		goto drop;
 	qlen = skb_queue_len(&sd->input_pkt_queue);
-	if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) {
+	if (qlen <= netdev_max_backlog && !skb_flow_limit(sd, skb, qlen)) {
 		if (qlen) {
 enqueue:
 			__skb_queue_tail(&sd->input_pkt_queue, skb);
--
1.9.1