Message-Id: <1369201832-17163-3-git-send-email-horms@verge.net.au>
Date: Wed, 22 May 2013 14:50:32 +0900
From: Simon Horman <horms@...ge.net.au>
To: Eric Dumazet <eric.dumazet@...il.com>,
Julian Anastasov <ja@....bg>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: lvs-devel@...r.kernel.org, netdev@...r.kernel.org,
netfilter-devel@...r.kernel.org, linux-kernel@...r.kernel.org,
Pablo Neira Ayuso <pablo@...filter.org>,
Dipankar Sarma <dipankar@...ibm.com>,
Simon Horman <horms@...ge.net.au>
Subject: [PATCH ipvs-next v3 2/2] ipvs: use cond_resched_rcu() helper when walking connections
This avoids the situation where walking a large number of connections
may prevent scheduling for a long time, while also avoiding excessive
calls to rcu_read_unlock() and rcu_read_lock().

Note that in the !CONFIG_PREEMPT_RCU case this will add a call to
cond_resched().
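
For reference, a sketch of the cond_resched_rcu() helper, presumably
added by patch 1/2 of this series; the exact #ifdef conditions in
include/linux/sched.h may differ from this sketch:

static inline void cond_resched_rcu(void)
{
#if defined(CONFIG_DEBUG_ATOMIC_SLEEP) || !defined(CONFIG_PREEMPT_RCU)
	/* Briefly leave the RCU read-side critical section so the
	 * scheduler can run, then re-enter it before returning.
	 */
	rcu_read_unlock();
	cond_resched();
	rcu_read_lock();
#endif
}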
[ja@....bg: add cond_resched_rcu to more places]
Signed-off-by: Julian Anastasov <ja@....bg>
Signed-off-by: Simon Horman <horms@...ge.net.au>
---
net/netfilter/ipvs/ip_vs_conn.c | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
index a083bda..c8c52a9 100644
--- a/net/netfilter/ipvs/ip_vs_conn.c
+++ b/net/netfilter/ipvs/ip_vs_conn.c
@@ -975,8 +975,7 @@ static void *ip_vs_conn_array(struct seq_file *seq, loff_t pos)
return cp;
}
}
- rcu_read_unlock();
- rcu_read_lock();
+ cond_resched_rcu();
}
return NULL;
@@ -1015,8 +1014,7 @@ static void *ip_vs_conn_seq_next(struct seq_file *seq, void *v, loff_t *pos)
iter->l = &ip_vs_conn_tab[idx];
return cp;
}
- rcu_read_unlock();
- rcu_read_lock();
+ cond_resched_rcu();
}
iter->l = NULL;
return NULL;
@@ -1206,17 +1204,13 @@ void ip_vs_random_dropentry(struct net *net)
int idx;
struct ip_vs_conn *cp, *cp_c;
+ rcu_read_lock();
/*
* Randomly scan 1/32 of the whole table every second
*/
for (idx = 0; idx < (ip_vs_conn_tab_size>>5); idx++) {
unsigned int hash = net_random() & ip_vs_conn_tab_mask;
- /*
- * Lock is actually needed in this loop.
- */
- rcu_read_lock();
-
hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[hash], c_list) {
if (cp->flags & IP_VS_CONN_F_TEMPLATE)
/* connection template */
@@ -1252,8 +1246,9 @@ void ip_vs_random_dropentry(struct net *net)
__ip_vs_conn_put(cp);
}
}
- rcu_read_unlock();
+ cond_resched_rcu();
}
+ rcu_read_unlock();
}
@@ -1267,11 +1262,8 @@ static void ip_vs_conn_flush(struct net *net)
struct netns_ipvs *ipvs = net_ipvs(net);
flush_again:
+ rcu_read_lock();
for (idx = 0; idx < ip_vs_conn_tab_size; idx++) {
- /*
- * Lock is actually needed in this loop.
- */
- rcu_read_lock();
hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[idx], c_list) {
if (!ip_vs_conn_net_eq(cp, net))
@@ -1286,8 +1278,9 @@ flush_again:
__ip_vs_conn_put(cp);
}
}
- rcu_read_unlock();
+ cond_resched_rcu();
}
+ rcu_read_unlock();
/* the counter may be not NULL, because maybe some conn entries
are run by slow timer handler or unhashed but still referred */
--
1.8.2.1
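
For context, after this patch the table walks follow the pattern
sketched below. This is illustrative only, simplified from the
ip_vs_conn_flush() hunk in the diff above, with the per-connection
work omitted:

	rcu_read_lock();
	for (idx = 0; idx < ip_vs_conn_tab_size; idx++) {
		hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[idx], c_list) {
			/* per-connection work, still under the RCU
			 * read lock taken outside the loop
			 */
		}
		/* Between buckets: may drop the RCU read lock, call
		 * cond_resched() and re-acquire the lock, depending on
		 * the configuration (see the helper sketch above).
		 */
		cond_resched_rcu();
	}
	rcu_read_unlock();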