Message-ID: <4EBA7038.4050702@mt.lv>
Date: Wed, 09 Nov 2011 14:21:12 +0200
From: Maris Paupe <marisp@...lv>
To: netdev@...r.kernel.org
Subject: [PATCH] flow_cache_flush soft lockup with heavy ipsec traffic
During IPsec packet processing flow_cache_flush() may get called; it schedules flow_cache_gc_task(). flow_cache_flush() is guarded by a mutex and waits until all per-CPU flush tasklets have finished before releasing it. Another softirq may fire while flow_cache_gc_task() is running; if that softirq is packet input from a device, flow_cache_flush() can be called again and a deadlock occurs.
Here I propose a simple fix for this problem: disable softirqs while the GC work runs. It could also be fixed in the IPsec processing code, but I am too unfamiliar with it to touch that.
Signed-off-by: Maris Paupe <marisp@...lv>
diff --git a/net/core/flow.c b/net/core/flow.c
index 8ae42de..19ff283 100644
--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -105,6 +105,7 @@ static void flow_cache_gc_task(struct work_struct *work)
 	struct list_head gc_list;
 	struct flow_cache_entry *fce, *n;
 
+	local_bh_disable();
 	INIT_LIST_HEAD(&gc_list);
 	spin_lock_bh(&flow_cache_gc_lock);
 	list_splice_tail_init(&flow_cache_gc_list, &gc_list);
@@ -112,6 +113,7 @@ static void flow_cache_gc_task(struct work_struct *work)
 
 	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list)
 		flow_entry_kill(fce);
+	local_bh_enable();
 }
 
 static DECLARE_WORK(flow_cache_gc_work, flow_cache_gc_task);
--