Date:	Wed, 09 Nov 2011 14:16:04 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Maris Paupe <marisp@...lv>
Cc:	netdev@...r.kernel.org
Subject: Re: [PATCH]  flow_cache_flush soft lockup with heavy ipsec traffic

On Wednesday, 09 November 2011 at 14:21 +0200, Maris Paupe wrote:
> During ipsec packet processing flow_cache_flush() may get called, which
> schedules flow_cache_gc_task(); flow_cache_flush() is guarded by a mutex
> and waits until all tasklets are finished before releasing it. Another
> softirq may happen while flow_cache_gc_task() runs; if that softirq is
> reading packets from a device, flow_cache_flush() can get called again
> and a deadlock occurs.
> Here I propose a simple fix for this problem: disable softirqs while the
> garbage-collection work runs. It could also be fixed in the ipsec
> processing code, but I am too unfamiliar with it to touch it.
> 
> Signed-off-by: Maris Paupe <marisp@...lv>
> 
> diff --git a/net/core/flow.c b/net/core/flow.c
> index 8ae42de..19ff283 100644
> --- a/net/core/flow.c
> +++ b/net/core/flow.c
> @@ -105,6 +105,7 @@ static void flow_cache_gc_task(struct work_struct *work)
>   	struct list_head gc_list;
>   	struct flow_cache_entry *fce, *n;
> 
> +	local_bh_disable();
>   	INIT_LIST_HEAD(&gc_list);
>   	spin_lock_bh(&flow_cache_gc_lock);
>   	list_splice_tail_init(&flow_cache_gc_list, &gc_list);
> @@ -112,6 +113,7 @@ static void flow_cache_gc_task(struct work_struct *work)
> 
>   	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list)
>   		flow_entry_kill(fce);
> +	local_bh_enable();
>   }

Sorry, I don't understand your patch.

BHs are disabled by the spin_lock_bh() call.
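(Conceptually, ignoring lockdep/preemption details, the _bh spinlock
variants behave roughly like this:)

	spin_lock_bh(lock);    /* ~ local_bh_disable(); spin_lock(lock);   */
	/* ... critical section runs with BHs off ... */
	spin_unlock_bh(lock);  /* ~ spin_unlock(lock);  local_bh_enable(); */

So the list splice in flow_cache_gc_task() already runs with BHs
disabled, and the extra local_bh_disable() in your patch only changes
the flow_entry_kill() loop after the unlock.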

Once flow_cache_entry objects are on the garbage list, nothing but the
garbage collector can access them. I see no possible deadlock. Or there
is a bug somewhere and your patch avoids it.

The whole point of using a work queue for the garbage collection was to
not hold BHs disabled for too long (allowing softirqs to process incoming
packets), so you basically undo what was done in commit 8e4795605d.
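For reference, the split that commit introduced looks roughly like this
(a simplified sketch from memory, not a verbatim copy of net/core/flow.c):

	/* softirq/tasklet side: only queue the dead entries and kick the
	 * work queue; the actual freeing is deferred. */
	static void flow_cache_queue_garbage(struct flow_cache_percpu *fcp,
					     int deleted,
					     struct list_head *gc_list)
	{
		if (deleted) {
			fcp->hash_count -= deleted;
			spin_lock_bh(&flow_cache_gc_lock);
			list_splice_tail(gc_list, &flow_cache_gc_list);
			spin_unlock_bh(&flow_cache_gc_lock);
			schedule_work(&flow_cache_gc_work);
		}
	}

	/* process-context side: flow_cache_gc_task() (quoted above) splices
	 * the global list and frees the entries with BHs enabled, so
	 * softirqs can keep processing packets while it runs. */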

Could you explain the problem you have? Any stack trace or something?



