Message-ID: <45D382B7.9020901@candelatech.com>
Date: Wed, 14 Feb 2007 13:44:23 -0800
From: Ben Greear <greearb@...delatech.com>
To: Stephen Hemminger <shemminger@...ux-foundation.org>
CC: Francois Romieu <romieu@...zoreil.com>, netdev@...r.kernel.org,
Kyle Lucke <klucke@...ibm.com>,
Raghavendra Koushik <raghavendra.koushik@...erion.com>,
Al Viro <viro@....linux.org.uk>
Subject: Re: [BUG] RTNL and flush_scheduled_work deadlocks

Stephen Hemminger wrote:
> Ben found this but the problem seems pretty widespread.
>
> The following places are subject to deadlock between flush_scheduled_work
> and the RTNL mutex. What can happen is that a work queue routine (like
> bridge port_carrier_check) is waiting forever for RTNL, and the driver
> routine has called flush_scheduled_work with RTNL held and is waiting
> for the work queue to clear.
>
> Several other places have comments like: "can't call flush_scheduled_work
> here or it will deadlock". Most of the problem places are in device close
> routines. My recommendation would be to add a check for netif_running in
> whatever work routine is used, and move the flush_scheduled_work to the
> remove routine.

I seem to be able to trigger this within about 1 minute on a
particular 2.6.18.2 system with some 8139too devices, so if someone
has a patch that could be tested, I'll gladly test it.  For
whatever reason, I haven't hit this problem on 2.6.20 yet, but
that could easily be dumb luck, and I haven't been running .20
very much.

To add to the list below, tg3 has this problem as well, as far as I
can tell by looking at the code.

Thanks,
Ben
>
> 8139too.c: rtl8139_close --> rtl8139_stop_thread
> r8169.c: rtl8169_down
> cassini.c: cas_change_mtu
> iseries_veth.c: veth_stop_connection
> s2io.c: s2io_close
> sis190.c: sis190_down
>
--
Ben Greear <greearb@...delatech.com>
Candela Technologies Inc http://www.candelatech.com
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html