Message-ID: <20090729231735.GB14066@gondor.apana.org.au>
Date: Thu, 30 Jul 2009 07:17:35 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: Neil Horman <nhorman@...driver.com>
Cc: Matt Mackall <mpm@...enic.com>,
"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
Matt Carlson <mcarlson@...adcom.com>
Subject: Re: netpoll + xmit_lock == deadlock
On Wed, Jul 29, 2009 at 07:15:17PM -0400, Neil Horman wrote:
>
> Not quite. I agree private locking in a driver is a pain when you consider
> netpoll clients, but it's not the tx/tx recursion you need to worry about;
> it's shared locking between the tx and rx paths. We should be protected
> against deadlock on the _xmit_lock from what we discussed above, but if you
> take a lock in the driver and then call printk, it's possible you'll go down
> the driver's ->poll routine. If you then try to take the same private lock
> there, the result is deadlock.
xmit_lock suffers from exactly the same problem in ->poll.
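
To make the recursion concrete, here is a minimal sketch; the "foo" driver,
its private lock and the error printk are hypothetical, but the call chain is
the one described above (printk -> netconsole -> netpoll_send_skb ->
netpoll_poll -> driver ->poll):

	#include <linux/netdevice.h>
	#include <linux/spinlock.h>

	struct foo_priv {
		spinlock_t lock;	/* private lock shared by tx and rx */
		struct napi_struct napi;
	};

	static int foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct foo_priv *fp = netdev_priv(dev);

		spin_lock(&fp->lock);		/* (A) lock taken in the tx path */
		printk(KERN_ERR "foo: tx error\n");	/* with netconsole this
							 * re-enters netpoll and
							 * runs foo_poll() */
		spin_unlock(&fp->lock);
		return NETDEV_TX_OK;
	}

	static int foo_poll(struct napi_struct *napi, int budget)
	{
		struct foo_priv *fp = container_of(napi, struct foo_priv, napi);

		spin_lock(&fp->lock);		/* (B) same lock as (A): deadlock
						 * when entered from inside (A) */
		/* ... clean the tx ring, process rx ... */
		spin_unlock(&fp->lock);
		return 0;
	}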
> I was thinking that perhaps what we should do is simply not call netpoll_poll
> from within netpoll_send_skb. That removes the only spot I see in which we
> call receive code from within the tx path, eliminating the deadlock
> possibility. Perhaps instead we can call netif_rx_schedule on the network
> interface's napi struct. We already queue the frames and set a timer to try
> sending again later. By calling netif_rx_schedule, we move the receive work
> to the net_rx_action softirq (where it really should be).
>
> Thoughts?
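
A rough sketch of that proposal, at the point where netpoll_send_skb currently
drains the device. This is not a patch: the helper name and the foo_priv
lookup are hypothetical (generic netpoll code has no direct handle on a
driver's napi_struct), and napi_schedule() stands in for the netif_rx_schedule
mentioned above:

	static void netpoll_tx_busy(struct netpoll *np)
	{
		struct foo_priv *fp = netdev_priv(np->dev);

		/* today: netpoll_poll(np) runs the driver's ->poll right here,
		 * in the tx context -- the source of the tx/rx lock recursion */
		napi_schedule(&fp->napi);	/* defer ->poll to net_rx_action */

		/* the skb is already queued and a retry is armed (as noted in
		 * the quoted text), so the transmit is simply tried again later */
	}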
Alternatively, we can modify the drivers to use trylock or other
mechanisms that do not result in a deadlock.
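
For example, a driver's ->poll could back off with a trylock instead of
spinning; a sketch reusing the hypothetical foo driver from above:

	static int foo_poll(struct napi_struct *napi, int budget)
	{
		struct foo_priv *fp = container_of(napi, struct foo_priv, napi);

		if (!spin_trylock(&fp->lock)) {
			/* lock already held (e.g. re-entered from the tx path
			 * via netpoll); bail out instead of spinning forever.
			 * NAPI is not completed, so we get polled again later. */
			return 0;
		}
		/* ... clean the tx ring, process rx up to budget ... */
		spin_unlock(&fp->lock);
		return 0;
	}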
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt