Message-ID: <OF8B3BDDEC.3F73CFD4-ON6525735C.001C8A0F-6525735C.001D2340@in.ibm.com>
Date: Thu, 20 Sep 2007 10:48:15 +0530
From: Krishna Kumar2 <krkumar2@...ibm.com>
To: David Miller <davem@...emloft.net>
Cc: general@...ts.openfabrics.org, netdev@...r.kernel.org,
rdreier@...co.com
Subject: Re: [Bug, PATCH and another Bug] Was: Fix refcounting problem with netif_rx_reschedule()
Hi Dave,
David Miller <davem@...emloft.net> wrote on 09/19/2007 09:35:57 PM:
> The NAPI_STATE_SCHED flag bit should provide all of the necessary
> synchronization.
>
> Only the setter of that bit should add the NAPI instance to the
> polling list.
>
> The polling loop runs atomically on the cpu where the NAPI instance
> got added to the per-cpu polling list. And therefore decisions to
> complete NAPI are serialized too.
>
> That serialized completion decision is also when the list deletion
> occurs.
About the "list deletion occurs": isn't the race I mentioned still present?
If done < budget, the driver does netif_rx_complete (at which point some
other cpu can add this NAPI to its list). But the first cpu might still
perform some more actions on the napi, e.g. ipoib_poll() calls
request_notify_cq(priv->cq), while the other cpu may already have started
using this napi.
(net_rx_action's 'list_move' will not execute, however, since work != weight.)
Thanks,
- KK