Message-ID: <53E8BBCB.50308@citrix.com>
Date: Mon, 11 Aug 2014 13:49:15 +0100
From: Zoltan Kiss <zoltan.kiss@...rix.com>
To: David Vrabel <david.vrabel@...rix.com>,
Wei Liu <wei.liu2@...rix.com>, <netdev@...r.kernel.org>,
<xen-devel@...ts.xen.org>
CC: Ian Campbell <ian.campbell@...rix.com>
Subject: Re: [Xen-devel] [PATCH net v2 1/3] xen-netback: move NAPI add/remove
calls
On 11/08/14 13:35, David Vrabel wrote:
> On 08/08/14 17:37, Wei Liu wrote:
>> Originally napi_add was in init_queue and napi_del was in deinit_queue,
>> while kthreads were handled in _connect and _disconnect. Move napi_add
>> and napi_remove to _connect and _disconnect so that they reside together
>> with the kthread operations.
>>
>> Signed-off-by: Wei Liu <wei.liu2@...rix.com>
>> Cc: Ian Campbell <ian.campbell@...rix.com>
>> Cc: Zoltan Kiss <zoltan.kiss@...rix.com>
>> ---
>> drivers/net/xen-netback/interface.c | 12 ++++++++----
>> 1 file changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index 48a55cd..fdb4fca 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -528,9 +528,6 @@ int xenvif_init_queue(struct xenvif_queue *queue)
>>
>> init_timer(&queue->rx_stalled);
>>
>> - netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>> - XENVIF_NAPI_WEIGHT);
>> -
>> return 0;
>> }
>>
>> @@ -618,6 +615,9 @@ int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>> wake_up_process(queue->task);
>> wake_up_process(queue->dealloc_task);
>>
>> + netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>> + XENVIF_NAPI_WEIGHT);
>> +
>> return 0;
>>
>> err_rx_unbind:
>> @@ -675,6 +675,11 @@ void xenvif_disconnect(struct xenvif *vif)
>>
>> for (queue_index = 0; queue_index < num_queues; ++queue_index) {
>> queue = &vif->queues[queue_index];
>> + netif_napi_del(&queue->napi);
>> + }
>
> Why have you added an additional loop over all the queues? The ordering
> looks wrong as well. I think you want
>
> 1. unbind from irqhandler
> 2. napi del
> 3. stop task
> 4. stop dealloc task
> 5. unmap frontend rings.
And that's how they are ordered. The idea of having netif_napi_del in a
separate loop came from me: it seemed more efficient to start tearing
down all the NAPI instances first, so that by the time we stop the
dealloc thread it has likely already done most of its work.
But I've now realized that netif_napi_del just deletes the instance from
a list; the real work happens in xenvif_carrier_off: xenvif_down calls
napi_disable on all queues and waits until all outstanding work is done.
So it no longer makes sense to keep netif_napi_del in a separate loop.
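Concretely, the merged per-queue teardown could look something like the
sketch below (untested, just to illustrate the ordering; field and
helper names are taken from the surrounding patch context and may not
match the tree exactly):

```c
/* Sketch only: per-queue teardown in xenvif_disconnect() with
 * netif_napi_del() folded back into the single loop, following
 * the ordering discussed above.
 */
for (queue_index = 0; queue_index < num_queues; ++queue_index) {
	queue = &vif->queues[queue_index];

	/* 1-2: unbind from the irq handler, then remove NAPI */
	unbind_from_irqhandler(queue->tx_irq, queue);
	netif_napi_del(&queue->napi);

	/* 3-4: stop the rx and dealloc kthreads */
	if (queue->task)
		kthread_stop(queue->task);
	if (queue->dealloc_task)
		kthread_stop(queue->dealloc_task);

	/* 5: finally unmap the frontend rings */
	xenvif_unmap_frontend_rings(queue);
}
```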
>
> David
>