Message-ID: <5321D265.8070907@citrix.com>
Date: Thu, 13 Mar 2014 15:44:37 +0000
From: Zoltan Kiss <zoltan.kiss@...rix.com>
To: Ian Campbell <Ian.Campbell@...rix.com>,
Wei Liu <wei.liu2@...rix.com>
CC: <xen-devel@...ts.xenproject.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <jonathan.davies@...rix.com>
Subject: Re: [PATCH net-next] xen-netback: Schedule NAPI from dealloc thread instead of callback
On 13/03/14 10:42, Ian Campbell wrote:
> On Thu, 2014-03-13 at 10:17 +0000, Wei Liu wrote:
>> On Wed, Mar 12, 2014 at 09:04:41PM +0000, Zoltan Kiss wrote:
>>> If there are unconsumed requests in the ring but there aren't enough free
>>> pending slots, the NAPI instance deschedules itself. As the frontend won't
>>> send any more interrupts in this case, it is the task of whoever releases
>>> the pending slots to schedule the NAPI instance again. Originally this was
>>> done in the callback, but it is better to do it at the end of the dealloc
>>> thread; otherwise there is a risk that the NAPI instance just deschedules
>>> itself again because the dealloc thread hasn't released any used slots yet.
>>> However, when there are a lot of pending packets, NAPI will be scheduled
>>> again, and it is very unlikely that the dealloc thread can't release enough
>>> slots in the meantime.
>>>
>>
>> So this patch restores the property that "only two parties access the
>> ring", right?
>
> I think so, and therefore:
> Acked-by: Ian Campbell <ian.campbell@...rix.com>
I've just discussed this with Ian: this solution still doesn't solve the race
between NAPI and the dealloc thread, but it only causes some unnecessary
napi_schedule calls, not ring stalls.
There is a better solution; I'll post it as soon as it passes all the tests!
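
For reference, the approach of this patch (re-scheduling NAPI from the end of
the dealloc thread, once the pending slots have actually been released) boils
down to something like the sketch below. The struct and helper names
(demo_vif, dealloc_work_todo(), release_pending_slots(),
ring_has_unconsumed_requests()) are made up for illustration and are not the
actual xen-netback code:

/*
 * Minimal sketch, not the actual patch: schedule NAPI from the dealloc
 * thread after pending slots have been released, instead of from the
 * callback.  Names below are illustrative only.
 */
#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/wait.h>

struct demo_vif {
	struct napi_struct napi;	/* NAPI instance polling the TX ring */
	wait_queue_head_t dealloc_wq;	/* woken when there is dealloc work */
};

/* Hypothetical helpers standing in for the real driver logic. */
static bool dealloc_work_todo(struct demo_vif *vif)
{
	return true;	/* placeholder: unmapped-but-unreleased slots exist? */
}

static void release_pending_slots(struct demo_vif *vif)
{
	/* placeholder: unmap grants, return slots to the free list */
}

static bool ring_has_unconsumed_requests(struct demo_vif *vif)
{
	return true;	/* placeholder: check the shared TX ring */
}

static int demo_dealloc_kthread(void *data)
{
	struct demo_vif *vif = data;

	while (!kthread_should_stop()) {
		wait_event_interruptible(vif->dealloc_wq,
					 dealloc_work_todo(vif) ||
					 kthread_should_stop());
		if (kthread_should_stop())
			break;

		/* Unmap finished grants and give back their pending slots. */
		release_pending_slots(vif);

		/*
		 * Re-schedule NAPI here, after the slots have actually been
		 * released.  If NAPI descheduled itself earlier for lack of
		 * free pending slots, it can now make progress.
		 */
		if (ring_has_unconsumed_requests(vif))
			napi_schedule(&vif->napi);
	}

	return 0;
}

The point is that napi_schedule() only runs after slots are genuinely free, so
the woken NAPI instance can't immediately deschedule itself again for lack of
pending slots.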
Zoli